Dec 05 06:41:23 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Dec 05 06:41:23 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 05 06:41:23 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 05 06:41:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 05 06:41:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 05 06:41:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 05 06:41:23 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 05 06:41:23 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 05 06:41:23 localhost kernel: signal: max sigframe size: 1776
Dec 05 06:41:23 localhost kernel: BIOS-provided physical RAM map:
Dec 05 06:41:23 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 05 06:41:23 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 05 06:41:23 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 05 06:41:23 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 05 06:41:23 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 05 06:41:23 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 05 06:41:23 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 05 06:41:23 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Dec 05 06:41:23 localhost kernel: NX (Execute Disable) protection: active
Dec 05 06:41:23 localhost kernel: SMBIOS 2.8 present.
Dec 05 06:41:23 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 05 06:41:23 localhost kernel: Hypervisor detected: KVM
Dec 05 06:41:23 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 05 06:41:23 localhost kernel: kvm-clock: using sched offset of 1876819760 cycles
Dec 05 06:41:23 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 05 06:41:23 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 05 06:41:23 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 05 06:41:23 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 05 06:41:23 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Dec 05 06:41:23 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 05 06:41:23 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 05 06:41:23 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 05 06:41:23 localhost kernel: Using GB pages for direct mapping
Dec 05 06:41:23 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Dec 05 06:41:23 localhost kernel: ACPI: Early table checksum verification disabled
Dec 05 06:41:23 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 05 06:41:23 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 06:41:23 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 06:41:23 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 06:41:23 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 05 06:41:23 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 06:41:23 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 06:41:23 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 05 06:41:23 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 05 06:41:23 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 05 06:41:23 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 05 06:41:23 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 05 06:41:23 localhost kernel: No NUMA configuration found
Dec 05 06:41:23 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Dec 05 06:41:23 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Dec 05 06:41:23 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Dec 05 06:41:23 localhost kernel: Zone ranges:
Dec 05 06:41:23 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 05 06:41:23 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 05 06:41:23 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000043fffffff]
Dec 05 06:41:23 localhost kernel:   Device   empty
Dec 05 06:41:23 localhost kernel: Movable zone start for each node
Dec 05 06:41:23 localhost kernel: Early memory node ranges
Dec 05 06:41:23 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 05 06:41:23 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 05 06:41:23 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000043fffffff]
Dec 05 06:41:23 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Dec 05 06:41:23 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 05 06:41:23 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 05 06:41:23 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 05 06:41:23 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 05 06:41:23 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 05 06:41:23 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 05 06:41:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 05 06:41:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 05 06:41:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 05 06:41:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 05 06:41:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 05 06:41:23 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 05 06:41:23 localhost kernel: TSC deadline timer available
Dec 05 06:41:23 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Dec 05 06:41:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 05 06:41:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 05 06:41:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 05 06:41:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 05 06:41:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 05 06:41:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 05 06:41:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 05 06:41:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 05 06:41:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 05 06:41:23 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 05 06:41:23 localhost kernel: Booting paravirtualized kernel on KVM
Dec 05 06:41:23 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 05 06:41:23 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 05 06:41:23 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Dec 05 06:41:23 localhost kernel: pcpu-alloc: s188416 r8192 d28672 u262144 alloc=1*2097152
Dec 05 06:41:23 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 05 06:41:23 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 05 06:41:23 localhost kernel: Fallback order for Node 0: 0 
Dec 05 06:41:23 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 4128475
Dec 05 06:41:23 localhost kernel: Policy zone: Normal
Dec 05 06:41:23 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 05 06:41:23 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Dec 05 06:41:23 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 05 06:41:23 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 05 06:41:23 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 05 06:41:23 localhost kernel: software IO TLB: area num 8.
Dec 05 06:41:23 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Dec 05 06:41:23 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Dec 05 06:41:23 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 05 06:41:23 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Dec 05 06:41:23 localhost kernel: ftrace: allocated 176 pages with 3 groups
Dec 05 06:41:23 localhost kernel: Dynamic Preempt: voluntary
Dec 05 06:41:23 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 05 06:41:23 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 05 06:41:23 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 05 06:41:23 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 05 06:41:23 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 05 06:41:23 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 05 06:41:23 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 05 06:41:23 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 05 06:41:23 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 05 06:41:23 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 05 06:41:23 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Dec 05 06:41:23 localhost kernel: Console: colour VGA+ 80x25
Dec 05 06:41:23 localhost kernel: printk: console [tty0] enabled
Dec 05 06:41:23 localhost kernel: printk: console [ttyS0] enabled
Dec 05 06:41:23 localhost kernel: ACPI: Core revision 20211217
Dec 05 06:41:23 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 05 06:41:23 localhost kernel: x2apic enabled
Dec 05 06:41:23 localhost kernel: Switched APIC routing to physical x2apic.
Dec 05 06:41:23 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 05 06:41:23 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 05 06:41:23 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 05 06:41:23 localhost kernel: LSM: Security Framework initializing
Dec 05 06:41:23 localhost kernel: Yama: becoming mindful.
Dec 05 06:41:23 localhost kernel: SELinux:  Initializing.
Dec 05 06:41:23 localhost kernel: LSM support for eBPF active
Dec 05 06:41:23 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 05 06:41:23 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 05 06:41:23 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 05 06:41:23 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 05 06:41:23 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 05 06:41:23 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 05 06:41:23 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 05 06:41:23 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 05 06:41:23 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 05 06:41:23 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 05 06:41:23 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 05 06:41:23 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 05 06:41:23 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 05 06:41:23 localhost kernel: Freeing SMP alternatives memory: 36K
Dec 05 06:41:23 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 05 06:41:23 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Dec 05 06:41:23 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 05 06:41:23 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 05 06:41:23 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 05 06:41:23 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 05 06:41:23 localhost kernel: ... version:                0
Dec 05 06:41:23 localhost kernel: ... bit width:              48
Dec 05 06:41:23 localhost kernel: ... generic registers:      6
Dec 05 06:41:23 localhost kernel: ... value mask:             0000ffffffffffff
Dec 05 06:41:23 localhost kernel: ... max period:             00007fffffffffff
Dec 05 06:41:23 localhost kernel: ... fixed-purpose events:   0
Dec 05 06:41:23 localhost kernel: ... event mask:             000000000000003f
Dec 05 06:41:23 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 05 06:41:23 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 05 06:41:23 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 05 06:41:23 localhost kernel: x86: Booting SMP configuration:
Dec 05 06:41:23 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 05 06:41:23 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 05 06:41:23 localhost kernel: smpboot: Max logical packages: 8
Dec 05 06:41:23 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 05 06:41:23 localhost kernel: node 0 deferred pages initialised in 25ms
Dec 05 06:41:23 localhost kernel: devtmpfs: initialized
Dec 05 06:41:23 localhost kernel: x86/mm: Memory block size: 128MB
Dec 05 06:41:23 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 05 06:41:23 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec 05 06:41:23 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 05 06:41:23 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 05 06:41:23 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 05 06:41:23 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 05 06:41:23 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 05 06:41:23 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 05 06:41:23 localhost kernel: audit: type=2000 audit(1764916882.321:1): state=initialized audit_enabled=0 res=1
Dec 05 06:41:23 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 05 06:41:23 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 05 06:41:23 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 05 06:41:23 localhost kernel: cpuidle: using governor menu
Dec 05 06:41:23 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Dec 05 06:41:23 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 05 06:41:23 localhost kernel: PCI: Using configuration type 1 for base access
Dec 05 06:41:23 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 05 06:41:23 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 05 06:41:23 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Dec 05 06:41:23 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 05 06:41:23 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 05 06:41:23 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 05 06:41:23 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 05 06:41:23 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 05 06:41:23 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 05 06:41:23 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 05 06:41:23 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 05 06:41:23 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 05 06:41:23 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 05 06:41:23 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 05 06:41:23 localhost kernel: ACPI: Interpreter enabled
Dec 05 06:41:23 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 05 06:41:23 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 05 06:41:23 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 05 06:41:23 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 05 06:41:23 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 05 06:41:23 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 05 06:41:23 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [3] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [4] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [5] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [6] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [7] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [8] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [9] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [10] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [11] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [12] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [13] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [14] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [15] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [16] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [17] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [18] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [19] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [20] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [21] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [22] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [23] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [24] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [25] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [26] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [27] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [28] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [29] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [30] registered
Dec 05 06:41:23 localhost kernel: acpiphp: Slot [31] registered
Dec 05 06:41:23 localhost kernel: PCI host bridge to bus 0000:00
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 05 06:41:23 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.2: reg 0x20: [io  0xc100-0xc11f]
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 05 06:41:23 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 05 06:41:23 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 05 06:41:23 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 05 06:41:23 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 05 06:41:23 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 05 06:41:23 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 05 06:41:23 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 05 06:41:23 localhost kernel: pci 0000:00:03.0: reg 0x10: [io  0xc080-0xc0bf]
Dec 05 06:41:23 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 05 06:41:23 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 05 06:41:23 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 05 06:41:23 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 05 06:41:23 localhost kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Dec 05 06:41:23 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 05 06:41:23 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 05 06:41:23 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 05 06:41:23 localhost kernel: pci 0000:00:05.0: reg 0x10: [io  0xc0c0-0xc0ff]
Dec 05 06:41:23 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 05 06:41:23 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 05 06:41:23 localhost kernel: pci 0000:00:06.0: reg 0x10: [io  0xc120-0xc13f]
Dec 05 06:41:23 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 05 06:41:23 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 05 06:41:23 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 05 06:41:23 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 05 06:41:23 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 05 06:41:23 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 05 06:41:23 localhost kernel: iommu: Default domain type: Translated 
Dec 05 06:41:23 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Dec 05 06:41:23 localhost kernel: SCSI subsystem initialized
Dec 05 06:41:23 localhost kernel: ACPI: bus type USB registered
Dec 05 06:41:23 localhost kernel: usbcore: registered new interface driver usbfs
Dec 05 06:41:23 localhost kernel: usbcore: registered new interface driver hub
Dec 05 06:41:23 localhost kernel: usbcore: registered new device driver usb
Dec 05 06:41:23 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 05 06:41:23 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 05 06:41:23 localhost kernel: PTP clock support registered
Dec 05 06:41:23 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 05 06:41:23 localhost kernel: NetLabel: Initializing
Dec 05 06:41:23 localhost kernel: NetLabel:  domain hash size = 128
Dec 05 06:41:23 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 05 06:41:23 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 05 06:41:23 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 05 06:41:23 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 05 06:41:23 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 05 06:41:23 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 05 06:41:23 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 05 06:41:23 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 05 06:41:23 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 05 06:41:23 localhost kernel: vgaarb: loaded
Dec 05 06:41:23 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 05 06:41:23 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 05 06:41:23 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 05 06:41:23 localhost kernel: pnp: PnP ACPI init
Dec 05 06:41:23 localhost kernel: pnp 00:03: [dma 2]
Dec 05 06:41:23 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 05 06:41:23 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 05 06:41:23 localhost kernel: NET: Registered PF_INET protocol family
Dec 05 06:41:23 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 05 06:41:23 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Dec 05 06:41:23 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 05 06:41:23 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 05 06:41:23 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 05 06:41:23 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Dec 05 06:41:23 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Dec 05 06:41:23 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 05 06:41:23 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 05 06:41:23 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 05 06:41:23 localhost kernel: NET: Registered PF_XDP protocol family
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 05 06:41:23 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 05 06:41:23 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 05 06:41:23 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 05 06:41:23 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 27530 usecs
Dec 05 06:41:23 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 05 06:41:23 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 05 06:41:23 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 05 06:41:23 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 05 06:41:23 localhost kernel: ACPI: bus type thunderbolt registered
Dec 05 06:41:23 localhost kernel: Initialise system trusted keyrings
Dec 05 06:41:23 localhost kernel: Key type blacklist registered
Dec 05 06:41:23 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Dec 05 06:41:23 localhost kernel: zbud: loaded
Dec 05 06:41:23 localhost kernel: integrity: Platform Keyring initialized
Dec 05 06:41:23 localhost kernel: NET: Registered PF_ALG protocol family
Dec 05 06:41:23 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 05 06:41:23 localhost kernel: Key type asymmetric registered
Dec 05 06:41:23 localhost kernel: Asymmetric key parser 'x509' registered
Dec 05 06:41:23 localhost kernel: Running certificate verification selftests
Dec 05 06:41:23 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 05 06:41:23 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 05 06:41:23 localhost kernel: io scheduler mq-deadline registered
Dec 05 06:41:23 localhost kernel: io scheduler kyber registered
Dec 05 06:41:23 localhost kernel: io scheduler bfq registered
Dec 05 06:41:23 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 05 06:41:23 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 05 06:41:23 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 05 06:41:23 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 05 06:41:23 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 05 06:41:23 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 05 06:41:23 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 05 06:41:23 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 05 06:41:23 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 05 06:41:23 localhost kernel: Non-volatile memory driver v1.3
Dec 05 06:41:23 localhost kernel: rdac: device handler registered
Dec 05 06:41:23 localhost kernel: hp_sw: device handler registered
Dec 05 06:41:23 localhost kernel: emc: device handler registered
Dec 05 06:41:23 localhost kernel: alua: device handler registered
Dec 05 06:41:23 localhost kernel: libphy: Fixed MDIO Bus: probed
Dec 05 06:41:23 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Dec 05 06:41:23 localhost kernel: ehci-pci: EHCI PCI platform driver
Dec 05 06:41:23 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Dec 05 06:41:23 localhost kernel: ohci-pci: OHCI PCI platform driver
Dec 05 06:41:23 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Dec 05 06:41:23 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 05 06:41:23 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 05 06:41:23 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 05 06:41:23 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 05 06:41:23 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 05 06:41:23 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 05 06:41:23 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 05 06:41:23 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Dec 05 06:41:23 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 05 06:41:23 localhost kernel: hub 1-0:1.0: USB hub found
Dec 05 06:41:23 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 05 06:41:23 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 05 06:41:23 localhost kernel: usbserial: USB Serial support registered for generic
Dec 05 06:41:23 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 05 06:41:23 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 05 06:41:23 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 05 06:41:23 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 05 06:41:23 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 05 06:41:23 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 05 06:41:23 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 05 06:41:23 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-05T06:41:22 UTC (1764916882)
Dec 05 06:41:23 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 05 06:41:23 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 05 06:41:23 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 05 06:41:23 localhost kernel: usbcore: registered new interface driver usbhid
Dec 05 06:41:23 localhost kernel: usbhid: USB HID core driver
Dec 05 06:41:23 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 05 06:41:23 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 05 06:41:23 localhost kernel: Initializing XFRM netlink socket
Dec 05 06:41:23 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 05 06:41:23 localhost kernel: Segment Routing with IPv6
Dec 05 06:41:23 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 05 06:41:23 localhost kernel: mpls_gso: MPLS GSO support
Dec 05 06:41:23 localhost kernel: IPI shorthand broadcast: enabled
Dec 05 06:41:23 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 05 06:41:23 localhost kernel: AES CTR mode by8 optimization enabled
Dec 05 06:41:23 localhost kernel: sched_clock: Marking stable (726201730, 179096074)->(1035371160, -130073356)
Dec 05 06:41:23 localhost kernel: registered taskstats version 1
Dec 05 06:41:23 localhost kernel: Loading compiled-in X.509 certificates
Dec 05 06:41:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 05 06:41:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 05 06:41:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 05 06:41:23 localhost kernel: zswap: loaded using pool lzo/zbud
Dec 05 06:41:23 localhost kernel: page_owner is disabled
Dec 05 06:41:23 localhost kernel: Key type big_key registered
Dec 05 06:41:23 localhost kernel: Freeing initrd memory: 74232K
Dec 05 06:41:23 localhost kernel: Key type encrypted registered
Dec 05 06:41:23 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 05 06:41:23 localhost kernel: Loading compiled-in module X.509 certificates
Dec 05 06:41:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 05 06:41:23 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 05 06:41:23 localhost kernel: ima: No architecture policies found
Dec 05 06:41:23 localhost kernel: evm: Initialising EVM extended attributes:
Dec 05 06:41:23 localhost kernel: evm: security.selinux
Dec 05 06:41:23 localhost kernel: evm: security.SMACK64 (disabled)
Dec 05 06:41:23 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 05 06:41:23 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 05 06:41:23 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 05 06:41:23 localhost kernel: evm: security.apparmor (disabled)
Dec 05 06:41:23 localhost kernel: evm: security.ima
Dec 05 06:41:23 localhost kernel: evm: security.capability
Dec 05 06:41:23 localhost kernel: evm: HMAC attrs: 0x1
Dec 05 06:41:23 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 05 06:41:23 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 05 06:41:23 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 05 06:41:23 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 05 06:41:23 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 05 06:41:23 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 05 06:41:23 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 05 06:41:23 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 05 06:41:23 localhost kernel: Freeing unused decrypted memory: 2036K
Dec 05 06:41:23 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Dec 05 06:41:23 localhost kernel: Write protecting the kernel read-only data: 26624k
Dec 05 06:41:23 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 05 06:41:23 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Dec 05 06:41:23 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 05 06:41:23 localhost kernel: Run /init as init process
Dec 05 06:41:23 localhost kernel:   with arguments:
Dec 05 06:41:23 localhost kernel:     /init
Dec 05 06:41:23 localhost kernel:   with environment:
Dec 05 06:41:23 localhost kernel:     HOME=/
Dec 05 06:41:23 localhost kernel:     TERM=linux
Dec 05 06:41:23 localhost kernel:     BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64
Dec 05 06:41:23 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 05 06:41:23 localhost systemd[1]: Detected virtualization kvm.
Dec 05 06:41:23 localhost systemd[1]: Detected architecture x86-64.
Dec 05 06:41:23 localhost systemd[1]: Running in initrd.
Dec 05 06:41:23 localhost systemd[1]: No hostname configured, using default hostname.
Dec 05 06:41:23 localhost systemd[1]: Hostname set to <localhost>.
Dec 05 06:41:23 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 05 06:41:23 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 05 06:41:23 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 05 06:41:23 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 05 06:41:23 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 05 06:41:23 localhost systemd[1]: Reached target Local File Systems.
Dec 05 06:41:23 localhost systemd[1]: Reached target Path Units.
Dec 05 06:41:23 localhost systemd[1]: Reached target Slice Units.
Dec 05 06:41:23 localhost systemd[1]: Reached target Swaps.
Dec 05 06:41:23 localhost systemd[1]: Reached target Timer Units.
Dec 05 06:41:23 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 05 06:41:23 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 05 06:41:23 localhost systemd[1]: Listening on Journal Socket.
Dec 05 06:41:23 localhost systemd[1]: Listening on udev Control Socket.
Dec 05 06:41:23 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 05 06:41:23 localhost systemd[1]: Reached target Socket Units.
Dec 05 06:41:23 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 05 06:41:23 localhost systemd[1]: Starting Journal Service...
Dec 05 06:41:23 localhost systemd[1]: Starting Load Kernel Modules...
Dec 05 06:41:23 localhost systemd[1]: Starting Create System Users...
Dec 05 06:41:23 localhost systemd[1]: Starting Setup Virtual Console...
Dec 05 06:41:23 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 05 06:41:23 localhost systemd[1]: Finished Load Kernel Modules.
Dec 05 06:41:23 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 05 06:41:23 localhost systemd-journald[283]: Journal started
Dec 05 06:41:23 localhost systemd-journald[283]: Runtime Journal (/run/log/journal/38a014e5f2114fa18868c362af7c3bc6) is 8.0M, max 314.7M, 306.7M free.
Dec 05 06:41:23 localhost systemd-modules-load[284]: Module 'msr' is built in
Dec 05 06:41:23 localhost systemd[1]: Started Journal Service.
Dec 05 06:41:23 localhost systemd[1]: Finished Setup Virtual Console.
Dec 05 06:41:23 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 05 06:41:23 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 05 06:41:23 localhost systemd[1]: Starting dracut cmdline hook...
Dec 05 06:41:23 localhost systemd-sysusers[285]: Creating group 'sgx' with GID 997.
Dec 05 06:41:23 localhost systemd-sysusers[285]: Creating group 'users' with GID 100.
Dec 05 06:41:23 localhost systemd-sysusers[285]: Creating group 'dbus' with GID 81.
Dec 05 06:41:23 localhost systemd-sysusers[285]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 05 06:41:23 localhost systemd[1]: Finished Create System Users.
Dec 05 06:41:23 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 05 06:41:23 localhost dracut-cmdline[290]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Dec 05 06:41:23 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 05 06:41:23 localhost dracut-cmdline[290]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 05 06:41:23 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 05 06:41:23 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 05 06:41:23 localhost systemd[1]: Finished dracut cmdline hook.
Dec 05 06:41:23 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 05 06:41:23 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 05 06:41:23 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 05 06:41:23 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Dec 05 06:41:23 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 05 06:41:23 localhost kernel: RPC: Registered udp transport module.
Dec 05 06:41:23 localhost kernel: RPC: Registered tcp transport module.
Dec 05 06:41:23 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 05 06:41:23 localhost rpc.statd[406]: Version 2.5.4 starting
Dec 05 06:41:23 localhost rpc.statd[406]: Initializing NSM state
Dec 05 06:41:23 localhost rpc.idmapd[411]: Setting log level to 0
Dec 05 06:41:23 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 05 06:41:23 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 05 06:41:23 localhost systemd-udevd[424]: Using default interface naming scheme 'rhel-9.0'.
Dec 05 06:41:23 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 05 06:41:23 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 05 06:41:23 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 05 06:41:23 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 05 06:41:23 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 05 06:41:23 localhost systemd[1]: Reached target System Initialization.
Dec 05 06:41:23 localhost systemd[1]: Reached target Basic System.
Dec 05 06:41:23 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 05 06:41:23 localhost systemd[1]: Reached target Network.
Dec 05 06:41:23 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 05 06:41:23 localhost systemd[1]: Starting dracut initqueue hook...
Dec 05 06:41:23 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Dec 05 06:41:23 localhost kernel: libata version 3.00 loaded.
Dec 05 06:41:23 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 05 06:41:23 localhost kernel: GPT:20971519 != 838860799
Dec 05 06:41:23 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 05 06:41:23 localhost kernel: GPT:20971519 != 838860799
Dec 05 06:41:23 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 05 06:41:23 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 05 06:41:23 localhost kernel:  vda: vda1 vda2 vda3 vda4
Dec 05 06:41:23 localhost kernel: scsi host0: ata_piix
Dec 05 06:41:23 localhost kernel: scsi host1: ata_piix
Dec 05 06:41:23 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Dec 05 06:41:23 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Dec 05 06:41:23 localhost systemd-udevd[430]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 06:41:23 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 05 06:41:24 localhost systemd[1]: Reached target Initrd Root Device.
Dec 05 06:41:24 localhost kernel: ata1: found unknown device (class 0)
Dec 05 06:41:24 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 05 06:41:24 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 05 06:41:24 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 05 06:41:24 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 05 06:41:24 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 05 06:41:24 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 05 06:41:24 localhost systemd[1]: Finished dracut initqueue hook.
Dec 05 06:41:24 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 05 06:41:24 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 05 06:41:24 localhost systemd[1]: Reached target Remote File Systems.
Dec 05 06:41:24 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 05 06:41:24 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 05 06:41:24 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Dec 05 06:41:24 localhost systemd-fsck[511]: /usr/sbin/fsck.xfs: XFS file system.
Dec 05 06:41:24 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 05 06:41:24 localhost systemd[1]: Mounting /sysroot...
Dec 05 06:41:24 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 05 06:41:24 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Dec 05 06:41:24 localhost kernel: XFS (vda4): Ending clean mount
Dec 05 06:41:24 localhost systemd[1]: Mounted /sysroot.
Dec 05 06:41:24 localhost systemd[1]: Reached target Initrd Root File System.
Dec 05 06:41:24 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 05 06:41:24 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 05 06:41:24 localhost systemd[1]: Reached target Initrd File Systems.
Dec 05 06:41:24 localhost systemd[1]: Reached target Initrd Default Target.
Dec 05 06:41:24 localhost systemd[1]: Starting dracut mount hook...
Dec 05 06:41:24 localhost systemd[1]: Finished dracut mount hook.
Dec 05 06:41:24 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 05 06:41:24 localhost rpc.idmapd[411]: exiting on signal 15
Dec 05 06:41:24 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 05 06:41:24 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 05 06:41:24 localhost systemd[1]: Stopped target Network.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Timer Units.
Dec 05 06:41:24 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 05 06:41:24 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Basic System.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Path Units.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Remote File Systems.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Slice Units.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Socket Units.
Dec 05 06:41:24 localhost systemd[1]: Stopped target System Initialization.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Local File Systems.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Swaps.
Dec 05 06:41:24 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped dracut mount hook.
Dec 05 06:41:24 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 05 06:41:24 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 05 06:41:24 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 05 06:41:24 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 05 06:41:24 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 05 06:41:24 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 05 06:41:24 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 05 06:41:24 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 05 06:41:24 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 05 06:41:24 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 05 06:41:24 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 05 06:41:24 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 05 06:41:24 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Closed udev Control Socket.
Dec 05 06:41:24 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Closed udev Kernel Socket.
Dec 05 06:41:24 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 05 06:41:24 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 05 06:41:24 localhost systemd[1]: Starting Cleanup udev Database...
Dec 05 06:41:24 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 05 06:41:24 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 05 06:41:24 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Stopped Create System Users.
Dec 05 06:41:24 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 05 06:41:24 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 05 06:41:24 localhost systemd[1]: Finished Cleanup udev Database.
Dec 05 06:41:24 localhost systemd[1]: Reached target Switch Root.
Dec 05 06:41:24 localhost systemd[1]: Starting Switch Root...
Dec 05 06:41:24 localhost systemd[1]: Switching root.
Dec 05 06:41:25 localhost systemd-journald[283]: Journal stopped
Dec 05 06:41:25 localhost systemd-journald[283]: Received SIGTERM from PID 1 (systemd).
Dec 05 06:41:25 localhost kernel: audit: type=1404 audit(1764916885.049:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 05 06:41:25 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 06:41:25 localhost kernel: SELinux:  policy capability open_perms=1
Dec 05 06:41:25 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 06:41:25 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 05 06:41:25 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 06:41:25 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 06:41:25 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 06:41:25 localhost kernel: audit: type=1403 audit(1764916885.128:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 05 06:41:25 localhost systemd[1]: Successfully loaded SELinux policy in 81.021ms.
Dec 05 06:41:25 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.346ms.
Dec 05 06:41:25 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 05 06:41:25 localhost systemd[1]: Detected virtualization kvm.
Dec 05 06:41:25 localhost systemd[1]: Detected architecture x86-64.
Dec 05 06:41:25 localhost systemd-rc-local-generator[583]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 06:41:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
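
Two benign warnings surface here: /etc/rc.d/rc.local exists but is not executable, so the compatibility generator skips it, and insights-client-boot.service still uses the deprecated MemoryLimit= directive. A sketch of the usual fixes, assuming only the paths named in the messages (the 512M value is illustrative, not taken from the unit):

    # Re-enable rc.local compatibility, if its contents are still wanted:
    chmod +x /etc/rc.d/rc.local
    # Replace the deprecated directive with a drop-in override:
    systemctl edit insights-client-boot.service
    # ...then add in the editor:
    #   [Service]
    #   MemoryLimit=
    #   MemoryMax=512M
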
Dec 05 06:41:25 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 05 06:41:25 localhost systemd[1]: Stopped Switch Root.
Dec 05 06:41:25 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 05 06:41:25 localhost systemd[1]: Created slice Slice /system/getty.
Dec 05 06:41:25 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 05 06:41:25 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 05 06:41:25 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 05 06:41:25 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Dec 05 06:41:25 localhost systemd[1]: Created slice User and Session Slice.
Dec 05 06:41:25 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 05 06:41:25 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 05 06:41:25 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 05 06:41:25 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 05 06:41:25 localhost systemd[1]: Stopped target Switch Root.
Dec 05 06:41:25 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 05 06:41:25 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 05 06:41:25 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 05 06:41:25 localhost systemd[1]: Reached target Path Units.
Dec 05 06:41:25 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 05 06:41:25 localhost systemd[1]: Reached target Slice Units.
Dec 05 06:41:25 localhost systemd[1]: Reached target Swaps.
Dec 05 06:41:25 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 05 06:41:25 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 05 06:41:25 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 05 06:41:25 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 05 06:41:25 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 05 06:41:25 localhost systemd[1]: Listening on udev Control Socket.
Dec 05 06:41:25 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 05 06:41:25 localhost systemd[1]: Mounting Huge Pages File System...
Dec 05 06:41:25 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 05 06:41:25 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 05 06:41:25 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 05 06:41:25 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 05 06:41:25 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 05 06:41:25 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 05 06:41:25 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 05 06:41:25 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 05 06:41:25 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 05 06:41:25 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 05 06:41:25 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 05 06:41:25 localhost systemd[1]: Stopped Journal Service.
Dec 05 06:41:25 localhost systemd[1]: Starting Journal Service...
Dec 05 06:41:25 localhost systemd[1]: Starting Load Kernel Modules...
Dec 05 06:41:25 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 05 06:41:25 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 05 06:41:25 localhost kernel: ACPI: bus type drm_connector registered
Dec 05 06:41:25 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 05 06:41:25 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 05 06:41:25 localhost kernel: fuse: init (API version 7.36)
Dec 05 06:41:25 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 05 06:41:25 localhost systemd-journald[619]: Journal started
Dec 05 06:41:25 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/d70e7573f9252a22999953aab4dc4dc5) is 8.0M, max 314.7M, 306.7M free.
Dec 05 06:41:25 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 05 06:41:25 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 05 06:41:25 localhost systemd-modules-load[620]: Module 'msr' is built in
Dec 05 06:41:25 localhost systemd[1]: Mounted Huge Pages File System.
Dec 05 06:41:25 localhost systemd[1]: Started Journal Service.
Dec 05 06:41:25 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 05 06:41:25 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 05 06:41:25 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 05 06:41:25 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 05 06:41:25 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 05 06:41:25 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 05 06:41:25 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 05 06:41:25 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 05 06:41:25 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 05 06:41:25 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 05 06:41:25 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 05 06:41:25 localhost systemd[1]: Finished Load Kernel Modules.
Dec 05 06:41:25 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 05 06:41:25 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 05 06:41:25 localhost systemd[1]: Mounting FUSE Control File System...
Dec 05 06:41:25 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 05 06:41:25 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 05 06:41:25 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 05 06:41:25 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 05 06:41:25 localhost systemd[1]: Starting Load/Save Random Seed...
Dec 05 06:41:25 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 05 06:41:25 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/d70e7573f9252a22999953aab4dc4dc5) is 8.0M, max 314.7M, 306.7M free.
Dec 05 06:41:25 localhost systemd-journald[619]: Received client request to flush runtime journal.
Dec 05 06:41:25 localhost systemd[1]: Starting Create System Users...
Dec 05 06:41:25 localhost systemd[1]: Mounted FUSE Control File System.
Dec 05 06:41:25 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 05 06:41:25 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
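
The flush request above moves the runtime journal out of /run/log/journal; with the upstream default Storage=auto it lands under /var/log/journal only if that directory exists. Two quick checks after boot:

    # Space consumed by archived and active journal files:
    journalctl --disk-usage
    # Effective storage policy for journald:
    grep -i 'Storage' /etc/systemd/journald.conf
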
Dec 05 06:41:25 localhost systemd[1]: Finished Load/Save Random Seed.
Dec 05 06:41:25 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 05 06:41:25 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 05 06:41:25 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 05 06:41:25 localhost systemd-sysusers[633]: Creating group 'sgx' with GID 989.
Dec 05 06:41:25 localhost systemd-sysusers[633]: Creating group 'systemd-oom' with GID 988.
Dec 05 06:41:25 localhost systemd-sysusers[633]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Dec 05 06:41:25 localhost systemd[1]: Finished Create System Users.
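
The sgx and systemd-oom accounts above come from declarative sysusers.d fragments, not from useradd. A minimal sketch of the mechanism with a hypothetical fragment (the real definitions ship with systemd and related packages; the path and file below are illustrative only):

    # Declarative account definitions: g = group, u = user (with GECOS).
    cat <<'EOF' > /etc/sysusers.d/example.conf
    g sgx -
    u systemd-oom - "systemd Userspace OOM Killer"
    EOF
    # Apply all fragments; the tool is idempotent, existing IDs are kept:
    systemd-sysusers
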
Dec 05 06:41:25 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 05 06:41:25 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 05 06:41:25 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 05 06:41:25 localhost systemd[1]: Set up automount EFI System Partition Automount.
Dec 05 06:41:26 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 05 06:41:26 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 05 06:41:26 localhost systemd-udevd[637]: Using default interface naming scheme 'rhel-9.0'.
Dec 05 06:41:26 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 05 06:41:26 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 05 06:41:26 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 05 06:41:26 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 05 06:41:26 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 05 06:41:26 localhost systemd-udevd[654]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 06:41:26 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Dec 05 06:41:26 localhost systemd[1]: Mounting /boot...
Dec 05 06:41:26 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Dec 05 06:41:26 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Dec 05 06:41:26 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Dec 05 06:41:26 localhost kernel: XFS (vda3): Ending clean mount
Dec 05 06:41:26 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Dec 05 06:41:26 localhost systemd-fsck[688]: fsck.fat 4.2 (2021-01-31)
Dec 05 06:41:26 localhost systemd-fsck[688]: /dev/vda2: 12 files, 1782/51145 clusters
Dec 05 06:41:26 localhost systemd[1]: Mounted /boot.
Dec 05 06:41:26 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Dec 05 06:41:26 localhost systemd[1]: Mounting /boot/efi...
Dec 05 06:41:26 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 05 06:41:26 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 05 06:41:26 localhost systemd[1]: Mounted /boot/efi.
Dec 05 06:41:26 localhost systemd[1]: Reached target Local File Systems.
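
/boot (XFS on vda3) and /boot/efi (FAT on vda2, checked clean by fsck.fat) are now mounted and the local file-system target is reached. The layout can be verified against the by-uuid units seen above:

    # Source device, fstype and options for each mount point:
    findmnt /boot
    findmnt /boot/efi
    # Filesystem UUIDs on the disk, matching the by-uuid paths in the log:
    lsblk -f /dev/vda
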
Dec 05 06:41:26 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 05 06:41:26 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 05 06:41:26 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 05 06:41:26 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 05 06:41:26 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 05 06:41:26 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 05 06:41:26 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 05 06:41:26 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 700 (bootctl)
Dec 05 06:41:26 localhost systemd[1]: Starting File System Check on /dev/vda2...
Dec 05 06:41:26 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 05 06:41:26 localhost systemd[1]: Starting Security Auditing Service...
Dec 05 06:41:26 localhost systemd[1]: Starting RPC Bind...
Dec 05 06:41:26 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 05 06:41:26 localhost systemd[1]: Finished File System Check on /dev/vda2.
Dec 05 06:41:26 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 05 06:41:26 localhost auditd[708]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Dec 05 06:41:26 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 05 06:41:26 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 05 06:41:26 localhost auditd[708]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Dec 05 06:41:26 localhost systemd[1]: Started RPC Bind.
Dec 05 06:41:26 localhost kernel: Console: switching to colour dummy device 80x25
Dec 05 06:41:26 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 05 06:41:26 localhost kernel: [drm] features: -context_init
Dec 05 06:41:26 localhost kernel: [drm] number of scanouts: 1
Dec 05 06:41:26 localhost kernel: [drm] number of cap sets: 0
Dec 05 06:41:26 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 05 06:41:26 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Dec 05 06:41:26 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Dec 05 06:41:26 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 05 06:41:26 localhost kernel: SVM: TSC scaling supported
Dec 05 06:41:26 localhost kernel: kvm: Nested Virtualization enabled
Dec 05 06:41:26 localhost kernel: SVM: kvm: Nested Paging enabled
Dec 05 06:41:26 localhost kernel: SVM: LBR virtualization supported
Dec 05 06:41:26 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 05 06:41:26 localhost augenrules[714]: /sbin/augenrules: No change
Dec 05 06:41:26 localhost augenrules[725]: No rules
Dec 05 06:41:26 localhost augenrules[725]: enabled 1
Dec 05 06:41:26 localhost augenrules[725]: failure 1
Dec 05 06:41:26 localhost augenrules[725]: pid 708
Dec 05 06:41:26 localhost augenrules[725]: rate_limit 0
Dec 05 06:41:26 localhost augenrules[725]: backlog_limit 8192
Dec 05 06:41:26 localhost augenrules[725]: lost 0
Dec 05 06:41:26 localhost augenrules[725]: backlog 3
Dec 05 06:41:26 localhost augenrules[725]: backlog_wait_time 60000
Dec 05 06:41:26 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 05 06:41:26 localhost augenrules[725]: enabled 1
Dec 05 06:41:26 localhost augenrules[725]: failure 1
Dec 05 06:41:26 localhost augenrules[725]: pid 708
Dec 05 06:41:26 localhost augenrules[725]: rate_limit 0
Dec 05 06:41:26 localhost augenrules[725]: backlog_limit 8192
Dec 05 06:41:26 localhost augenrules[725]: lost 0
Dec 05 06:41:26 localhost augenrules[725]: backlog 0
Dec 05 06:41:26 localhost augenrules[725]: backlog_wait_time 60000
Dec 05 06:41:26 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 05 06:41:26 localhost augenrules[725]: enabled 1
Dec 05 06:41:26 localhost augenrules[725]: failure 1
Dec 05 06:41:26 localhost augenrules[725]: pid 708
Dec 05 06:41:26 localhost augenrules[725]: rate_limit 0
Dec 05 06:41:26 localhost augenrules[725]: backlog_limit 8192
Dec 05 06:41:26 localhost augenrules[725]: lost 0
Dec 05 06:41:26 localhost augenrules[725]: backlog 0
Dec 05 06:41:26 localhost augenrules[725]: backlog_wait_time 60000
Dec 05 06:41:26 localhost augenrules[725]: backlog_wait_time_actual 0
Dec 05 06:41:26 localhost systemd[1]: Started Security Auditing Service.
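
The repeated augenrules blocks above are kernel audit status dumps emitted while the (empty) rule files are loaded; the fields map one-to-one onto auditctl output. The live state can be re-checked at any time:

    # Kernel audit status: enabled flag, failure mode, auditd pid,
    # backlog limit and wait times, as printed above:
    auditctl -s
    # Loaded rules; "No rules" in the log means this prints nothing:
    auditctl -l
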
Dec 05 06:41:26 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 05 06:41:26 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 05 06:41:26 localhost systemd[1]: Mounting EFI System Partition Automount...
Dec 05 06:41:26 localhost systemd[1]: Mounted EFI System Partition Automount.
Dec 05 06:41:26 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 05 06:41:26 localhost systemd[1]: Starting Update is Completed...
Dec 05 06:41:26 localhost systemd[1]: Finished Update is Completed.
Dec 05 06:41:26 localhost systemd[1]: Reached target System Initialization.
Dec 05 06:41:26 localhost systemd[1]: Started dnf makecache --timer.
Dec 05 06:41:26 localhost systemd[1]: Started Daily rotation of log files.
Dec 05 06:41:26 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 05 06:41:26 localhost systemd[1]: Reached target Timer Units.
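
Three timers (dnf makecache, daily logrotate, tmpfiles cleanup) are activated with the timer target. Their schedules and last/next trigger times are visible with:

    # All timers, including inactive ones, with NEXT/LAST columns:
    systemctl list-timers --all
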
Dec 05 06:41:26 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 05 06:41:26 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 05 06:41:26 localhost systemd[1]: Reached target Socket Units.
Dec 05 06:41:26 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Dec 05 06:41:26 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 05 06:41:26 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 05 06:41:26 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 05 06:41:26 localhost systemd[1]: Reached target Basic System.
Dec 05 06:41:26 localhost dbus-broker-lau[744]: Ready
Dec 05 06:41:26 localhost systemd[1]: Starting NTP client/server...
Dec 05 06:41:26 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 05 06:41:26 localhost systemd[1]: Started irqbalance daemon.
Dec 05 06:41:26 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 05 06:41:26 localhost systemd[1]: Starting System Logging Service...
Dec 05 06:41:26 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 06:41:26 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 06:41:26 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 06:41:26 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 05 06:41:26 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 05 06:41:26 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 05 06:41:26 localhost rsyslogd[756]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="756" x-info="https://www.rsyslog.com"] start
Dec 05 06:41:26 localhost rsyslogd[756]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Dec 05 06:41:26 localhost systemd[1]: Starting User Login Management...
Dec 05 06:41:26 localhost systemd[1]: Started System Logging Service.
Dec 05 06:41:26 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 05 06:41:26 localhost chronyd[767]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 05 06:41:26 localhost chronyd[767]: Using right/UTC timezone to obtain leap second data
Dec 05 06:41:26 localhost chronyd[767]: Loaded seccomp filter (level 2)
Dec 05 06:41:26 localhost systemd-logind[762]: New seat seat0.
Dec 05 06:41:26 localhost systemd[1]: Started NTP client/server.
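
chronyd is running with a seccomp filter; the actual server selection shows up a few seconds later (06:41:34, once DHCP has completed). Synchronisation can be inspected with chronyc:

    # Reference source, stratum and current offset:
    chronyc tracking
    # Pool servers being polled, with reachability and sample stats:
    chronyc sources -v
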
Dec 05 06:41:26 localhost systemd-logind[762]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 05 06:41:26 localhost systemd-logind[762]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 05 06:41:26 localhost systemd[1]: Started User Login Management.
Dec 05 06:41:26 localhost rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 06:41:27 localhost cloud-init[771]: Cloud-init v. 22.1-9.el9 running 'init-local' at Fri, 05 Dec 2025 06:41:27 +0000. Up 5.27 seconds.
Dec 05 06:41:27 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 05 06:41:27 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 05 06:41:27 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpmnsruono.mount: Deactivated successfully.
Dec 05 06:41:27 localhost systemd[1]: Starting Hostname Service...
Dec 05 06:41:27 localhost systemd[1]: Started Hostname Service.
Dec 05 06:41:27 np0005546420.novalocal systemd-hostnamed[785]: Hostname set to <np0005546420.novalocal> (static)
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Finished Initial cloud-init job (pre-networking).
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Reached target Preparation for Network.
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Starting Network Manager...
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.5591] NetworkManager (version 1.42.2-1.el9) is starting... (boot:a8dd9a05-2244-4441-b15f-7008b65d122c)
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.5597] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Started Network Manager.
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.5629] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Reached target Network.
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.5761] manager[0x55827a328020]: monitoring kernel firmware directory '/lib/firmware'.
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Starting Enable periodic update of entitlement certificates....
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.5826] hostname: hostname: using hostnamed
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.5827] hostname: static hostname changed from (none) to "np0005546420.novalocal"
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.5832] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Started Enable periodic update of entitlement certificates..
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Reached target NFS client services.
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6000] manager[0x55827a328020]: rfkill: Wi-Fi hardware radio set enabled
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Reached target Remote File Systems.
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6003] manager[0x55827a328020]: rfkill: WWAN hardware radio set enabled
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6050] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6051] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6053] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6054] manager: Networking is enabled by state file
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6075] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6076] settings: Loaded settings plugin: keyfile (internal)
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6105] dhcp: init: Using DHCP client 'internal'
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6108] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6123] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6128] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6138] device (lo): Activation: starting connection 'lo' (99500e8b-d0d5-4734-90d8-637f5b223e08)
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6147] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6150] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6186] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6188] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6191] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6192] device (eth0): carrier: link connected
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6194] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6199] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6206] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6212] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6213] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6216] manager: NetworkManager state is now CONNECTING
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6217] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6227] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6230] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6355] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6356] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6360] device (lo): Activation: successful, device activated.
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6768] dhcp4 (eth0): state changed new lease, address=38.102.83.241
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6771] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6789] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6810] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6811] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6814] manager: NetworkManager state is now CONNECTED_SITE
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6816] device (eth0): Activation: successful, device activated.
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6818] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 05 06:41:27 np0005546420.novalocal NetworkManager[790]: <info>  [1764916887.6820] manager: startup complete
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Finished Network Manager Wait Online.
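
The activation sequence above walks eth0 through prepare/config/ip-config, the internal DHCP client obtains 38.102.83.241, and 'System eth0' becomes the IPv4 default, at which point startup completes and wait-online returns. The lease and profile can be inspected with nmcli:

    # Device state, IP4 address, gateway and DNS from the DHCP lease:
    nmcli device show eth0
    # The auto-activated profile named in the log:
    nmcli connection show "System eth0"
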
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Starting Authorization Manager...
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: Cloud-init v. 22.1-9.el9 running 'init' at Fri, 05 Dec 2025 06:41:27 +0000. Up 6.09 seconds.
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: |  eth0  | True |        38.102.83.241         | 255.255.255.0 | global | fa:16:3e:ed:1b:d3 |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: |  eth0  | True | fe80::f816:3eff:feed:1bd3/64 |       .       |  link  | fa:16:3e:ed:1b:d3 |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 05 06:41:27 np0005546420.novalocal cloud-init[1030]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 05 06:41:28 np0005546420.novalocal cloud-init[1030]: ci-info: +-------+-------------+---------+-----------+-------+
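
The ci-info tables are cloud-init's snapshot of interface and routing state; route 2, a /32 host route to 169.254.169.254 via 38.102.83.126, is the path to the cloud metadata service. The same data comes straight from the kernel, and the endpoint can be probed once networking is up (the URL is the conventional metadata address; the response layout depends on the cloud):

    # Should show the default, subnet and metadata host routes above:
    ip route show
    # Reachability check of the metadata service:
    curl -sf --max-time 5 http://169.254.169.254/ | head
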
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Started Dynamic System Tuning Daemon.
Dec 05 06:41:27 np0005546420.novalocal polkitd[1032]: Started polkitd version 0.117
Dec 05 06:41:27 np0005546420.novalocal systemd[1]: Started Authorization Manager.
Dec 05 06:41:27 np0005546420.novalocal polkitd[1032]: Loading rules from directory /etc/polkit-1/rules.d
Dec 05 06:41:27 np0005546420.novalocal polkitd[1032]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 05 06:41:27 np0005546420.novalocal polkitd[1032]: Finished loading, compiling and executing 4 rules
Dec 05 06:41:27 np0005546420.novalocal polkitd[1032]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 05 06:41:29 np0005546420.novalocal useradd[1116]: new group: name=cloud-user, GID=1001
Dec 05 06:41:29 np0005546420.novalocal useradd[1116]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 05 06:41:29 np0005546420.novalocal useradd[1116]: add 'cloud-user' to group 'adm'
Dec 05 06:41:29 np0005546420.novalocal useradd[1116]: add 'cloud-user' to group 'systemd-journal'
Dec 05 06:41:29 np0005546420.novalocal useradd[1116]: add 'cloud-user' to shadow group 'adm'
Dec 05 06:41:29 np0005546420.novalocal useradd[1116]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: Generating public/private rsa key pair.
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: The key fingerprint is:
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: SHA256:ttkSyX+BHFVaRnP2gDCQw0VueuSJaWbhHxiRuQ4eymg root@np0005546420.novalocal
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: The key's randomart image is:
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: +---[RSA 3072]----+
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |       .oB=..+B o|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |        *o o.+ =.|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |        oo= .   .|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |      oo.& +     |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |   o o +S B .    |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |  E o .=.O . .   |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: | .      + + .    |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |         . .     |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |                 |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: +----[SHA256]-----+
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: Generating public/private ecdsa key pair.
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: The key fingerprint is:
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: SHA256:ul03913kZJUDL2vLKWojeG5z/jwm55mYiXtV2qEq3po root@np0005546420.novalocal
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: The key's randomart image is:
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: +---[ECDSA 256]---+
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |             .   |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |              o .|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |             . +.|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |             oo o|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |        S   =o. +|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |       .   +o.o= |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |      o   +.o+. o|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |     . O+O*=+o .o|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |      *E%BB*.   o|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: +----[SHA256]-----+
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: Generating public/private ed25519 key pair.
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: The key fingerprint is:
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: SHA256:BK0AOv13tO6oO31yV8d3fCIhNpOK/0Fb3kOY6DM9kHg root@np0005546420.novalocal
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: The key's randomart image is:
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: +--[ED25519 256]--+
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |  ..  ..         |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: | o  .  ..        |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |o .  . .o  .     |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: | . .  .o o*o.o   |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |    . ..SoE++o.. |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |     ..o.+ *oo+ =|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |     . .. *.+oooo|
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |    . oo+ .+ . . |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: |    o+.+.o.      |
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1030]: +----[SHA256]-----+
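
cloud-init generated fresh RSA, ECDSA and ed25519 host keys; the fingerprints and randomart above are ssh-keygen output captured verbatim, and the same fingerprints are re-printed in the console banner below. They can be re-derived at any time:

    # SHA256 fingerprints for every host public key:
    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done
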
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Reached target Network is Online.
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Starting Apply the settings specified in cloud-config...
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 05 06:41:30 np0005546420.novalocal sm-notify[1129]: Version 2.5.4 starting
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Starting Permit User Sessions...
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Finished Permit User Sessions.
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Started Command Scheduler.
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Started Getty on tty1.
Dec 05 06:41:30 np0005546420.novalocal sshd[1130]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 05 06:41:30 np0005546420.novalocal crond[1132]: (CRON) STARTUP (1.5.7)
Dec 05 06:41:30 np0005546420.novalocal crond[1132]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Reached target Login Prompts.
Dec 05 06:41:30 np0005546420.novalocal crond[1132]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 25% if used.)
Dec 05 06:41:30 np0005546420.novalocal crond[1132]: (CRON) INFO (running with inotify support)
Dec 05 06:41:30 np0005546420.novalocal sshd[1130]: Server listening on 0.0.0.0 port 22.
Dec 05 06:41:30 np0005546420.novalocal sshd[1130]: Server listening on :: port 22.
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Reached target Multi-User System.
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 05 06:41:30 np0005546420.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 05 06:41:30 np0005546420.novalocal kdumpctl[1136]: kdump: No kdump initial ramdisk found.
Dec 05 06:41:30 np0005546420.novalocal kdumpctl[1136]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Dec 05 06:41:30 np0005546420.novalocal cloud-init[1250]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Fri, 05 Dec 2025 06:41:30 +0000. Up 9.13 seconds.
Dec 05 06:41:31 np0005546420.novalocal systemd[1]: Finished Apply the settings specified in cloud-config.
Dec 05 06:41:31 np0005546420.novalocal systemd[1]: Starting Execute cloud user/final scripts...
Dec 05 06:41:31 np0005546420.novalocal dracut[1415]: dracut-057-21.git20230214.el9
Dec 05 06:41:31 np0005546420.novalocal cloud-init[1433]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Fri, 05 Dec 2025 06:41:31 +0000. Up 9.52 seconds.
Dec 05 06:41:31 np0005546420.novalocal cloud-init[1440]: #############################################################
Dec 05 06:41:31 np0005546420.novalocal cloud-init[1445]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Dec 05 06:41:31 np0005546420.novalocal cloud-init[1450]: 256 SHA256:ul03913kZJUDL2vLKWojeG5z/jwm55mYiXtV2qEq3po root@np0005546420.novalocal (ECDSA)
Dec 05 06:41:31 np0005546420.novalocal cloud-init[1456]: 256 SHA256:BK0AOv13tO6oO31yV8d3fCIhNpOK/0Fb3kOY6DM9kHg root@np0005546420.novalocal (ED25519)
Dec 05 06:41:31 np0005546420.novalocal cloud-init[1462]: 3072 SHA256:ttkSyX+BHFVaRnP2gDCQw0VueuSJaWbhHxiRuQ4eymg root@np0005546420.novalocal (RSA)
Dec 05 06:41:31 np0005546420.novalocal cloud-init[1464]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 05 06:41:31 np0005546420.novalocal cloud-init[1466]: #############################################################
Dec 05 06:41:31 np0005546420.novalocal cloud-init[1433]: Cloud-init v. 22.1-9.el9 finished at Fri, 05 Dec 2025 06:41:31 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.74 seconds
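
In parallel with cloud-init's final stage, kdumpctl found no crash initramfs and started a rebuild; the dracut command line above shows the host-only options, the squash compressor, and the omitted modules (plymouth, resume, ifcfg, earlykdump). After the rebuild, the setup can be verified with (lsinitrd ships with dracut, kdumpctl with kexec-tools):

    # Confirms the crash kernel is loaded and the image is usable:
    kdumpctl status
    # Peek inside the freshly built kdump initramfs:
    lsinitrd /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img | head
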
Dec 05 06:41:31 np0005546420.novalocal systemd[1]: Reloading Network Manager...
Dec 05 06:41:31 np0005546420.novalocal NetworkManager[790]: <info>  [1764916891.6661] audit: op="reload" arg="0" pid=1543 uid=0 result="success"
Dec 05 06:41:31 np0005546420.novalocal NetworkManager[790]: <info>  [1764916891.6671] config: signal: SIGHUP (no changes from disk)
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 05 06:41:31 np0005546420.novalocal systemd[1]: Reloaded Network Manager.
Dec 05 06:41:31 np0005546420.novalocal systemd[1]: Finished Execute cloud user/final scripts.
Dec 05 06:41:31 np0005546420.novalocal systemd[1]: Reached target Cloud-init target.
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 05 06:41:31 np0005546420.novalocal dracut[1417]: dracut module 'resume' will not be installed, because it's in the list to be omitted!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: memstrack is not available
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: memstrack is not available
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: *** Including module: systemd ***
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: *** Including module: systemd-initrd ***
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: *** Including module: i18n ***
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: No KEYMAP configured.
Dec 05 06:41:32 np0005546420.novalocal dracut[1417]: *** Including module: drm ***
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]: *** Including module: prefixdevname ***
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]: *** Including module: kernel-modules ***
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]: *** Including module: kernel-modules-extra ***
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]: *** Including module: qemu ***
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]: *** Including module: fstab-sys ***
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]: *** Including module: rootfs-block ***
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]: *** Including module: terminfo ***
Dec 05 06:41:33 np0005546420.novalocal dracut[1417]: *** Including module: udev-rules ***
Dec 05 06:41:34 np0005546420.novalocal chronyd[767]: Selected source 138.197.164.54 (2.rhel.pool.ntp.org)
Dec 05 06:41:34 np0005546420.novalocal chronyd[767]: System clock TAI offset set to 37 seconds
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]: Skipping udev rule: 91-permissions.rules
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]: *** Including module: virtiofs ***
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]: *** Including module: dracut-systemd ***
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]: *** Including module: usrmount ***
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]: *** Including module: base ***
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]: *** Including module: fs-lib ***
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]: *** Including module: kdumpbase ***
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]:   microcode_ctl module: mangling fw_dir
Dec 05 06:41:34 np0005546420.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: configuration "intel" is ignored
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]:     microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]: *** Including module: shutdown ***
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]: *** Including module: squash ***
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]: *** Including modules done ***
Dec 05 06:41:35 np0005546420.novalocal dracut[1417]: *** Installing kernel module dependencies ***
Dec 05 06:41:36 np0005546420.novalocal dracut[1417]: *** Installing kernel module dependencies done ***
Dec 05 06:41:36 np0005546420.novalocal dracut[1417]: *** Resolving executable dependencies ***
Dec 05 06:41:37 np0005546420.novalocal sshd[3588]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:41:37 np0005546420.novalocal sshd[3606]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:41:37 np0005546420.novalocal sshd[3606]: Unable to negotiate with 38.102.83.114 port 38810: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 05 06:41:37 np0005546420.novalocal sshd[3621]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:41:37 np0005546420.novalocal sshd[3588]: Connection closed by 38.102.83.114 port 38794 [preauth]
Dec 05 06:41:37 np0005546420.novalocal sshd[3660]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:41:37 np0005546420.novalocal sshd[3660]: Unable to negotiate with 38.102.83.114 port 38838: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 05 06:41:37 np0005546420.novalocal sshd[3672]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:41:37 np0005546420.novalocal sshd[3672]: Unable to negotiate with 38.102.83.114 port 38854: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 05 06:41:37 np0005546420.novalocal sshd[3686]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:41:37 np0005546420.novalocal sshd[3706]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:41:37 np0005546420.novalocal sshd[3621]: Connection closed by 38.102.83.114 port 38826 [preauth]
Dec 05 06:41:37 np0005546420.novalocal sshd[3720]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:41:37 np0005546420.novalocal sshd[3720]: fatal: mm_answer_sign: sign: error in libcrypto
Dec 05 06:41:37 np0005546420.novalocal sshd[3740]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:41:37 np0005546420.novalocal sshd[3740]: Unable to negotiate with 38.102.83.114 port 38898: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 05 06:41:37 np0005546420.novalocal sshd[3686]: Connection closed by 38.102.83.114 port 38870 [preauth]
Dec 05 06:41:37 np0005546420.novalocal sshd[3706]: Connection closed by 38.102.83.114 port 38876 [preauth]
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: *** Resolving executable dependencies done ***
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: *** Hardlinking files ***
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: Mode:           real
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: Files:          1099
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: Linked:         3 files
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: Compared:       0 xattrs
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: Compared:       373 files
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: Saved:          61.04 KiB
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: Duration:       0.029962 seconds
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: *** Hardlinking files done ***
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: Could not find 'strip'. Not stripping the initramfs.
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: *** Generating early-microcode cpio image ***
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: *** Constructing AuthenticAMD.bin ***
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: *** Store current command line parameters ***
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: Stored kernel commandline:
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: No dracut internal kernel commandline stored in the initramfs
Dec 05 06:41:37 np0005546420.novalocal dracut[1417]: *** Install squash loader ***
Dec 05 06:41:37 np0005546420.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 06:41:38 np0005546420.novalocal dracut[1417]: *** Squashing the files inside the initramfs ***
Dec 05 06:41:39 np0005546420.novalocal dracut[1417]: *** Squashing the files inside the initramfs done ***
Dec 05 06:41:39 np0005546420.novalocal dracut[1417]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Dec 05 06:41:39 np0005546420.novalocal dracut[1417]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Dec 05 06:41:40 np0005546420.novalocal kdumpctl[1136]: kdump: kexec: loaded kdump kernel
Dec 05 06:41:40 np0005546420.novalocal kdumpctl[1136]: kdump: Starting kdump: [OK]
Dec 05 06:41:40 np0005546420.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 05 06:41:40 np0005546420.novalocal systemd[1]: Startup finished in 1.222s (kernel) + 2.015s (initrd) + 14.959s (userspace) = 18.197s.
Dec 05 06:41:57 np0005546420.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 06:44:07 np0005546420.novalocal systemd[1]: Unmounting EFI System Partition Automount...
Dec 05 06:44:07 np0005546420.novalocal systemd[1]: efi.mount: Deactivated successfully.
Dec 05 06:44:07 np0005546420.novalocal systemd[1]: Unmounted EFI System Partition Automount.
Dec 05 06:46:21 np0005546420.novalocal sshd[4178]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:46:21 np0005546420.novalocal sshd[4178]: Accepted publickey for zuul from 38.102.83.114 port 53596 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 05 06:46:21 np0005546420.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 05 06:46:21 np0005546420.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 05 06:46:21 np0005546420.novalocal systemd-logind[762]: New session 1 of user zuul.
Dec 05 06:46:21 np0005546420.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 05 06:46:21 np0005546420.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Queued start job for default target Main User Target.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Created slice User Application Slice.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Reached target Paths.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Reached target Timers.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Starting D-Bus User Message Bus Socket...
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Starting Create User's Volatile Files and Directories...
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Finished Create User's Volatile Files and Directories.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Listening on D-Bus User Message Bus Socket.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Reached target Sockets.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Reached target Basic System.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Reached target Main User Target.
Dec 05 06:46:21 np0005546420.novalocal systemd[4182]: Startup finished in 86ms.
Dec 05 06:46:21 np0005546420.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 05 06:46:21 np0005546420.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 05 06:46:21 np0005546420.novalocal sshd[4178]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 06:46:22 np0005546420.novalocal python3[4234]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 06:46:31 np0005546420.novalocal python3[4252]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 06:46:39 np0005546420.novalocal python3[4304]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 06:46:40 np0005546420.novalocal python3[4334]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 05 06:46:42 np0005546420.novalocal python3[4350]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:46:43 np0005546420.novalocal python3[4364]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:46:44 np0005546420.novalocal python3[4423]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 06:46:45 np0005546420.novalocal python3[4464]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764917204.4541454-392-263426929919289/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=cb8438a2b38642fd8d5aa4c34b846ebc_id_rsa follow=False checksum=279822aa185303a1622fd64f0c6305b91ca04c54 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:46:46 np0005546420.novalocal python3[4537]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 06:46:46 np0005546420.novalocal python3[4578]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764917206.0327322-491-113082351961158/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=cb8438a2b38642fd8d5aa4c34b846ebc_id_rsa.pub follow=False checksum=a810c4b730db53312f48868578c3039315af7db6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:46:48 np0005546420.novalocal python3[4606]: ansible-ping Invoked with data=pong
Dec 05 06:46:50 np0005546420.novalocal python3[4620]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 06:46:53 np0005546420.novalocal python3[4672]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 05 06:46:56 np0005546420.novalocal python3[4696]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:46:56 np0005546420.novalocal python3[4710]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:46:56 np0005546420.novalocal python3[4724]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:46:57 np0005546420.novalocal python3[4738]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:46:57 np0005546420.novalocal python3[4752]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:46:58 np0005546420.novalocal python3[4766]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:47:00 np0005546420.novalocal sudo[4780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fgciivjwpcvcqgqfzgezhfizclbkacpc ; /usr/bin/python3
Dec 05 06:47:00 np0005546420.novalocal sudo[4780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:47:00 np0005546420.novalocal python3[4782]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:47:00 np0005546420.novalocal sudo[4780]: pam_unix(sudo:session): session closed for user root
Dec 05 06:47:02 np0005546420.novalocal sudo[4828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xafmewkldcuwelhlxfhlbjvceivdlmlb ; /usr/bin/python3
Dec 05 06:47:02 np0005546420.novalocal sudo[4828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:47:02 np0005546420.novalocal python3[4830]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 06:47:02 np0005546420.novalocal sudo[4828]: pam_unix(sudo:session): session closed for user root
Dec 05 06:47:02 np0005546420.novalocal sudo[4871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjkstejjsrscojbmjqhggdqancavvflg ; /usr/bin/python3
Dec 05 06:47:02 np0005546420.novalocal sudo[4871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:47:02 np0005546420.novalocal python3[4873]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764917222.1414464-102-264587105440435/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:47:02 np0005546420.novalocal sudo[4871]: pam_unix(sudo:session): session closed for user root
Dec 05 06:47:10 np0005546420.novalocal python3[4901]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:10 np0005546420.novalocal python3[4915]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:10 np0005546420.novalocal python3[4929]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:11 np0005546420.novalocal python3[4943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:11 np0005546420.novalocal python3[4957]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:11 np0005546420.novalocal python3[4971]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:11 np0005546420.novalocal python3[4985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:12 np0005546420.novalocal python3[4999]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:12 np0005546420.novalocal python3[5013]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:12 np0005546420.novalocal python3[5027]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:13 np0005546420.novalocal python3[5041]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:13 np0005546420.novalocal python3[5055]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:13 np0005546420.novalocal python3[5069]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:13 np0005546420.novalocal python3[5083]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:14 np0005546420.novalocal python3[5097]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:14 np0005546420.novalocal python3[5111]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:14 np0005546420.novalocal python3[5125]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:14 np0005546420.novalocal python3[5139]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:15 np0005546420.novalocal python3[5153]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:15 np0005546420.novalocal python3[5167]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:15 np0005546420.novalocal python3[5181]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:17 np0005546420.novalocal python3[5195]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:17 np0005546420.novalocal python3[5209]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:18 np0005546420.novalocal python3[5223]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:18 np0005546420.novalocal python3[5237]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:18 np0005546420.novalocal python3[5251]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 06:47:19 np0005546420.novalocal sudo[5265]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nawczygnluabxabdqlddyrflfzlrafra ; /usr/bin/python3
Dec 05 06:47:19 np0005546420.novalocal sudo[5265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:47:19 np0005546420.novalocal python3[5267]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 05 06:47:19 np0005546420.novalocal systemd[1]: Starting Time & Date Service...
Dec 05 06:47:19 np0005546420.novalocal systemd[1]: Started Time & Date Service.
Dec 05 06:47:19 np0005546420.novalocal systemd-timedated[5269]: Changed time zone to 'UTC' (UTC).
Dec 05 06:47:19 np0005546420.novalocal sudo[5265]: pam_unix(sudo:session): session closed for user root
Dec 05 06:47:21 np0005546420.novalocal sudo[5286]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vqhdjeyttlodxnpnjgrktypbkqfveskl ; /usr/bin/python3
Dec 05 06:47:21 np0005546420.novalocal sudo[5286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:47:21 np0005546420.novalocal python3[5288]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:47:21 np0005546420.novalocal sudo[5286]: pam_unix(sudo:session): session closed for user root
Dec 05 06:47:22 np0005546420.novalocal python3[5334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 06:47:22 np0005546420.novalocal python3[5375]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764917242.310049-496-160800514331242/source _original_basename=tmpokbbjqoa follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:47:24 np0005546420.novalocal python3[5435]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 06:47:24 np0005546420.novalocal python3[5476]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764917243.9731426-587-182757629649361/source _original_basename=tmphygbpicc follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:47:26 np0005546420.novalocal sudo[5536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eeulhzhecshelwektiziibjzaztflbec ; /usr/bin/python3
Dec 05 06:47:26 np0005546420.novalocal sudo[5536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:47:26 np0005546420.novalocal python3[5538]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 06:47:26 np0005546420.novalocal sudo[5536]: pam_unix(sudo:session): session closed for user root
Dec 05 06:47:26 np0005546420.novalocal sudo[5579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jzlncknvzwwddqgkivbbayaenkbbmcpy ; /usr/bin/python3
Dec 05 06:47:26 np0005546420.novalocal sudo[5579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:47:26 np0005546420.novalocal python3[5581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764917246.0905392-731-91111053488151/source _original_basename=tmpv52lq3jq follow=False checksum=9afea3fa7e450257b25577284f0f4f0dfca88d28 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:47:26 np0005546420.novalocal sudo[5579]: pam_unix(sudo:session): session closed for user root
Dec 05 06:47:27 np0005546420.novalocal python3[5609]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 06:47:28 np0005546420.novalocal python3[5625]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 06:47:29 np0005546420.novalocal sudo[5673]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pynqexokdeikxouotqiyvpmhbodgyfyd ; /usr/bin/python3
Dec 05 06:47:29 np0005546420.novalocal sudo[5673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:47:29 np0005546420.novalocal python3[5675]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 06:47:29 np0005546420.novalocal sudo[5673]: pam_unix(sudo:session): session closed for user root
Dec 05 06:47:29 np0005546420.novalocal sudo[5716]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hqtqoiaecirljlciwwdnfrvuoxmpukqq ; /usr/bin/python3
Dec 05 06:47:29 np0005546420.novalocal sudo[5716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:47:29 np0005546420.novalocal python3[5718]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764917249.3151443-855-76353278523939/source _original_basename=tmp9ugjo9q7 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:47:29 np0005546420.novalocal sudo[5716]: pam_unix(sudo:session): session closed for user root
Dec 05 06:47:41 np0005546420.novalocal sudo[5747]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsldbfhrxhrbpmnotfzgielqoikrvzdk ; /usr/bin/python3
Dec 05 06:47:41 np0005546420.novalocal sudo[5747]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:47:41 np0005546420.novalocal python3[5749]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-1075-137b-000000000023-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 06:47:41 np0005546420.novalocal sudo[5747]: pam_unix(sudo:session): session closed for user root
Dec 05 06:47:49 np0005546420.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 06:47:52 np0005546420.novalocal python3[5770]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e3b-3c83-1075-137b-000000000024-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 05 06:47:54 np0005546420.novalocal python3[5788]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:48:12 np0005546420.novalocal sudo[5802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hcwzbpgsdabbbdsevqvwwzcpcuucivvw ; /usr/bin/python3
Dec 05 06:48:12 np0005546420.novalocal sudo[5802]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:48:12 np0005546420.novalocal python3[5804]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:48:12 np0005546420.novalocal sudo[5802]: pam_unix(sudo:session): session closed for user root
Dec 05 06:48:52 np0005546420.novalocal systemd[4182]: Starting Mark boot as successful...
Dec 05 06:48:52 np0005546420.novalocal systemd[4182]: Finished Mark boot as successful.
Dec 05 06:49:13 np0005546420.novalocal sshd[4191]: Received disconnect from 38.102.83.114 port 53596:11: disconnected by user
Dec 05 06:49:13 np0005546420.novalocal sshd[4191]: Disconnected from user zuul 38.102.83.114 port 53596
Dec 05 06:49:13 np0005546420.novalocal sshd[4178]: pam_unix(sshd:session): session closed for user zuul
Dec 05 06:49:13 np0005546420.novalocal systemd-logind[762]: Session 1 logged out. Waiting for processes to exit.
Dec 05 06:51:52 np0005546420.novalocal systemd[4182]: Created slice User Background Tasks Slice.
Dec 05 06:51:52 np0005546420.novalocal systemd[4182]: Starting Cleanup of User's Temporary Files and Directories...
Dec 05 06:51:52 np0005546420.novalocal systemd[4182]: Finished Cleanup of User's Temporary Files and Directories.
Dec 05 06:51:55 np0005546420.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000
Dec 05 06:51:55 np0005546420.novalocal kernel: pci 0000:00:07.0: reg 0x10: [io  0x0000-0x003f]
Dec 05 06:51:55 np0005546420.novalocal kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff]
Dec 05 06:51:55 np0005546420.novalocal kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
Dec 05 06:51:55 np0005546420.novalocal kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
Dec 05 06:51:55 np0005546420.novalocal kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref]
Dec 05 06:51:55 np0005546420.novalocal kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref]
Dec 05 06:51:55 np0005546420.novalocal kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff]
Dec 05 06:51:55 np0005546420.novalocal kernel: pci 0000:00:07.0: BAR 0: assigned [io  0x1000-0x103f]
Dec 05 06:51:55 np0005546420.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7013] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 05 06:51:55 np0005546420.novalocal systemd-udevd[5807]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7155] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7175] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7178] device (eth1): carrier: link connected
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7180] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7182] policy: auto-activating connection 'Wired connection 1' (443f69db-ae6b-342f-91cd-e05d83575e0a)
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7185] device (eth1): Activation: starting connection 'Wired connection 1' (443f69db-ae6b-342f-91cd-e05d83575e0a)
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7186] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7188] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7191] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 05 06:51:55 np0005546420.novalocal NetworkManager[790]: <info>  [1764917515.7193] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 05 06:51:56 np0005546420.novalocal sshd[5810]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:51:56 np0005546420.novalocal sshd[5810]: Accepted publickey for zuul from 38.102.83.114 port 42522 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 06:51:56 np0005546420.novalocal kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Dec 05 06:51:56 np0005546420.novalocal systemd-logind[762]: New session 3 of user zuul.
Dec 05 06:51:56 np0005546420.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 05 06:51:56 np0005546420.novalocal sshd[5810]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 06:51:56 np0005546420.novalocal python3[5827]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-be73-2a20-000000000408-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 06:52:10 np0005546420.novalocal sudo[5875]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpqijoktdohlrguymkywviggqhlnoelk ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 05 06:52:10 np0005546420.novalocal sudo[5875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:52:10 np0005546420.novalocal python3[5877]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 06:52:10 np0005546420.novalocal sudo[5875]: pam_unix(sudo:session): session closed for user root
Dec 05 06:52:10 np0005546420.novalocal sudo[5918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bufanajxncgzframebnmwmezahjsewor ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 05 06:52:10 np0005546420.novalocal sudo[5918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:52:10 np0005546420.novalocal python3[5920]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764917529.8751054-486-162757347105418/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=2c3a2d5a2ed6756bd6906c4323018279fcf48f30 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:52:10 np0005546420.novalocal sudo[5918]: pam_unix(sudo:session): session closed for user root
Dec 05 06:52:10 np0005546420.novalocal sudo[5948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhdqgcpfllettggltnowohuyxtfdfigz ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 05 06:52:10 np0005546420.novalocal sudo[5948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:52:11 np0005546420.novalocal python3[5950]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Stopping Network Manager...
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[790]: <info>  [1764917531.2172] caught SIGTERM, shutting down normally.
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[790]: <info>  [1764917531.2260] dhcp4 (eth0): canceled DHCP transaction
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[790]: <info>  [1764917531.2260] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[790]: <info>  [1764917531.2260] dhcp4 (eth0): state changed no lease
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[790]: <info>  [1764917531.2264] manager: NetworkManager state is now CONNECTING
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[790]: <info>  [1764917531.2350] dhcp4 (eth1): canceled DHCP transaction
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[790]: <info>  [1764917531.2350] dhcp4 (eth1): state changed no lease
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[790]: <info>  [1764917531.2430] exiting (success)
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Stopped Network Manager.
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: NetworkManager.service: Consumed 3.532s CPU time.
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Starting Network Manager...
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.2942] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:a8dd9a05-2244-4441-b15f-7008b65d122c)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.2943] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.2963] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Started Network Manager.
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.3040] manager[0x55e5c0fea090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Starting Hostname Service...
Dec 05 06:52:11 np0005546420.novalocal sudo[5948]: pam_unix(sudo:session): session closed for user root
Dec 05 06:52:11 np0005546420.novalocal systemd[1]: Started Hostname Service.
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.3926] hostname: hostname: using hostnamed
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.3926] hostname: static hostname changed from (none) to "np0005546420.novalocal"
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.3934] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.3941] manager[0x55e5c0fea090]: rfkill: Wi-Fi hardware radio set enabled
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.3941] manager[0x55e5c0fea090]: rfkill: WWAN hardware radio set enabled
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.3985] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.3985] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.3986] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.3986] manager: Networking is enabled by state file
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4006] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4007] settings: Loaded settings plugin: keyfile (internal)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4062] dhcp: init: Using DHCP client 'internal'
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4066] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4075] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4083] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4097] device (lo): Activation: starting connection 'lo' (99500e8b-d0d5-4734-90d8-637f5b223e08)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4106] device (eth0): carrier: link connected
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4113] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4123] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4123] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4134] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4144] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4152] device (eth1): carrier: link connected
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4158] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4167] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (443f69db-ae6b-342f-91cd-e05d83575e0a) (indicated)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4168] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4176] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4186] device (eth1): Activation: starting connection 'Wired connection 1' (443f69db-ae6b-342f-91cd-e05d83575e0a)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4214] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4219] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4222] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4226] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4231] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4233] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4237] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4241] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4247] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4251] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4264] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4267] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4305] dhcp4 (eth0): state changed new lease, address=38.102.83.241
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4309] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4392] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4394] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4402] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4405] device (lo): Activation: successful, device activated.
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4442] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4443] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4446] manager: NetworkManager state is now CONNECTED_SITE
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4448] device (eth0): Activation: successful, device activated.
Dec 05 06:52:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917531.4451] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 05 06:52:11 np0005546420.novalocal python3[6032]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-be73-2a20-00000000012b-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 06:52:21 np0005546420.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 06:52:41 np0005546420.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 06:52:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917576.8111] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:56 np0005546420.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 06:52:56 np0005546420.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 06:52:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917576.8354] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917576.8364] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Dec 05 06:52:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917576.8396] device (eth1): Activation: successful, device activated.
Dec 05 06:52:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764917576.8411] manager: startup complete
Dec 05 06:52:56 np0005546420.novalocal systemd[1]: Finished Network Manager Wait Online.
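
Note on the span above: NetworkManager walks each device through its activation state machine (unmanaged -> unavailable -> disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated). eth0 obtained its DHCP lease within milliseconds, while eth1 sat in ip-config from 06:52:11 to 06:52:56, and "Network Manager Wait Online" (which runs nm-online) only finished once the manager logged "startup complete". A minimal polling sketch of the same readiness check, assuming nmcli is on PATH; the function name and loop are illustrative, not taken from this job:

    import subprocess, time

    def wait_online(timeout=60):
        """Poll NetworkManager's overall state until it reports full connectivity."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            state = subprocess.run(
                ["nmcli", "-t", "-f", "STATE", "general"],  # prints e.g. "connected"
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            if state == "connected":  # corresponds to CONNECTED_GLOBAL above
                return True
            time.sleep(1)
        return False
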
Dec 05 06:53:06 np0005546420.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 06:53:11 np0005546420.novalocal sshd[5813]: Received disconnect from 38.102.83.114 port 42522:11: disconnected by user
Dec 05 06:53:11 np0005546420.novalocal sshd[5813]: Disconnected from user zuul 38.102.83.114 port 42522
Dec 05 06:53:11 np0005546420.novalocal sshd[5810]: pam_unix(sshd:session): session closed for user zuul
Dec 05 06:53:11 np0005546420.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 05 06:53:11 np0005546420.novalocal systemd[1]: session-3.scope: Consumed 1.453s CPU time.
Dec 05 06:53:11 np0005546420.novalocal systemd-logind[762]: Session 3 logged out. Waiting for processes to exit.
Dec 05 06:53:11 np0005546420.novalocal systemd-logind[762]: Removed session 3.
Dec 05 06:54:05 np0005546420.novalocal sshd[6050]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:54:05 np0005546420.novalocal sshd[6050]: Accepted publickey for zuul from 38.102.83.114 port 38164 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 06:54:05 np0005546420.novalocal systemd-logind[762]: New session 4 of user zuul.
Dec 05 06:54:05 np0005546420.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 05 06:54:05 np0005546420.novalocal sshd[6050]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 06:54:05 np0005546420.novalocal sudo[6099]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sdgmldcuvoenhtcbnmyqoakfvntkaika ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 05 06:54:05 np0005546420.novalocal sudo[6099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:54:05 np0005546420.novalocal python3[6101]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 06:54:05 np0005546420.novalocal sudo[6099]: pam_unix(sudo:session): session closed for user root
Dec 05 06:54:05 np0005546420.novalocal sudo[6142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qteezqyujbajcvoduwmeqzuhlafdhzga ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 05 06:54:05 np0005546420.novalocal sudo[6142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 06:54:05 np0005546420.novalocal python3[6144]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764917645.2345512-628-52710758606137/source _original_basename=tmpwevdeeap follow=False checksum=e3566e5142abb120e69cfbdd458d4460af3b26ce backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 06:54:05 np0005546420.novalocal sudo[6142]: pam_unix(sudo:session): session closed for user root
Dec 05 06:54:09 np0005546420.novalocal sshd[6050]: pam_unix(sshd:session): session closed for user zuul
Dec 05 06:54:09 np0005546420.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 05 06:54:09 np0005546420.novalocal systemd-logind[762]: Session 4 logged out. Waiting for processes to exit.
Dec 05 06:54:09 np0005546420.novalocal systemd-logind[762]: Removed session 4.
Dec 05 06:54:35 np0005546420.novalocal sshd[6159]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:54:37 np0005546420.novalocal sshd[6159]: Invalid user admin from 45.135.232.92 port 58636
Dec 05 06:54:37 np0005546420.novalocal sshd[6159]: Connection reset by invalid user admin 45.135.232.92 port 58636 [preauth]
Dec 05 06:54:37 np0005546420.novalocal sshd[6161]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:54:39 np0005546420.novalocal sshd[6161]: Connection reset by authenticating user root 45.135.232.92 port 58660 [preauth]
Dec 05 06:54:39 np0005546420.novalocal sshd[6163]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:54:41 np0005546420.novalocal sshd[6163]: Connection reset by authenticating user root 45.135.232.92 port 58676 [preauth]
Dec 05 06:54:42 np0005546420.novalocal sshd[6165]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:54:44 np0005546420.novalocal sshd[6165]: Connection reset by authenticating user root 45.135.232.92 port 58702 [preauth]
Dec 05 06:54:44 np0005546420.novalocal sshd[6167]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:54:45 np0005546420.novalocal sshd[6167]: Invalid user osmc from 45.135.232.92 port 27072
Dec 05 06:54:46 np0005546420.novalocal sshd[6167]: Connection reset by invalid user osmc 45.135.232.92 port 27072 [preauth]
Dec 05 06:56:52 np0005546420.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Dec 05 06:56:52 np0005546420.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 05 06:56:52 np0005546420.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Dec 05 06:56:52 np0005546420.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
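
The unusual mount unit name above is systemd's unit-name escaping: "/" in a path becomes "-", and characters outside roughly [a-zA-Z0-9_.] become C-style \xNN escapes, so the hyphens in systemd-tmpfiles-clean.service render as \x2d. The canonical tool is systemd-escape; a toy approximation of the path rule (leading-dot handling omitted):

    def systemd_escape_path(path: str) -> str:
        """Rough systemd path escaping: keep [alnum_.], hex-escape the rest, '/' -> '-'."""
        parts = path.strip("/").split("/")
        return "-".join(
            "".join(c if (c.isalnum() or c in "_.") else "\\x%02x" % ord(c) for c in part)
            for part in parts
        )

    # systemd_escape_path("/run/credentials/systemd-tmpfiles-clean.service") returns
    # run-credentials-systemd\x2dtmpfiles\x2dclean.service (systemd appends .mount)
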
Dec 05 06:57:59 np0005546420.novalocal sshd[6173]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:58:01 np0005546420.novalocal sshd[6173]: Connection reset by authenticating user root 45.140.17.124 port 21830 [preauth]
Dec 05 06:58:01 np0005546420.novalocal sshd[6175]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:58:03 np0005546420.novalocal sshd[6175]: Connection reset by authenticating user root 45.140.17.124 port 25764 [preauth]
Dec 05 06:58:03 np0005546420.novalocal sshd[6177]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:58:04 np0005546420.novalocal sshd[6177]: Invalid user ubuntu from 45.140.17.124 port 25780
Dec 05 06:58:04 np0005546420.novalocal sshd[6177]: Connection reset by invalid user ubuntu 45.140.17.124 port 25780 [preauth]
Dec 05 06:58:05 np0005546420.novalocal sshd[6180]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:58:07 np0005546420.novalocal sshd[6180]: Invalid user oracle from 45.140.17.124 port 25796
Dec 05 06:58:07 np0005546420.novalocal sshd[6180]: Connection reset by invalid user oracle 45.140.17.124 port 25796 [preauth]
Dec 05 06:58:07 np0005546420.novalocal sshd[6182]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 06:58:09 np0005546420.novalocal sshd[6182]: Connection reset by authenticating user root 45.140.17.124 port 25814 [preauth]
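
The sshd bursts from 45.135.232.92 and 45.140.17.124 are routine Internet-wide credential probing against common accounts (root, admin, ubuntu, oracle, osmc); every attempt dies at [preauth] because only publickey authentication succeeds on this host. A quick way to summarize this noise from a saved journal dump; the regex assumes the exact "Invalid user ... from IP" and "authenticating user ... IP" phrasings seen above:

    import re
    from collections import Counter

    PAT = re.compile(
        r"(?:Invalid user (\S+) from|authenticating user (\S+)) (\d+\.\d+\.\d+\.\d+)"
    )

    def failed_probes(log_text: str) -> Counter:
        """Count failed pre-auth probes per (source IP, attempted user)."""
        hits = Counter()
        for line in log_text.splitlines():
            m = PAT.search(line)
            if m:
                hits[(m.group(3), m.group(1) or m.group(2))] += 1
        return hits
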
Dec 05 07:01:01 np0005546420.novalocal CROND[6185]: (root) CMD (run-parts /etc/cron.hourly)
Dec 05 07:01:01 np0005546420.novalocal run-parts[6188]: (/etc/cron.hourly) starting 0anacron
Dec 05 07:01:01 np0005546420.novalocal anacron[6196]: Anacron started on 2025-12-05
Dec 05 07:01:01 np0005546420.novalocal anacron[6196]: Will run job `cron.daily' in 23 min.
Dec 05 07:01:01 np0005546420.novalocal anacron[6196]: Will run job `cron.weekly' in 43 min.
Dec 05 07:01:01 np0005546420.novalocal anacron[6196]: Will run job `cron.monthly' in 63 min.
Dec 05 07:01:01 np0005546420.novalocal anacron[6196]: Jobs will be executed sequentially
Dec 05 07:01:01 np0005546420.novalocal run-parts[6198]: (/etc/cron.hourly) finished 0anacron
Dec 05 07:01:01 np0005546420.novalocal CROND[6184]: (root) CMDEND (run-parts /etc/cron.hourly)
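
The anacron delays are worth decoding: the stock anacrontab typically gives cron.daily/weekly/monthly fixed delays of 5/25/45 minutes plus one random offset per run (RANDOM_DELAY), and the logged 23/43/63 is consistent with a single 18-minute draw added to each. The resulting start times are plain arithmetic:

    from datetime import datetime, timedelta

    start = datetime(2025, 12, 5, 7, 1, 1)  # when anacron started, per the log
    for job, minutes in [("cron.daily", 23), ("cron.weekly", 43), ("cron.monthly", 63)]:
        print(job, "->", (start + timedelta(minutes=minutes)).strftime("%H:%M"))
    # cron.daily -> 07:24, cron.weekly -> 07:44, cron.monthly -> 08:04
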
Dec 05 07:02:00 np0005546420.novalocal sshd[6201]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:02:00 np0005546420.novalocal sshd[6201]: Accepted publickey for zuul from 38.102.83.114 port 55538 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:02:00 np0005546420.novalocal systemd-logind[762]: New session 5 of user zuul.
Dec 05 07:02:00 np0005546420.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 05 07:02:00 np0005546420.novalocal sshd[6201]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 07:02:00 np0005546420.novalocal sudo[6218]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktzochmegyuufmiljhkfgnptmfafwphf ; /usr/bin/python3
Dec 05 07:02:00 np0005546420.novalocal sudo[6218]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:00 np0005546420.novalocal python3[6220]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163e3b-3c83-ac58-428a-000000001d10-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:02:00 np0005546420.novalocal sudo[6218]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:02 np0005546420.novalocal sudo[6236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ufqkgqamgrrokrkoreeeaduskickzlqe ; /usr/bin/python3
Dec 05 07:02:02 np0005546420.novalocal sudo[6236]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:02 np0005546420.novalocal python3[6238]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:02:02 np0005546420.novalocal sudo[6236]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:02 np0005546420.novalocal sudo[6252]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hosgwqniuesvxmwkkqcselmkkqsvfyvo ; /usr/bin/python3
Dec 05 07:02:02 np0005546420.novalocal sudo[6252]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:02 np0005546420.novalocal python3[6254]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:02:02 np0005546420.novalocal sudo[6252]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:02 np0005546420.novalocal sudo[6268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dsxwkigljhlviaqwcjhptjwkavixeqzf ; /usr/bin/python3
Dec 05 07:02:02 np0005546420.novalocal sudo[6268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:02 np0005546420.novalocal python3[6270]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:02:02 np0005546420.novalocal sudo[6268]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:02 np0005546420.novalocal sudo[6284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzgcbprfpvzmezvggeaiwltyhbdtcvgb ; /usr/bin/python3
Dec 05 07:02:02 np0005546420.novalocal sudo[6284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:03 np0005546420.novalocal python3[6286]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:02:03 np0005546420.novalocal sudo[6284]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:03 np0005546420.novalocal sudo[6300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhbkqbbwyvaysnzmqrlentiykhiqkjhs ; /usr/bin/python3
Dec 05 07:02:03 np0005546420.novalocal sudo[6300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:03 np0005546420.novalocal python3[6302]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:02:03 np0005546420.novalocal sudo[6300]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:04 np0005546420.novalocal sudo[6348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phpwvwinvliyabhuocwtupovuwwfolyc ; /usr/bin/python3
Dec 05 07:02:04 np0005546420.novalocal sudo[6348]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:05 np0005546420.novalocal python3[6350]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:02:05 np0005546420.novalocal sudo[6348]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:05 np0005546420.novalocal sudo[6391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vuaxmpjgcqixnnhhaaajoiqoddpxbzdc ; /usr/bin/python3
Dec 05 07:02:05 np0005546420.novalocal sudo[6391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:05 np0005546420.novalocal python3[6393]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764918124.7538025-646-39622818075073/source _original_basename=tmpmquzb1b0 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:02:05 np0005546420.novalocal sudo[6391]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:06 np0005546420.novalocal sudo[6421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmtayadhsbxohaxhahhhjjbzeonzebgf ; /usr/bin/python3
Dec 05 07:02:06 np0005546420.novalocal sudo[6421]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:06 np0005546420.novalocal python3[6423]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 07:02:07 np0005546420.novalocal systemd[1]: Reloading.
Dec 05 07:02:07 np0005546420.novalocal systemd-rc-local-generator[6440]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:02:07 np0005546420.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:02:07 np0005546420.novalocal sudo[6421]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:08 np0005546420.novalocal sudo[6467]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vokcofsobtvrmlzwodzbhxousgkzmqjt ; /usr/bin/python3
Dec 05 07:02:08 np0005546420.novalocal sudo[6467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:08 np0005546420.novalocal python3[6469]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 05 07:02:08 np0005546420.novalocal sudo[6467]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:09 np0005546420.novalocal sudo[6483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxkmntsnjjnmzmjequjymzmzyvxdnsgy ; /usr/bin/python3
Dec 05 07:02:09 np0005546420.novalocal sudo[6483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:10 np0005546420.novalocal python3[6485]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:02:10 np0005546420.novalocal sudo[6483]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:10 np0005546420.novalocal sudo[6501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hcbziumbifcnvydafefygjlidpobguvd ; /usr/bin/python3
Dec 05 07:02:10 np0005546420.novalocal sudo[6501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:10 np0005546420.novalocal python3[6503]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:02:10 np0005546420.novalocal sudo[6501]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:10 np0005546420.novalocal sudo[6519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gwketgterxxbdrwemighgzhfzqxhzlez ; /usr/bin/python3
Dec 05 07:02:10 np0005546420.novalocal sudo[6519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:10 np0005546420.novalocal python3[6521]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:02:10 np0005546420.novalocal sudo[6519]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:10 np0005546420.novalocal sudo[6537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-drenbxamgqorhyjdvjfsydpgrtsnrggn ; /usr/bin/python3
Dec 05 07:02:10 np0005546420.novalocal sudo[6537]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:02:10 np0005546420.novalocal python3[6539]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:02:10 np0005546420.novalocal sudo[6537]: pam_unix(sudo:session): session closed for user root
Dec 05 07:02:22 np0005546420.novalocal python3[6558]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e3b-3c83-ac58-428a-000000001d17-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:02:22 np0005546420.novalocal python3[6577]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
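
The 07:02 session is a cgroup-v2 I/O throttling setup: lsblk -nd -o MAJ:MIN /dev/vda resolves the root disk to 252:0 (a dynamically assigned virtio-blk major), the same riops/wiops/rbps/wbps line is written into io.max for each top-level slice (262144000 bytes/s is exactly 250 MiB/s), the values are read back for verification, and a stat on kubepods.slice/io.max checks whether a kubelet slice exists to throttle as well. A minimal sketch of the write step, using the device and limits from the log (requires root and the io controller enabled on these cgroups):

    LIMITS = "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000"

    for slice_ in ("init.scope", "machine.slice", "system.slice", "user.slice"):
        # io.max takes "MAJ:MIN key=value ..." per device; the keyword "max" lifts a limit
        with open(f"/sys/fs/cgroup/{slice_}/io.max", "w") as f:
            f.write(LIMITS + "\n")
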
Dec 05 07:02:25 np0005546420.novalocal sshd[6201]: pam_unix(sshd:session): session closed for user zuul
Dec 05 07:02:25 np0005546420.novalocal systemd-logind[762]: Session 5 logged out. Waiting for processes to exit.
Dec 05 07:02:25 np0005546420.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 05 07:02:25 np0005546420.novalocal systemd[1]: session-5.scope: Consumed 4.042s CPU time.
Dec 05 07:02:25 np0005546420.novalocal systemd-logind[762]: Removed session 5.
Dec 05 07:03:49 np0005546420.novalocal sshd[6585]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:03:49 np0005546420.novalocal sshd[6585]: Accepted publickey for zuul from 38.102.83.114 port 33928 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:03:49 np0005546420.novalocal systemd-logind[762]: New session 6 of user zuul.
Dec 05 07:03:49 np0005546420.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 05 07:03:49 np0005546420.novalocal sshd[6585]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 07:03:49 np0005546420.novalocal sudo[6602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aguuqgetowmoufypsmlurpajyvkerboa ; /usr/bin/python3
Dec 05 07:03:49 np0005546420.novalocal sudo[6602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:03:50 np0005546420.novalocal systemd[1]: Starting RHSM dbus service...
Dec 05 07:03:50 np0005546420.novalocal systemd[1]: Started RHSM dbus service.
Dec 05 07:03:50 np0005546420.novalocal rhsm-service[6609]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 05 07:03:50 np0005546420.novalocal rhsm-service[6609]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 05 07:03:50 np0005546420.novalocal rhsm-service[6609]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 05 07:03:50 np0005546420.novalocal rhsm-service[6609]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 05 07:03:52 np0005546420.novalocal rhsm-service[6609]:  INFO [subscription_manager.managerlib:90] Consumer created: np0005546420.novalocal (c9187bef-3dbe-4968-b437-f5e9b8b2ee84)
Dec 05 07:03:52 np0005546420.novalocal subscription-manager[6609]: Registered system with identity: c9187bef-3dbe-4968-b437-f5e9b8b2ee84
Dec 05 07:03:53 np0005546420.novalocal rhsm-service[6609]:  INFO [subscription_manager.entcertlib:131] certs updated:
Dec 05 07:03:53 np0005546420.novalocal rhsm-service[6609]: Total updates: 1
Dec 05 07:03:53 np0005546420.novalocal rhsm-service[6609]: Found (local) serial# []
Dec 05 07:03:53 np0005546420.novalocal rhsm-service[6609]: Expected (UEP) serial# [2099617795495645104]
Dec 05 07:03:53 np0005546420.novalocal rhsm-service[6609]: Added (new)
Dec 05 07:03:53 np0005546420.novalocal rhsm-service[6609]:   [sn:2099617795495645104 ( Content Access,) @ /etc/pki/entitlement/2099617795495645104.pem]
Dec 05 07:03:53 np0005546420.novalocal rhsm-service[6609]: Deleted (rogue):
Dec 05 07:03:53 np0005546420.novalocal rhsm-service[6609]:   <NONE>
Dec 05 07:03:53 np0005546420.novalocal subscription-manager[6609]: Added subscription for 'Content Access' contract 'None'
Dec 05 07:03:53 np0005546420.novalocal subscription-manager[6609]: Added subscription for product ' Content Access'
Dec 05 07:03:57 np0005546420.novalocal rhsm-service[6609]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 05 07:03:57 np0005546420.novalocal rhsm-service[6609]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 05 07:03:57 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:03:57 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:03:57 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:03:57 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:03:57 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:03:58 np0005546420.novalocal sudo[6602]: pam_unix(sudo:session): session closed for user root
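
The 07:03 block registers the host with subscription management: a consumer identity is created, one Content Access entitlement certificate lands under /etc/pki/entitlement, and the repeated "Installed product 479 not present in response from server" warnings mean the locally installed product certificate (479 is the Red Hat Enterprise Linux for x86_64 product ID) is not yet reflected in the server's status response, which is usually benign right after registration under Simple Content Access. Installed product IDs can be read straight off the certificate filenames:

    import glob, os

    # Product certs live in /etc/pki/product and /etc/pki/product-default;
    # the file stem is the product ID the warning above refers to.
    for pem in sorted(glob.glob("/etc/pki/product*/*.pem")):
        print(os.path.basename(pem)[:-4], "->", pem)
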
Dec 05 07:04:06 np0005546420.novalocal python3[6700]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163e3b-3c83-91be-d6a9-00000000000d-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:04:07 np0005546420.novalocal sudo[6717]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-edcfpydfsocjemcsnpidpnkchezpbuuw ; /usr/bin/python3
Dec 05 07:04:07 np0005546420.novalocal sudo[6717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:04:07 np0005546420.novalocal python3[6719]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 07:04:38 np0005546420.novalocal setsebool[6794]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 05 07:04:38 np0005546420.novalocal setsebool[6794]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 05 07:04:46 np0005546420.novalocal kernel: SELinux:  Converting 407 SID table entries...
Dec 05 07:04:46 np0005546420.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 07:04:46 np0005546420.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 05 07:04:46 np0005546420.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 07:04:46 np0005546420.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 05 07:04:46 np0005546420.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 07:04:46 np0005546420.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 07:04:46 np0005546420.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 07:04:58 np0005546420.novalocal dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=3 res=1
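
The setsebool lines during the podman install come from the container-selinux package's scriptlets enabling virt_use_nfs and virt_sandbox_use_all_caps; flipping booleans persistently rebuilds and reloads the policy, which is what produces the kernel's SID-table conversion, the policy-capability lines, and the load_policy AVC note. The current values can be checked with getsebool, e.g.:

    import subprocess

    # getsebool prints e.g. "virt_use_nfs --> on"
    for boolean in ("virt_use_nfs", "virt_sandbox_use_all_caps"):
        out = subprocess.run(["getsebool", boolean], capture_output=True, text=True)
        print(out.stdout.strip() or out.stderr.strip())
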
Dec 05 07:04:58 np0005546420.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 07:04:58 np0005546420.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 05 07:04:58 np0005546420.novalocal systemd[1]: Reloading.
Dec 05 07:04:59 np0005546420.novalocal systemd-rc-local-generator[7642]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:04:59 np0005546420.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:04:59 np0005546420.novalocal systemd[1]: Starting dnf makecache...
Dec 05 07:04:59 np0005546420.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 07:04:59 np0005546420.novalocal dnf[7804]: Updating Subscription Management repositories.
Dec 05 07:05:00 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:05:00 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:05:00 np0005546420.novalocal sudo[6717]: pam_unix(sudo:session): session closed for user root
Dec 05 07:05:01 np0005546420.novalocal dnf[7804]: Failed determining last makecache time.
Dec 05 07:05:01 np0005546420.novalocal dnf[7804]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  35 kB/s | 4.5 kB     00:00
Dec 05 07:05:01 np0005546420.novalocal dnf[7804]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   30 kB/s | 4.1 kB     00:00
Dec 05 07:05:01 np0005546420.novalocal dnf[7804]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  29 kB/s | 4.5 kB     00:00
Dec 05 07:05:01 np0005546420.novalocal dnf[7804]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   32 kB/s | 4.1 kB     00:00
Dec 05 07:05:02 np0005546420.novalocal dnf[7804]: Metadata cache created.
Dec 05 07:05:02 np0005546420.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 05 07:05:02 np0005546420.novalocal systemd[1]: Finished dnf makecache.
Dec 05 07:05:02 np0005546420.novalocal systemd[1]: dnf-makecache.service: Consumed 2.652s CPU time.
Dec 05 07:05:08 np0005546420.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 07:05:08 np0005546420.novalocal systemd[1]: Finished man-db-cache-update.service.
Dec 05 07:05:08 np0005546420.novalocal systemd[1]: man-db-cache-update.service: Consumed 11.434s CPU time.
Dec 05 07:05:08 np0005546420.novalocal systemd[1]: run-rdac380ad6f1f4c9186b2169161bb8c20.service: Deactivated successfully.
Dec 05 07:05:45 np0005546420.novalocal sshd[18376]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:05:47 np0005546420.novalocal sshd[18376]: Invalid user oracle from 91.202.233.33 port 63958
Dec 05 07:05:47 np0005546420.novalocal sshd[18376]: Connection reset by invalid user oracle 91.202.233.33 port 63958 [preauth]
Dec 05 07:05:47 np0005546420.novalocal sshd[18378]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:05:50 np0005546420.novalocal sshd[18378]: Connection reset by authenticating user root 91.202.233.33 port 63994 [preauth]
Dec 05 07:05:50 np0005546420.novalocal sshd[18380]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:05:52 np0005546420.novalocal sshd[18380]: Connection reset by authenticating user root 91.202.233.33 port 45478 [preauth]
Dec 05 07:05:53 np0005546420.novalocal sshd[18382]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:05:54 np0005546420.novalocal sudo[18397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abyikxomzdsffadrbxkeotykxalcslek ; /usr/bin/python3
Dec 05 07:05:54 np0005546420.novalocal sudo[18397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:05:54 np0005546420.novalocal systemd[1]: var-lib-containers-storage-overlay-compat1928880639-merged.mount: Deactivated successfully.
Dec 05 07:05:54 np0005546420.novalocal podman[18400]: 2025-12-05 07:05:54.545077424 +0000 UTC m=+0.096818208 system refresh
Dec 05 07:05:54 np0005546420.novalocal sudo[18397]: pam_unix(sudo:session): session closed for user root
Dec 05 07:05:55 np0005546420.novalocal sshd[18382]: Connection reset by authenticating user root 91.202.233.33 port 45494 [preauth]
Dec 05 07:05:55 np0005546420.novalocal sshd[18439]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:05:55 np0005546420.novalocal systemd[4182]: Starting D-Bus User Message Bus...
Dec 05 07:05:55 np0005546420.novalocal dbus-broker-launch[18460]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 05 07:05:55 np0005546420.novalocal dbus-broker-launch[18460]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 05 07:05:55 np0005546420.novalocal systemd[4182]: Started D-Bus User Message Bus.
Dec 05 07:05:55 np0005546420.novalocal dbus-broker-lau[18460]: Ready
Dec 05 07:05:55 np0005546420.novalocal systemd[4182]: selinux: avc:  op=load_policy lsm=selinux seqno=3 res=1
Dec 05 07:05:55 np0005546420.novalocal systemd[4182]: Created slice Slice /user.
Dec 05 07:05:55 np0005546420.novalocal systemd[4182]: podman-18443.scope: unit configures an IP firewall, but not running as root.
Dec 05 07:05:55 np0005546420.novalocal systemd[4182]: (This warning is only shown for the first unit using IP firewalling.)
Dec 05 07:05:55 np0005546420.novalocal systemd[4182]: Started podman-18443.scope.
Dec 05 07:05:55 np0005546420.novalocal systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:05:56 np0005546420.novalocal systemd[4182]: Started podman-pause-bc34188e.scope.
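
The 07:05:54-56 podman lines are the first rootless run under the zuul user: "system refresh" marks freshly initialized container storage, the systemd user instance (PID 4182) starts its own D-Bus and user slice, warns that podman's scope declares IP firewalling it cannot apply without root, and launches a podman-pause scope that keeps the rootless user namespace alive between commands. Whether podman is running rootless can be queried directly; a small sketch, assuming podman 4.x's Go-template keys:

    import subprocess

    rootless = subprocess.run(
        ["podman", "info", "--format", "{{.Host.Security.Rootless}}"],
        capture_output=True, text=True,
    ).stdout.strip()
    print("rootless:", rootless)  # "true" for an unprivileged user
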
Dec 05 07:05:57 np0005546420.novalocal sshd[18439]: Connection reset by authenticating user root 91.202.233.33 port 45524 [preauth]
Dec 05 07:05:59 np0005546420.novalocal sshd[6585]: pam_unix(sshd:session): session closed for user zuul
Dec 05 07:05:59 np0005546420.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Dec 05 07:05:59 np0005546420.novalocal systemd[1]: session-6.scope: Consumed 50.800s CPU time.
Dec 05 07:05:59 np0005546420.novalocal systemd-logind[762]: Session 6 logged out. Waiting for processes to exit.
Dec 05 07:05:59 np0005546420.novalocal systemd-logind[762]: Removed session 6.
Dec 05 07:06:15 np0005546420.novalocal sshd[18465]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:06:15 np0005546420.novalocal sshd[18466]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:06:15 np0005546420.novalocal sshd[18464]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:06:15 np0005546420.novalocal sshd[18467]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:06:15 np0005546420.novalocal sshd[18468]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:06:15 np0005546420.novalocal sshd[18465]: Unable to negotiate with 38.102.83.70 port 45808: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 05 07:06:15 np0005546420.novalocal sshd[18466]: Connection closed by 38.102.83.70 port 45776 [preauth]
Dec 05 07:06:15 np0005546420.novalocal sshd[18467]: Connection closed by 38.102.83.70 port 45784 [preauth]
Dec 05 07:06:15 np0005546420.novalocal sshd[18464]: Unable to negotiate with 38.102.83.70 port 45794: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 05 07:06:15 np0005546420.novalocal sshd[18468]: Unable to negotiate with 38.102.83.70 port 45810: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
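
The five simultaneous connections from 38.102.83.70 each offer exactly one host-key algorithm, and the types this server has no host key for fail with "no matching host key type found". That pattern matches a host-key scan (ssh-keyscan probes each key type on a separate connection), and since the same address logs in as zuul at 07:10:38 below, this is likely the Zuul executor collecting host keys rather than an attack. The same enumeration can be reproduced with ssh-keyscan; a sketch (the helper and its defaults are illustrative):

    import subprocess

    def scan_host_keys(host: str, types=("rsa", "ecdsa", "ed25519")):
        """Probe each host-key type on its own connection, as ssh-keyscan does."""
        found = {}
        for t in types:
            out = subprocess.run(  # raises subprocess.TimeoutExpired on dead hosts
                ["ssh-keyscan", "-t", t, host],
                capture_output=True, text=True, timeout=10,
            ).stdout.strip()
            found[t] = bool(out)
        return found
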
Dec 05 07:06:21 np0005546420.novalocal sshd[18474]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:06:21 np0005546420.novalocal sshd[18474]: Accepted publickey for zuul from 38.102.83.114 port 36956 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:06:21 np0005546420.novalocal systemd-logind[762]: New session 7 of user zuul.
Dec 05 07:06:21 np0005546420.novalocal systemd[1]: Started Session 7 of User zuul.
Dec 05 07:06:21 np0005546420.novalocal sshd[18474]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 07:06:21 np0005546420.novalocal python3[18491]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMItrNJD3Qo5RZ9GVEvrDsRHCNoqv/QCdFAerIbUnRZyqMIrTCHiUzK01hguMY3G31c8ICa3d4AuOJ+Y8G23vfU= zuul@np0005546412.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 07:06:22 np0005546420.novalocal sudo[18505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxrmtodsowrdwehaysgjjjyhvrsmvtkz ; /usr/bin/python3
Dec 05 07:06:22 np0005546420.novalocal sudo[18505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:06:22 np0005546420.novalocal python3[18507]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMItrNJD3Qo5RZ9GVEvrDsRHCNoqv/QCdFAerIbUnRZyqMIrTCHiUzK01hguMY3G31c8ICa3d4AuOJ+Y8G23vfU= zuul@np0005546412.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 07:06:22 np0005546420.novalocal sudo[18505]: pam_unix(sudo:session): session closed for user root
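
Both authorized_key tasks above are idempotent: ansible.posix.authorized_key reads the target user's ~/.ssh/authorized_keys, appends the key only if it is not already present, and with manage_dir=True first creates ~/.ssh with safe permissions. A reduced sketch of that behavior, assuming a plain one-key-per-line file (the real module additionally handles key options, comments, and exclusive mode):

    import os

    def add_authorized_key(home: str, key: str) -> bool:
        """Append key to authorized_keys if missing; return True when a write happened."""
        ssh_dir = os.path.join(home, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        path = os.path.join(ssh_dir, "authorized_keys")
        existing = set()
        if os.path.exists(path):
            with open(path) as f:
                existing = {line.strip() for line in f if line.strip()}
        if key.strip() in existing:
            return False  # already present: the module would report "ok"
        with open(path, "a") as f:
            f.write(key.strip() + "\n")
        os.chmod(path, 0o600)
        return True  # the module would report "changed"
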
Dec 05 07:06:23 np0005546420.novalocal sshd[18474]: pam_unix(sshd:session): session closed for user zuul
Dec 05 07:06:23 np0005546420.novalocal systemd[1]: session-7.scope: Deactivated successfully.
Dec 05 07:06:23 np0005546420.novalocal systemd-logind[762]: Session 7 logged out. Waiting for processes to exit.
Dec 05 07:06:23 np0005546420.novalocal systemd-logind[762]: Removed session 7.
Dec 05 07:08:06 np0005546420.novalocal sshd[18510]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:08:06 np0005546420.novalocal sshd[18510]: Accepted publickey for zuul from 38.102.83.114 port 36952 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:08:06 np0005546420.novalocal systemd-logind[762]: New session 8 of user zuul.
Dec 05 07:08:06 np0005546420.novalocal systemd[1]: Started Session 8 of User zuul.
Dec 05 07:08:06 np0005546420.novalocal sshd[18510]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 07:08:07 np0005546420.novalocal sudo[18527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qzgsyepvpkdxeqctlizmsecddfxxogwm ; /usr/bin/python3
Dec 05 07:08:07 np0005546420.novalocal sudo[18527]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:08:07 np0005546420.novalocal python3[18529]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 07:08:07 np0005546420.novalocal sudo[18527]: pam_unix(sudo:session): session closed for user root
Dec 05 07:08:08 np0005546420.novalocal sudo[18543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zepheucwwhozagpvkkaljeqoogwxjzjl ; /usr/bin/python3
Dec 05 07:08:08 np0005546420.novalocal sudo[18543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:08:08 np0005546420.novalocal python3[18545]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546420.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 05 07:08:08 np0005546420.novalocal sudo[18543]: pam_unix(sudo:session): session closed for user root
Dec 05 07:08:09 np0005546420.novalocal sudo[18593]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pixiykxrjgvuzbntdbrswxrotwoxivpo ; /usr/bin/python3
Dec 05 07:08:09 np0005546420.novalocal sudo[18593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:08:09 np0005546420.novalocal python3[18595]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:08:09 np0005546420.novalocal sudo[18593]: pam_unix(sudo:session): session closed for user root
Dec 05 07:08:10 np0005546420.novalocal sudo[18636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hsrbhiaiwvwakjprvarwcojvgxopwtyt ; /usr/bin/python3
Dec 05 07:08:10 np0005546420.novalocal sudo[18636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:08:10 np0005546420.novalocal python3[18638]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764918489.5917256-137-275970634097747/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=cb8438a2b38642fd8d5aa4c34b846ebc_id_rsa follow=False checksum=279822aa185303a1622fd64f0c6305b91ca04c54 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:08:10 np0005546420.novalocal sudo[18636]: pam_unix(sudo:session): session closed for user root
Dec 05 07:08:11 np0005546420.novalocal sudo[18698]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfpkbqhwhyhgqafmsomehkgixeqirkgo ; /usr/bin/python3
Dec 05 07:08:11 np0005546420.novalocal sudo[18698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:08:11 np0005546420.novalocal python3[18700]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:08:11 np0005546420.novalocal sudo[18698]: pam_unix(sudo:session): session closed for user root
Dec 05 07:08:11 np0005546420.novalocal sudo[18741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iwcqfwujqxdmmcwbuenzmjkuibrfyjxx ; /usr/bin/python3
Dec 05 07:08:11 np0005546420.novalocal sudo[18741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:08:11 np0005546420.novalocal python3[18743]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764918491.2377255-225-265946698327501/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=cb8438a2b38642fd8d5aa4c34b846ebc_id_rsa.pub follow=False checksum=a810c4b730db53312f48868578c3039315af7db6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:08:11 np0005546420.novalocal sudo[18741]: pam_unix(sudo:session): session closed for user root
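
One detail in the two copy tasks above: mode=384 and mode=420 look odd but are simply decimal renderings of 0600 and 0644. When a playbook supplies mode as an unquoted number, YAML parses it as an integer and Ansible logs the decimal value, which is why the usual advice is to quote modes as strings ("0600"). The conversion is direct:

    # Decimal mode values as logged, shown in conventional octal notation
    assert oct(384) == "0o600"  # /root/.ssh/id_rsa     (owner read/write only)
    assert oct(420) == "0o644"  # /root/.ssh/id_rsa.pub (world-readable)
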
Dec 05 07:08:13 np0005546420.novalocal sudo[18771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtjyxcbklhpjhbcrzijsdmtywtupqcuc ; /usr/bin/python3
Dec 05 07:08:13 np0005546420.novalocal sudo[18771]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:08:14 np0005546420.novalocal python3[18773]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:08:14 np0005546420.novalocal sudo[18771]: pam_unix(sudo:session): session closed for user root
Dec 05 07:08:15 np0005546420.novalocal python3[18819]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:08:15 np0005546420.novalocal python3[18835]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpzb99pgxw recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:08:16 np0005546420.novalocal python3[18895]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:08:16 np0005546420.novalocal python3[18911]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpzx1wcj9x recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:08:18 np0005546420.novalocal python3[18971]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:08:18 np0005546420.novalocal python3[18987]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpoi_rw38x recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:08:19 np0005546420.novalocal sshd[18510]: pam_unix(sshd:session): session closed for user zuul
Dec 05 07:08:19 np0005546420.novalocal systemd[1]: session-8.scope: Deactivated successfully.
Dec 05 07:08:19 np0005546420.novalocal systemd[1]: session-8.scope: Consumed 3.697s CPU time.
Dec 05 07:08:19 np0005546420.novalocal systemd-logind[762]: Session 8 logged out. Waiting for processes to exit.
Dec 05 07:08:19 np0005546420.novalocal systemd-logind[762]: Removed session 8.
Dec 05 07:10:38 np0005546420.novalocal sshd[19003]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:10:38 np0005546420.novalocal sshd[19003]: Accepted publickey for zuul from 38.102.83.70 port 55026 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:10:38 np0005546420.novalocal systemd-logind[762]: New session 9 of user zuul.
Dec 05 07:10:38 np0005546420.novalocal systemd[1]: Started Session 9 of User zuul.
Dec 05 07:10:38 np0005546420.novalocal sshd[19003]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 07:10:39 np0005546420.novalocal python3[19049]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:15:39 np0005546420.novalocal sshd[19006]: Received disconnect from 38.102.83.70 port 55026:11: disconnected by user
Dec 05 07:15:39 np0005546420.novalocal sshd[19006]: Disconnected from user zuul 38.102.83.70 port 55026
Dec 05 07:15:39 np0005546420.novalocal sshd[19003]: pam_unix(sshd:session): session closed for user zuul
Dec 05 07:15:39 np0005546420.novalocal systemd[1]: session-9.scope: Deactivated successfully.
Dec 05 07:15:39 np0005546420.novalocal systemd-logind[762]: Session 9 logged out. Waiting for processes to exit.
Dec 05 07:15:39 np0005546420.novalocal systemd-logind[762]: Removed session 9.
Dec 05 07:18:49 np0005546420.novalocal sshd[19052]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:18:52 np0005546420.novalocal sshd[19052]: Invalid user install from 45.135.232.92 port 28126
Dec 05 07:18:53 np0005546420.novalocal sshd[19052]: Connection reset by invalid user install 45.135.232.92 port 28126 [preauth]
Dec 05 07:18:53 np0005546420.novalocal sshd[19054]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:18:55 np0005546420.novalocal sshd[19054]: Connection reset by authenticating user root 45.135.232.92 port 28136 [preauth]
Dec 05 07:18:55 np0005546420.novalocal sshd[19056]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:18:56 np0005546420.novalocal sshd[19056]: Invalid user admin from 45.135.232.92 port 43206
Dec 05 07:18:57 np0005546420.novalocal sshd[19056]: Connection reset by invalid user admin 45.135.232.92 port 43206 [preauth]
Dec 05 07:18:57 np0005546420.novalocal sshd[19059]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:18:59 np0005546420.novalocal sshd[19059]: Connection reset by authenticating user root 45.135.232.92 port 43220 [preauth]
Dec 05 07:18:59 np0005546420.novalocal sshd[19061]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:19:01 np0005546420.novalocal sshd[19061]: Invalid user admin from 45.135.232.92 port 43244
Dec 05 07:19:02 np0005546420.novalocal sshd[19061]: Connection reset by invalid user admin 45.135.232.92 port 43244 [preauth]
Dec 05 07:21:38 np0005546420.novalocal sshd[19064]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:21:40 np0005546420.novalocal sshd[19064]: Connection reset by authenticating user root 45.140.17.124 port 64974 [preauth]
Dec 05 07:21:41 np0005546420.novalocal sshd[19066]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:21:42 np0005546420.novalocal sshd[19066]: Connection reset by authenticating user root 45.140.17.124 port 64984 [preauth]
Dec 05 07:21:42 np0005546420.novalocal sshd[19069]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:21:44 np0005546420.novalocal sshd[19069]: Connection reset by authenticating user root 45.140.17.124 port 34476 [preauth]
Dec 05 07:21:45 np0005546420.novalocal sshd[19071]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:21:46 np0005546420.novalocal sshd[19071]: Connection reset by authenticating user root 45.140.17.124 port 34480 [preauth]
Dec 05 07:21:47 np0005546420.novalocal sshd[19073]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:21:48 np0005546420.novalocal sshd[19073]: Connection reset by authenticating user root 45.140.17.124 port 34492 [preauth]
Dec 05 07:22:44 np0005546420.novalocal sshd[19075]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:22:45 np0005546420.novalocal sshd[19075]: error: kex_exchange_identification: Connection closed by remote host
Dec 05 07:22:45 np0005546420.novalocal sshd[19075]: Connection closed by 124.163.255.210 port 2569
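Two unrelated things are interleaved above. The recurring "ssh-rsa algorithm is disabled" notice is RHEL 9's DEFAULT system-wide crypto policy rejecting SHA-1 RSA signatures, and the bursts of "Invalid user"/"Connection reset ... [preauth]" from 45.135.232.92, 45.140.17.124 and 124.163.255.210 are routine internet-wide SSH scanning that never got past pre-auth. A quick triage sketch (standard commands; the time window is taken from the log and assumes the same day):

    # confirm which system-wide crypto policy is active (DEFAULT disables ssh-rsa)
    update-crypto-policies --show
    # count pre-auth failures per source address in the window above
    journalctl _COMM=sshd --since '07:18:00' --until '07:23:00' \
      | grep -E 'Invalid user|Connection reset' \
      | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort | uniq -c | sort -rn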
Dec 05 07:23:22 np0005546420.novalocal sshd[19077]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:23:22 np0005546420.novalocal sshd[19077]: Accepted publickey for zuul from 38.102.83.114 port 45968 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:23:22 np0005546420.novalocal systemd-logind[762]: New session 10 of user zuul.
Dec 05 07:23:22 np0005546420.novalocal systemd[1]: Started Session 10 of User zuul.
Dec 05 07:23:22 np0005546420.novalocal sshd[19077]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 07:23:22 np0005546420.novalocal python3[19094]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163e3b-3c83-6324-9d4a-00000000000c-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:23:24 np0005546420.novalocal sudo[19112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxwzizvipyqlkopuykaltlymlxabagsf ; /usr/bin/python3
Dec 05 07:23:24 np0005546420.novalocal sudo[19112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:23:24 np0005546420.novalocal python3[19114]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163e3b-3c83-6324-9d4a-00000000000d-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:23:26 np0005546420.novalocal sudo[19112]: pam_unix(sudo:session): session closed for user root
Dec 05 07:23:29 np0005546420.novalocal sudo[19131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvznkotezcrjfbegcdrbxfytkinfihxh ; /usr/bin/python3
Dec 05 07:23:29 np0005546420.novalocal sudo[19131]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:23:29 np0005546420.novalocal python3[19133]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Dec 05 07:23:32 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:23:57 np0005546420.novalocal sudo[19131]: pam_unix(sudo:session): session closed for user root
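community.general.rhsm_repository is a thin wrapper around subscription-manager, so the task above is equivalent to enabling the repo by hand. The recurring rhsm-service warning about "Installed product 479" typically means an installed product certificate (ID 479) is not covered by the entitlements the server returned; it is noisy but non-fatal for repo management. Manual equivalent:

    # enable the same repo directly and confirm the enabled set
    subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms
    subscription-manager repos --list-enabled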
Dec 05 07:24:01 np0005546420.novalocal anacron[6196]: Job `cron.daily' started
Dec 05 07:24:01 np0005546420.novalocal anacron[6196]: Job `cron.daily' terminated
Dec 05 07:24:26 np0005546420.novalocal sudo[19290]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpqvkfwcpxybpewyrktdaqhusnbqmhlf ; /usr/bin/python3
Dec 05 07:24:26 np0005546420.novalocal sudo[19290]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:24:26 np0005546420.novalocal python3[19292]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Dec 05 07:24:29 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:24:31 np0005546420.novalocal sudo[19290]: pam_unix(sudo:session): session closed for user root
Dec 05 07:24:37 np0005546420.novalocal sudo[19430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwvfkibrpslobdgcjhikppjypxrtiikw ; /usr/bin/python3
Dec 05 07:24:37 np0005546420.novalocal sudo[19430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:24:38 np0005546420.novalocal python3[19432]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Dec 05 07:24:41 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:24:41 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:24:47 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:24:47 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:24:54 np0005546420.novalocal sudo[19430]: pam_unix(sudo:session): session closed for user root
Dec 05 07:25:09 np0005546420.novalocal sudo[19825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqudwrkalaotheulrapceefvhofukfjn ; /usr/bin/python3
Dec 05 07:25:09 np0005546420.novalocal sudo[19825]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:25:09 np0005546420.novalocal python3[19827]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 05 07:25:12 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:25:12 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:25:18 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:25:18 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:25:25 np0005546420.novalocal sudo[19825]: pam_unix(sudo:session): session closed for user root
Dec 05 07:25:40 np0005546420.novalocal sudo[20221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymhqlbjhqnxjgouwqvoepiugpecyydam ; /usr/bin/python3
Dec 05 07:25:40 np0005546420.novalocal sudo[20221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:25:40 np0005546420.novalocal python3[20223]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 05 07:25:43 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:25:43 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:25:48 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:25:49 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:25:57 np0005546420.novalocal sudo[20221]: pam_unix(sudo:session): session closed for user root
Dec 05 07:26:12 np0005546420.novalocal sudo[20618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zweqdajpndbatovfhsifqiiortpyujck ; /usr/bin/python3
Dec 05 07:26:12 np0005546420.novalocal sudo[20618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:26:14 np0005546420.novalocal python3[20620]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-000000000013-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:26:16 np0005546420.novalocal sudo[20618]: pam_unix(sudo:session): session closed for user root
Dec 05 07:26:18 np0005546420.novalocal sudo[20637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ejgplwyfazpzdfvmhoybsrscwlyvaoht ; /usr/bin/python3
Dec 05 07:26:18 np0005546420.novalocal sudo[20637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:26:18 np0005546420.novalocal python3[20639]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
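The ansible.legacy.dnf invocation above (state=present, update_cache=True) resolves to an ordinary dnf transaction over the repos just enabled. A minimal command-line equivalent:

    # refresh metadata, then install the three packages the play asked for
    dnf -y makecache
    dnf -y install openvswitch os-net-config ansible-core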
Dec 05 07:26:39 np0005546420.novalocal kernel: SELinux:  Converting 490 SID table entries...
Dec 05 07:26:39 np0005546420.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 07:26:39 np0005546420.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 05 07:26:39 np0005546420.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 07:26:39 np0005546420.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 05 07:26:39 np0005546420.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 07:26:39 np0005546420.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 07:26:39 np0005546420.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 07:26:40 np0005546420.novalocal groupadd[20736]: group added to /etc/group: name=unbound, GID=987
Dec 05 07:26:40 np0005546420.novalocal groupadd[20736]: group added to /etc/gshadow: name=unbound
Dec 05 07:26:40 np0005546420.novalocal groupadd[20736]: new group: name=unbound, GID=987
Dec 05 07:26:40 np0005546420.novalocal useradd[20743]: new user: name=unbound, UID=987, GID=987, home=/etc/unbound, shell=/sbin/nologin, from=none
Dec 05 07:26:40 np0005546420.novalocal dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Dec 05 07:26:40 np0005546420.novalocal systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 05 07:26:40 np0005546420.novalocal groupadd[20756]: group added to /etc/group: name=openvswitch, GID=986
Dec 05 07:26:40 np0005546420.novalocal groupadd[20756]: group added to /etc/gshadow: name=openvswitch
Dec 05 07:26:40 np0005546420.novalocal groupadd[20756]: new group: name=openvswitch, GID=986
Dec 05 07:26:40 np0005546420.novalocal useradd[20763]: new user: name=openvswitch, UID=986, GID=986, home=/, shell=/sbin/nologin, from=none
Dec 05 07:26:40 np0005546420.novalocal groupadd[20771]: group added to /etc/group: name=hugetlbfs, GID=985
Dec 05 07:26:40 np0005546420.novalocal groupadd[20771]: group added to /etc/gshadow: name=hugetlbfs
Dec 05 07:26:40 np0005546420.novalocal groupadd[20771]: new group: name=hugetlbfs, GID=985
Dec 05 07:26:40 np0005546420.novalocal usermod[20779]: add 'openvswitch' to group 'hugetlbfs'
Dec 05 07:26:40 np0005546420.novalocal usermod[20779]: add 'openvswitch' to shadow group 'hugetlbfs'
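The groupadd/useradd/usermod lines are RPM scriptlets from the packages just installed: unbound (whose DNSSEC trust-anchor timer starts below) and openvswitch, which also creates the hugetlbfs group and adds its own user to it for hugepage access. A quick check that the accounts landed as logged:

    # verify the scriptlet-created accounts and group membership
    getent passwd unbound openvswitch
    getent group hugetlbfs
    id openvswitch    # expect groups to include openvswitch and hugetlbfs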
Dec 05 07:26:43 np0005546420.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 07:26:43 np0005546420.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 05 07:26:43 np0005546420.novalocal systemd[1]: Reloading.
Dec 05 07:26:43 np0005546420.novalocal systemd-sysv-generator[21303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:26:43 np0005546420.novalocal systemd-rc-local-generator[21300]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:26:44 np0005546420.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:26:44 np0005546420.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 07:26:45 np0005546420.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 07:26:45 np0005546420.novalocal systemd[1]: Finished man-db-cache-update.service.
Dec 05 07:26:45 np0005546420.novalocal systemd[1]: run-r9b53915d748a4dd087d1880648c3fbaf.service: Deactivated successfully.
Dec 05 07:26:45 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:26:45 np0005546420.novalocal sudo[20637]: pam_unix(sudo:session): session closed for user root
Dec 05 07:26:45 np0005546420.novalocal rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:27:12 np0005546420.novalocal sudo[21842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbnkdkfgjnoscvbndtfdffdfwsrroyhm ; /usr/bin/python3
Dec 05 07:27:12 np0005546420.novalocal sudo[21842]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:12 np0005546420.novalocal python3[21844]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-000000000015-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:27:31 np0005546420.novalocal sudo[21842]: pam_unix(sudo:session): session closed for user root
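Because ansible-galaxy ran under sudo, the ansible.posix collection is installed into root's default collection path rather than zuul's. A sketch to confirm where it landed:

    # list the installed collection in the same sudo context as above
    sudo ansible-galaxy collection list | grep -i ansible.posix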
Dec 05 07:27:45 np0005546420.novalocal sudo[21862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-osxrgkalavzaehpdqqsojurlglkqymss ; /usr/bin/python3
Dec 05 07:27:45 np0005546420.novalocal sudo[21862]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:45 np0005546420.novalocal python3[21864]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:27:45 np0005546420.novalocal sudo[21862]: pam_unix(sudo:session): session closed for user root
Dec 05 07:27:46 np0005546420.novalocal sudo[21910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-miscodurxnfxrdrgpbvqucarhnndhjsj ; /usr/bin/python3
Dec 05 07:27:46 np0005546420.novalocal sudo[21910]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:46 np0005546420.novalocal python3[21912]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:27:46 np0005546420.novalocal sudo[21910]: pam_unix(sudo:session): session closed for user root
Dec 05 07:27:46 np0005546420.novalocal sudo[21953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpdntgvinlvjroowtrpeneveiwsognyk ; /usr/bin/python3
Dec 05 07:27:46 np0005546420.novalocal sudo[21953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:46 np0005546420.novalocal python3[21955]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764919665.8958836-291-135121771337828/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=9333f42ac4b9baf349a5c32f7bcba3335b5912e0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:27:46 np0005546420.novalocal sudo[21953]: pam_unix(sudo:session): session closed for user root
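The copy task above renders overcloud_net_config.j2 into /etc/os-net-config/tripleo_config.yaml. Its contents are not logged, but the ovs-vsctl calls that follow (add-br br-ex, eth1 as a member, internal VLAN ports tagged 20-23 and 44) let us infer its rough shape. The heredoc below is an illustrative reconstruction only; every value not confirmed later in the log is an assumption:

    # NOT the actual file -- a hedged sketch of the rendered config
    cat <<'EOF'
    network_config:
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false            # assumption
        members:
          - type: interface
            name: eth1
          - type: vlan
            vlan_id: 20            # vlan21/22/23/44 follow the same pattern
    EOF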
Dec 05 07:27:47 np0005546420.novalocal sudo[21983]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxordnfqzlnojywfplgphnolzuidkfdp ; /usr/bin/python3
Dec 05 07:27:47 np0005546420.novalocal sudo[21983]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:48 np0005546420.novalocal python3[21985]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network  state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 05 07:27:48 np0005546420.novalocal sudo[21983]: pam_unix(sudo:session): session closed for user root
Dec 05 07:27:48 np0005546420.novalocal systemd-journald[619]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Dec 05 07:27:48 np0005546420.novalocal systemd-journald[619]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 05 07:27:48 np0005546420.novalocal rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 07:27:48 np0005546420.novalocal rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
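The rotation above is limit-driven, not an error: the volatile journal's field hash table filled to ~89%, journald rotated /run/log/journal/.../system.journal, and rsyslog's imjournal re-opened the files. To inspect journal pressure on a box like this:

    # show journal disk usage, and rotate on demand if needed
    journalctl --disk-usage
    sudo journalctl --rotate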
Dec 05 07:27:48 np0005546420.novalocal sudo[22004]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dfmansegohsquflccazmszdbuwiwqcnh ; /usr/bin/python3
Dec 05 07:27:48 np0005546420.novalocal sudo[22004]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:48 np0005546420.novalocal python3[22006]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 05 07:27:48 np0005546420.novalocal sudo[22004]: pam_unix(sudo:session): session closed for user root
Dec 05 07:27:48 np0005546420.novalocal sudo[22024]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lptwfvslifmymjffgpewlrkedvqoflyy ; /usr/bin/python3
Dec 05 07:27:48 np0005546420.novalocal sudo[22024]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:48 np0005546420.novalocal python3[22026]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 05 07:27:48 np0005546420.novalocal sudo[22024]: pam_unix(sudo:session): session closed for user root
Dec 05 07:27:48 np0005546420.novalocal sudo[22044]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ufapkihnjtmmzzgjhgnhgmzavixoaidh ; /usr/bin/python3
Dec 05 07:27:48 np0005546420.novalocal sudo[22044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:48 np0005546420.novalocal python3[22046]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 05 07:27:49 np0005546420.novalocal sudo[22044]: pam_unix(sudo:session): session closed for user root
Dec 05 07:27:49 np0005546420.novalocal sudo[22064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tlrknwwvlofeutotlbujoqbjnpqujzli ; /usr/bin/python3
Dec 05 07:27:49 np0005546420.novalocal sudo[22064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:49 np0005546420.novalocal python3[22066]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 05 07:27:49 np0005546420.novalocal sudo[22064]: pam_unix(sudo:session): session closed for user root
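The five nmcli tasks above all run with state=absent: they delete the CI-provisioning NetworkManager profiles (ci-private-network and its -20/-21/-22/-23 variants) before os-net-config takes over the interfaces; the long parameter dumps are just the module echoing every unset option. Manual equivalent:

    # list and delete the leftover CI profiles by name prefix
    nmcli -g NAME connection show | grep '^ci-private-network' \
      | xargs -r -n1 nmcli connection delete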
Dec 05 07:27:51 np0005546420.novalocal sudo[22084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exfarwhslsdsetoceigsaagxyoajkltk ; /usr/bin/python3
Dec 05 07:27:51 np0005546420.novalocal sudo[22084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:51 np0005546420.novalocal python3[22086]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 07:27:51 np0005546420.novalocal systemd[1]: Starting LSB: Bring up/down networking...
Dec 05 07:27:51 np0005546420.novalocal network[22089]: WARN      : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 07:27:51 np0005546420.novalocal network[22100]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 07:27:51 np0005546420.novalocal network[22089]: WARN      : [network] 'network-scripts' will be removed from distribution in near future.
Dec 05 07:27:51 np0005546420.novalocal network[22101]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:27:51 np0005546420.novalocal network[22089]: WARN      : [network] It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 07:27:51 np0005546420.novalocal network[22102]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 07:27:51 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919671.6268] audit: op="connections-reload" pid=22130 uid=0 result="success"
Dec 05 07:27:51 np0005546420.novalocal network[22089]: Bringing up loopback interface:  [  OK  ]
Dec 05 07:27:51 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919671.8430] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22218 uid=0 result="success"
Dec 05 07:27:51 np0005546420.novalocal network[22089]: Bringing up interface eth0:  [  OK  ]
Dec 05 07:27:51 np0005546420.novalocal systemd[1]: Started LSB: Bring up/down networking.
Dec 05 07:27:51 np0005546420.novalocal sudo[22084]: pam_unix(sudo:session): session closed for user root
Dec 05 07:27:52 np0005546420.novalocal sudo[22258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjgonsqvncjjmyhumljdpvvnhpuyngss ; /usr/bin/python3
Dec 05 07:27:52 np0005546420.novalocal sudo[22258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:52 np0005546420.novalocal python3[22260]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 07:27:52 np0005546420.novalocal systemd[1]: Starting Open vSwitch Database Unit...
Dec 05 07:27:52 np0005546420.novalocal chown[22264]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 05 07:27:52 np0005546420.novalocal ovs-ctl[22269]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 05 07:27:52 np0005546420.novalocal ovs-ctl[22269]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 05 07:27:52 np0005546420.novalocal ovs-ctl[22269]: Starting ovsdb-server [  OK  ]
Dec 05 07:27:52 np0005546420.novalocal ovs-vsctl[22319]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 05 07:27:52 np0005546420.novalocal ovs-vsctl[22339]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"c2157608-8f70-44ef-883c-3db22f367c76\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Dec 05 07:27:52 np0005546420.novalocal ovs-ctl[22269]: Configuring Open vSwitch system IDs [  OK  ]
Dec 05 07:27:52 np0005546420.novalocal ovs-ctl[22269]: Enabling remote OVSDB managers [  OK  ]
Dec 05 07:27:52 np0005546420.novalocal systemd[1]: Started Open vSwitch Database Unit.
Dec 05 07:27:52 np0005546420.novalocal ovs-vsctl[22345]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005546420.novalocal
Dec 05 07:27:52 np0005546420.novalocal systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 05 07:27:52 np0005546420.novalocal systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 05 07:27:52 np0005546420.novalocal systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 05 07:27:52 np0005546420.novalocal kernel: openvswitch: Open vSwitch switching datapath
Dec 05 07:27:52 np0005546420.novalocal ovs-ctl[22389]: Inserting openvswitch module [  OK  ]
Dec 05 07:27:52 np0005546420.novalocal ovs-ctl[22358]: Starting ovs-vswitchd [  OK  ]
Dec 05 07:27:52 np0005546420.novalocal ovs-vsctl[22408]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005546420.novalocal
Dec 05 07:27:52 np0005546420.novalocal ovs-ctl[22358]: Enabling remote OVSDB managers [  OK  ]
Dec 05 07:27:52 np0005546420.novalocal systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 05 07:27:52 np0005546420.novalocal systemd[1]: Starting Open vSwitch...
Dec 05 07:27:52 np0005546420.novalocal systemd[1]: Finished Open vSwitch.
Dec 05 07:27:52 np0005546420.novalocal sudo[22258]: pam_unix(sudo:session): session closed for user root
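This is the first start of Open vSwitch on the node: ovsdb-server creates an empty /etc/openvswitch/conf.db, the openvswitch kernel module is inserted, and ovs-vswitchd comes up (the chown complaint about /run/openvswitch is a benign first-start race before ovs-ctl creates the directory). To verify the daemons and the identity recorded in the database:

    # confirm both OVS daemons and the database-side metadata set above
    systemctl --no-pager status ovsdb-server ovs-vswitchd
    ovs-vsctl get Open_vSwitch . ovs_version external_ids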
Dec 05 07:27:55 np0005546420.novalocal sudo[22424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztyrseuhwufdalzkagcowinbgqxpduzv ; /usr/bin/python3
Dec 05 07:27:55 np0005546420.novalocal sudo[22424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:27:55 np0005546420.novalocal python3[22426]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-00000000001a-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:27:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919676.4454] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22584 uid=0 result="success"
Dec 05 07:27:56 np0005546420.novalocal ifup[22585]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:27:56 np0005546420.novalocal ifup[22586]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:27:56 np0005546420.novalocal ifup[22587]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:27:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919676.4820] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22593 uid=0 result="success"
Dec 05 07:27:56 np0005546420.novalocal ovs-vsctl[22595]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:67:6b:07 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Dec 05 07:27:56 np0005546420.novalocal kernel: device ovs-system entered promiscuous mode
Dec 05 07:27:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919676.5150] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Dec 05 07:27:56 np0005546420.novalocal kernel: Timeout policy base is empty
Dec 05 07:27:56 np0005546420.novalocal kernel: Failed to associated timeout policy `ovs_test_tp'
Dec 05 07:27:56 np0005546420.novalocal systemd-udevd[22597]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 07:27:56 np0005546420.novalocal kernel: device br-ex entered promiscuous mode
Dec 05 07:27:56 np0005546420.novalocal systemd-udevd[22612]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 07:27:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919676.5625] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Dec 05 07:27:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919676.5951] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22623 uid=0 result="success"
Dec 05 07:27:56 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919676.6193] device (br-ex): carrier: link connected
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.6755] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22652 uid=0 result="success"
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.7240] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22667 uid=0 result="success"
Dec 05 07:27:59 np0005546420.novalocal NET[22692]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.8228] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.8319] dhcp4 (eth1): canceled DHCP transaction
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.8319] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.8319] dhcp4 (eth1): state changed no lease
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.8350] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22701 uid=0 result="success"
Dec 05 07:27:59 np0005546420.novalocal ifup[22702]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:27:59 np0005546420.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 07:27:59 np0005546420.novalocal ifup[22703]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:27:59 np0005546420.novalocal ifup[22705]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:27:59 np0005546420.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.8767] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22717 uid=0 result="success"
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.9248] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22729 uid=0 result="success"
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.9318] device (eth1): carrier: link connected
Dec 05 07:27:59 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919679.9547] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22738 uid=0 result="success"
Dec 05 07:27:59 np0005546420.novalocal ipv6_wait_tentative[22750]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Dec 05 07:28:00 np0005546420.novalocal ipv6_wait_tentative[22755]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Dec 05 07:28:02 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919682.0296] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22764 uid=0 result="success"
Dec 05 07:28:02 np0005546420.novalocal ovs-vsctl[22779]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Dec 05 07:28:02 np0005546420.novalocal kernel: device eth1 entered promiscuous mode
Dec 05 07:28:02 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919682.1038] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22787 uid=0 result="success"
Dec 05 07:28:02 np0005546420.novalocal ifup[22788]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:02 np0005546420.novalocal ifup[22789]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:02 np0005546420.novalocal ifup[22790]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:02 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919682.1370] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22796 uid=0 result="success"
Dec 05 07:28:02 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919682.1833] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22806 uid=0 result="success"
Dec 05 07:28:02 np0005546420.novalocal ifup[22807]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:02 np0005546420.novalocal ifup[22808]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:02 np0005546420.novalocal ifup[22809]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:02 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919682.2194] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22815 uid=0 result="success"
Dec 05 07:28:02 np0005546420.novalocal ovs-vsctl[22818]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Dec 05 07:28:02 np0005546420.novalocal kernel: device vlan20 entered promiscuous mode
Dec 05 07:28:02 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919682.2602] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Dec 05 07:28:02 np0005546420.novalocal systemd-udevd[22820]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 07:28:02 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919682.2859] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22829 uid=0 result="success"
Dec 05 07:28:02 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919682.3058] device (vlan20): carrier: link connected
Dec 05 07:28:05 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919685.3577] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22858 uid=0 result="success"
Dec 05 07:28:05 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919685.4043] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22873 uid=0 result="success"
Dec 05 07:28:05 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919685.4661] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22894 uid=0 result="success"
Dec 05 07:28:05 np0005546420.novalocal ifup[22895]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:05 np0005546420.novalocal ifup[22896]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:05 np0005546420.novalocal ifup[22897]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:05 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919685.4986] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22903 uid=0 result="success"
Dec 05 07:28:05 np0005546420.novalocal ovs-vsctl[22906]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Dec 05 07:28:05 np0005546420.novalocal kernel: device vlan22 entered promiscuous mode
Dec 05 07:28:05 np0005546420.novalocal systemd-udevd[22908]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 07:28:05 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919685.5390] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Dec 05 07:28:05 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919685.5645] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22918 uid=0 result="success"
Dec 05 07:28:05 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919685.5859] device (vlan22): carrier: link connected
Dec 05 07:28:08 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919688.6352] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22948 uid=0 result="success"
Dec 05 07:28:08 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919688.6800] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22963 uid=0 result="success"
Dec 05 07:28:08 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919688.7396] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22984 uid=0 result="success"
Dec 05 07:28:08 np0005546420.novalocal ifup[22985]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:08 np0005546420.novalocal ifup[22986]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:08 np0005546420.novalocal ifup[22987]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:08 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919688.7737] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22993 uid=0 result="success"
Dec 05 07:28:08 np0005546420.novalocal ovs-vsctl[22996]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Dec 05 07:28:08 np0005546420.novalocal kernel: device vlan44 entered promiscuous mode
Dec 05 07:28:08 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919688.8559] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Dec 05 07:28:08 np0005546420.novalocal systemd-udevd[22998]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 07:28:08 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919688.8875] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23008 uid=0 result="success"
Dec 05 07:28:08 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919688.9094] device (vlan44): carrier: link connected
Dec 05 07:28:09 np0005546420.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 07:28:11 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919691.9784] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23038 uid=0 result="success"
Dec 05 07:28:12 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919692.0275] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23053 uid=0 result="success"
Dec 05 07:28:12 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919692.0933] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23074 uid=0 result="success"
Dec 05 07:28:12 np0005546420.novalocal ifup[23075]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:12 np0005546420.novalocal ifup[23076]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:12 np0005546420.novalocal ifup[23077]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:12 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919692.1254] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23083 uid=0 result="success"
Dec 05 07:28:12 np0005546420.novalocal ovs-vsctl[23086]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Dec 05 07:28:12 np0005546420.novalocal kernel: device vlan23 entered promiscuous mode
Dec 05 07:28:12 np0005546420.novalocal systemd-udevd[23088]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 07:28:12 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919692.1614] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Dec 05 07:28:12 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919692.1878] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23098 uid=0 result="success"
Dec 05 07:28:12 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919692.2111] device (vlan23): carrier: link connected
Dec 05 07:28:15 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919695.2774] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23128 uid=0 result="success"
Dec 05 07:28:15 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919695.3257] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23143 uid=0 result="success"
Dec 05 07:28:15 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919695.3871] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23164 uid=0 result="success"
Dec 05 07:28:15 np0005546420.novalocal ifup[23165]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:15 np0005546420.novalocal ifup[23166]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:15 np0005546420.novalocal ifup[23167]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:15 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919695.4194] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23173 uid=0 result="success"
Dec 05 07:28:15 np0005546420.novalocal ovs-vsctl[23176]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Dec 05 07:28:15 np0005546420.novalocal kernel: device vlan21 entered promiscuous mode
Dec 05 07:28:15 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919695.4604] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Dec 05 07:28:15 np0005546420.novalocal systemd-udevd[23178]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 07:28:15 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919695.4832] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23188 uid=0 result="success"
Dec 05 07:28:15 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919695.5028] device (vlan21): carrier: link connected
Dec 05 07:28:18 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919698.5796] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23218 uid=0 result="success"
Dec 05 07:28:18 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919698.6338] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23233 uid=0 result="success"
Dec 05 07:28:18 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919698.7033] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23254 uid=0 result="success"
Dec 05 07:28:18 np0005546420.novalocal ifup[23255]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:18 np0005546420.novalocal ifup[23256]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:18 np0005546420.novalocal ifup[23257]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:18 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919698.7389] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23263 uid=0 result="success"
Dec 05 07:28:18 np0005546420.novalocal ovs-vsctl[23266]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Dec 05 07:28:18 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919698.8037] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23273 uid=0 result="success"
Dec 05 07:28:19 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919699.8715] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23300 uid=0 result="success"
Dec 05 07:28:19 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919699.9221] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23315 uid=0 result="success"
Dec 05 07:28:19 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919699.9867] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23336 uid=0 result="success"
Dec 05 07:28:19 np0005546420.novalocal ifup[23337]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:19 np0005546420.novalocal ifup[23338]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:19 np0005546420.novalocal ifup[23339]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:20 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919700.0239] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23345 uid=0 result="success"
Dec 05 07:28:20 np0005546420.novalocal ovs-vsctl[23348]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Dec 05 07:28:20 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919700.0833] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23355 uid=0 result="success"
Dec 05 07:28:21 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919701.1522] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23383 uid=0 result="success"
Dec 05 07:28:21 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919701.2028] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23398 uid=0 result="success"
Dec 05 07:28:21 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919701.2688] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23419 uid=0 result="success"
Dec 05 07:28:21 np0005546420.novalocal ifup[23420]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:21 np0005546420.novalocal ifup[23421]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:21 np0005546420.novalocal ifup[23422]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:21 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919701.3016] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23428 uid=0 result="success"
Dec 05 07:28:21 np0005546420.novalocal ovs-vsctl[23431]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Dec 05 07:28:21 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919701.3971] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23438 uid=0 result="success"
Dec 05 07:28:22 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919702.4628] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23466 uid=0 result="success"
Dec 05 07:28:22 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919702.5115] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23481 uid=0 result="success"
Dec 05 07:28:22 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919702.5793] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23502 uid=0 result="success"
Dec 05 07:28:22 np0005546420.novalocal ifup[23503]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:22 np0005546420.novalocal ifup[23504]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:22 np0005546420.novalocal ifup[23505]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:22 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919702.6193] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23511 uid=0 result="success"
Dec 05 07:28:22 np0005546420.novalocal ovs-vsctl[23514]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Dec 05 07:28:22 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919702.7284] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23521 uid=0 result="success"
Dec 05 07:28:23 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919703.7925] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23549 uid=0 result="success"
Dec 05 07:28:23 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919703.8434] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23564 uid=0 result="success"
Dec 05 07:28:23 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919703.9062] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23585 uid=0 result="success"
Dec 05 07:28:23 np0005546420.novalocal ifup[23586]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 05 07:28:23 np0005546420.novalocal ifup[23587]: 'network-scripts' will be removed from distribution in near future.
Dec 05 07:28:23 np0005546420.novalocal ifup[23588]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 05 07:28:23 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919703.9390] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23594 uid=0 result="success"
Dec 05 07:28:24 np0005546420.novalocal ovs-vsctl[23597]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Dec 05 07:28:24 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919704.0562] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23604 uid=0 result="success"
Dec 05 07:28:25 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919705.1215] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23632 uid=0 result="success"
Dec 05 07:28:25 np0005546420.novalocal NetworkManager[5963]: <info>  [1764919705.1730] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23647 uid=0 result="success"
Dec 05 07:28:25 np0005546420.novalocal sudo[22424]: pam_unix(sudo:session): session closed for user root
Dec 05 07:29:17 np0005546420.novalocal python3[23679]: ansible-ansible.legacy.command Invoked with _raw_params=ip a
                                                       ping -c 2 -W 2 192.168.122.10
                                                       ping -c 2 -W 2 192.168.122.11
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-00000000001b-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:29:23 np0005546420.novalocal python3[23698]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 07:29:24 np0005546420.novalocal sudo[23712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xceklrinqvpgbgxbilhzroevcavaziti ; /usr/bin/python3
Dec 05 07:29:24 np0005546420.novalocal sudo[23712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:29:24 np0005546420.novalocal python3[23714]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 07:29:24 np0005546420.novalocal sudo[23712]: pam_unix(sudo:session): session closed for user root
Dec 05 07:29:25 np0005546420.novalocal python3[23728]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 07:29:26 np0005546420.novalocal sudo[23742]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-octokpqwhzqixjdebxoncnvixnukwutu ; /usr/bin/python3
Dec 05 07:29:26 np0005546420.novalocal sudo[23742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:29:26 np0005546420.novalocal python3[23744]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 07:29:26 np0005546420.novalocal sudo[23742]: pam_unix(sudo:session): session closed for user root
Dec 05 07:29:27 np0005546420.novalocal python3[23758]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Dec 05 07:29:27 np0005546420.novalocal python3[23773]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005546420.novalocal"
                                                       hostname_str_array=(${hostname//./ })
                                                       echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-000000000022-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:29:28 np0005546420.novalocal sudo[23791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qzkbbvypodzvojmswucgxcfqdtztgdop ; /usr/bin/python3
Dec 05 07:29:28 np0005546420.novalocal sudo[23791]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:29:28 np0005546420.novalocal python3[23793]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)
                                                       hostnamectl hostname "$hostname.localdomain"
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-000000000023-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:29:28 np0005546420.novalocal systemd[1]: Starting Hostname Service...
Dec 05 07:29:28 np0005546420.novalocal systemd[1]: Started Hostname Service.
Dec 05 07:29:28 np0005546420.localdomain systemd-hostnamed[23797]: Hostname set to <np0005546420.localdomain> (static)
Dec 05 07:29:28 np0005546420.localdomain NetworkManager[5963]: <info>  [1764919768.8982] hostname: static hostname changed from "np0005546420.novalocal" to "np0005546420.localdomain"
Dec 05 07:29:28 np0005546420.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 07:29:28 np0005546420.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 07:29:28 np0005546420.localdomain sudo[23791]: pam_unix(sudo:session): session closed for user root
Dec 05 07:29:30 np0005546420.localdomain sshd[19077]: pam_unix(sshd:session): session closed for user zuul
Dec 05 07:29:30 np0005546420.localdomain systemd[1]: session-10.scope: Deactivated successfully.
Dec 05 07:29:30 np0005546420.localdomain systemd[1]: session-10.scope: Consumed 1min 49.898s CPU time.
Dec 05 07:29:30 np0005546420.localdomain systemd-logind[762]: Session 10 logged out. Waiting for processes to exit.
Dec 05 07:29:30 np0005546420.localdomain systemd-logind[762]: Removed session 10.
Dec 05 07:29:33 np0005546420.localdomain sshd[23809]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:29:33 np0005546420.localdomain sshd[23809]: Accepted publickey for zuul from 38.102.83.114 port 59004 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:29:33 np0005546420.localdomain systemd-logind[762]: New session 11 of user zuul.
Dec 05 07:29:33 np0005546420.localdomain systemd[1]: Started Session 11 of User zuul.
Dec 05 07:29:33 np0005546420.localdomain sshd[23809]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 07:29:33 np0005546420.localdomain python3[23826]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Dec 05 07:29:35 np0005546420.localdomain sshd[23809]: pam_unix(sshd:session): session closed for user zuul
Dec 05 07:29:35 np0005546420.localdomain systemd[1]: session-11.scope: Deactivated successfully.
Dec 05 07:29:35 np0005546420.localdomain systemd-logind[762]: Session 11 logged out. Waiting for processes to exit.
Dec 05 07:29:35 np0005546420.localdomain systemd-logind[762]: Removed session 11.
Dec 05 07:29:38 np0005546420.localdomain systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 07:29:58 np0005546420.localdomain sshd[23828]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:29:58 np0005546420.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 07:30:00 np0005546420.localdomain sshd[23828]: Connection reset by authenticating user root 91.202.233.33 port 52214 [preauth]
Dec 05 07:30:01 np0005546420.localdomain sshd[23833]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:30:04 np0005546420.localdomain sshd[23833]: Connection reset by authenticating user root 91.202.233.33 port 50152 [preauth]
Dec 05 07:30:04 np0005546420.localdomain sshd[23835]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:30:06 np0005546420.localdomain sshd[23835]: Connection reset by authenticating user root 91.202.233.33 port 50180 [preauth]
Dec 05 07:30:06 np0005546420.localdomain sshd[23837]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:30:08 np0005546420.localdomain sshd[23837]: Invalid user ftpuser from 91.202.233.33 port 50196
Dec 05 07:30:08 np0005546420.localdomain sshd[23837]: Connection reset by invalid user ftpuser 91.202.233.33 port 50196 [preauth]
Dec 05 07:30:08 np0005546420.localdomain sshd[23839]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:30:18 np0005546420.localdomain sshd[23839]: Connection reset by 91.202.233.33 port 50210 [preauth]
Dec 05 07:30:19 np0005546420.localdomain sshd[23841]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:30:19 np0005546420.localdomain sshd[23841]: Accepted publickey for zuul from 38.102.83.114 port 37374 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:30:19 np0005546420.localdomain systemd-logind[762]: New session 12 of user zuul.
Dec 05 07:30:19 np0005546420.localdomain systemd[1]: Started Session 12 of User zuul.
Dec 05 07:30:19 np0005546420.localdomain sshd[23841]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 07:30:19 np0005546420.localdomain sudo[23858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kkzhzktqqdpnucmhylmcjdqdbgmhtzgr ; /usr/bin/python3
Dec 05 07:30:19 np0005546420.localdomain sudo[23858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:30:20 np0005546420.localdomain python3[23860]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 07:30:23 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:30:23 np0005546420.localdomain systemd-rc-local-generator[23901]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:30:23 np0005546420.localdomain systemd-sysv-generator[23907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:30:23 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:30:23 np0005546420.localdomain systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 05 07:30:24 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:30:24 np0005546420.localdomain systemd-rc-local-generator[23942]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:30:24 np0005546420.localdomain systemd-sysv-generator[23945]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:30:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:30:24 np0005546420.localdomain systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 05 07:30:24 np0005546420.localdomain systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 05 07:30:24 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:30:24 np0005546420.localdomain systemd-sysv-generator[23987]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:30:24 np0005546420.localdomain systemd-rc-local-generator[23983]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:30:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:30:24 np0005546420.localdomain systemd[1]: Listening on LVM2 poll daemon socket.
Dec 05 07:30:25 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 07:30:25 np0005546420.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 05 07:30:25 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:30:25 np0005546420.localdomain systemd-rc-local-generator[24040]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:30:25 np0005546420.localdomain systemd-sysv-generator[24047]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:30:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:30:25 np0005546420.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 07:30:25 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 07:30:25 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 07:30:25 np0005546420.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 05 07:30:25 np0005546420.localdomain systemd[1]: run-r0b26582cfa5d4582a89c3d834c9740c8.service: Deactivated successfully.
Dec 05 07:30:25 np0005546420.localdomain systemd[1]: run-rdd32b015ad414bf1901514f0a1fa1db3.service: Deactivated successfully.
Dec 05 07:30:26 np0005546420.localdomain sudo[23858]: pam_unix(sudo:session): session closed for user root
Dec 05 07:31:26 np0005546420.localdomain sshd[23844]: Received disconnect from 38.102.83.114 port 37374:11: disconnected by user
Dec 05 07:31:26 np0005546420.localdomain sshd[23844]: Disconnected from user zuul 38.102.83.114 port 37374
Dec 05 07:31:26 np0005546420.localdomain sshd[23841]: pam_unix(sshd:session): session closed for user zuul
Dec 05 07:31:26 np0005546420.localdomain systemd[1]: session-12.scope: Deactivated successfully.
Dec 05 07:31:26 np0005546420.localdomain systemd[1]: session-12.scope: Consumed 4.736s CPU time.
Dec 05 07:31:26 np0005546420.localdomain systemd-logind[762]: Session 12 logged out. Waiting for processes to exit.
Dec 05 07:31:26 np0005546420.localdomain systemd-logind[762]: Removed session 12.
Dec 05 07:36:03 np0005546420.localdomain sshd[24633]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:36:04 np0005546420.localdomain sshd[24633]: Invalid user  from 2.57.121.112 port 63127
Dec 05 07:36:04 np0005546420.localdomain sshd[24633]: Received disconnect from 2.57.121.112 port 63127:11: Bye [preauth]
Dec 05 07:36:04 np0005546420.localdomain sshd[24633]: Disconnected from invalid user  2.57.121.112 port 63127 [preauth]
Dec 05 07:43:04 np0005546420.localdomain sshd[24637]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:43:06 np0005546420.localdomain sshd[24637]: Connection reset by authenticating user root 45.135.232.92 port 32112 [preauth]
Dec 05 07:43:06 np0005546420.localdomain sshd[24639]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:43:08 np0005546420.localdomain sshd[24639]: Connection reset by authenticating user root 45.135.232.92 port 32114 [preauth]
Dec 05 07:43:08 np0005546420.localdomain sshd[24641]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:43:10 np0005546420.localdomain sshd[24641]: Invalid user admin from 45.135.232.92 port 32120
Dec 05 07:43:11 np0005546420.localdomain sshd[24641]: Connection reset by invalid user admin 45.135.232.92 port 32120 [preauth]
Dec 05 07:43:11 np0005546420.localdomain sshd[24644]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:43:12 np0005546420.localdomain sshd[24644]: Invalid user admin from 45.135.232.92 port 32138
Dec 05 07:43:13 np0005546420.localdomain sshd[24644]: Connection reset by invalid user admin 45.135.232.92 port 32138 [preauth]
Dec 05 07:43:13 np0005546420.localdomain sshd[24646]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:43:14 np0005546420.localdomain sshd[24646]: Invalid user kodi from 45.135.232.92 port 32142
Dec 05 07:43:15 np0005546420.localdomain sshd[24646]: Connection reset by invalid user kodi 45.135.232.92 port 32142 [preauth]
Dec 05 07:43:52 np0005546420.localdomain rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 07:44:01 np0005546420.localdomain anacron[6196]: Job `cron.weekly' started
Dec 05 07:44:01 np0005546420.localdomain anacron[6196]: Job `cron.weekly' terminated
Dec 05 07:45:16 np0005546420.localdomain sshd[24828]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:45:18 np0005546420.localdomain sshd[24828]: Connection reset by authenticating user root 45.140.17.124 port 64274 [preauth]
Dec 05 07:45:18 np0005546420.localdomain sshd[24830]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:45:20 np0005546420.localdomain sshd[24830]: Invalid user ftpuser from 45.140.17.124 port 64280
Dec 05 07:45:21 np0005546420.localdomain sshd[24830]: Connection reset by invalid user ftpuser 45.140.17.124 port 64280 [preauth]
Dec 05 07:45:21 np0005546420.localdomain sshd[24832]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:45:23 np0005546420.localdomain sshd[24832]: Connection reset by authenticating user root 45.140.17.124 port 33704 [preauth]
Dec 05 07:45:23 np0005546420.localdomain sshd[24834]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:45:24 np0005546420.localdomain sshd[24834]: Invalid user user from 45.140.17.124 port 33712
Dec 05 07:45:25 np0005546420.localdomain sshd[24834]: Connection reset by invalid user user 45.140.17.124 port 33712 [preauth]
Dec 05 07:45:25 np0005546420.localdomain sshd[24836]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:45:27 np0005546420.localdomain sshd[24836]: Connection reset by authenticating user root 45.140.17.124 port 33718 [preauth]
Dec 05 07:47:18 np0005546420.localdomain sshd[24839]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:47:18 np0005546420.localdomain sshd[24839]: Accepted publickey for zuul from 192.168.122.100 port 51680 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:47:18 np0005546420.localdomain systemd-logind[762]: New session 13 of user zuul.
Dec 05 07:47:18 np0005546420.localdomain systemd[1]: Started Session 13 of User zuul.
Dec 05 07:47:18 np0005546420.localdomain sshd[24839]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 07:47:18 np0005546420.localdomain sudo[24885]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qotbyilndchmybnvjppucyrtvagblecp ; /usr/bin/python3
Dec 05 07:47:18 np0005546420.localdomain sudo[24885]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:18 np0005546420.localdomain python3[24887]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 07:47:19 np0005546420.localdomain sudo[24885]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:20 np0005546420.localdomain sudo[24972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypbmyijcfmhaxbisboczwrkrhihifxra ; /usr/bin/python3
Dec 05 07:47:20 np0005546420.localdomain sudo[24972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:20 np0005546420.localdomain python3[24974]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 07:47:23 np0005546420.localdomain sudo[24972]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:24 np0005546420.localdomain sudo[24990]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqhsshwbouyyvqdciwxuippmvxlzbgyz ; /usr/bin/python3
Dec 05 07:47:24 np0005546420.localdomain sudo[24990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:24 np0005546420.localdomain python3[24992]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 07:47:24 np0005546420.localdomain sudo[24990]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:24 np0005546420.localdomain sudo[25006]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdbzgvvwbblsiovvqijaksyouiujmhnv ; /usr/bin/python3
Dec 05 07:47:24 np0005546420.localdomain sudo[25006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:24 np0005546420.localdomain python3[25008]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                                         losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:47:24 np0005546420.localdomain kernel: loop: module loaded
Dec 05 07:47:24 np0005546420.localdomain kernel: loop3: detected capacity change from 0 to 14680064
Dec 05 07:47:24 np0005546420.localdomain sudo[25006]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:25 np0005546420.localdomain sudo[25031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jiqfmfoxvfinizuyunboborepypielzr ; /usr/bin/python3
Dec 05 07:47:25 np0005546420.localdomain sudo[25031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:25 np0005546420.localdomain python3[25033]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                                         vgcreate ceph_vg0 /dev/loop3
                                                         lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:47:25 np0005546420.localdomain lvm[25036]: PV /dev/loop3 not used.
Dec 05 07:47:25 np0005546420.localdomain lvm[25038]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 07:47:25 np0005546420.localdomain systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 05 07:47:25 np0005546420.localdomain lvm[25045]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 05 07:47:25 np0005546420.localdomain lvm[25048]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 07:47:25 np0005546420.localdomain lvm[25048]: VG ceph_vg0 finished
Dec 05 07:47:25 np0005546420.localdomain systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 05 07:47:25 np0005546420.localdomain sudo[25031]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:26 np0005546420.localdomain sudo[25094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xcwaazmjdawcjhbpwvjkyksxcmmjkban ; /usr/bin/python3
Dec 05 07:47:26 np0005546420.localdomain sudo[25094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:26 np0005546420.localdomain python3[25096]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:47:26 np0005546420.localdomain sudo[25094]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:26 np0005546420.localdomain sudo[25137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ogclkhksdpxgsgfuijybpafapmdkkziz ; /usr/bin/python3
Dec 05 07:47:26 np0005546420.localdomain sudo[25137]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:26 np0005546420.localdomain python3[25139]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764920845.9124253-55039-87436413414940/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:47:26 np0005546420.localdomain sudo[25137]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:27 np0005546420.localdomain sudo[25167]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-giwvajeqvyapufjgezkvdcsxpwbsshex ; /usr/bin/python3
Dec 05 07:47:27 np0005546420.localdomain sudo[25167]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:27 np0005546420.localdomain python3[25169]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 07:47:27 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:47:27 np0005546420.localdomain systemd-rc-local-generator[25198]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:47:27 np0005546420.localdomain systemd-sysv-generator[25202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:47:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:47:27 np0005546420.localdomain systemd[1]: Starting Ceph OSD losetup...
Dec 05 07:47:27 np0005546420.localdomain bash[25210]: /dev/loop3: [64516]:8400144 (/var/lib/ceph-osd-0.img)
Dec 05 07:47:27 np0005546420.localdomain systemd[1]: Finished Ceph OSD losetup.
Dec 05 07:47:27 np0005546420.localdomain lvm[25211]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 07:47:27 np0005546420.localdomain lvm[25211]: VG ceph_vg0 finished
Dec 05 07:47:27 np0005546420.localdomain sudo[25167]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:28 np0005546420.localdomain sudo[25225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wgjtbuhieiaraghdtijlicetfnlfoyti ; /usr/bin/python3
Dec 05 07:47:28 np0005546420.localdomain sudo[25225]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:28 np0005546420.localdomain python3[25227]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 07:47:30 np0005546420.localdomain sudo[25225]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:31 np0005546420.localdomain sudo[25242]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dumduimkyurzrhfskhtuvkocimizplus ; /usr/bin/python3
Dec 05 07:47:31 np0005546420.localdomain sudo[25242]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:31 np0005546420.localdomain python3[25244]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 07:47:31 np0005546420.localdomain sudo[25242]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:32 np0005546420.localdomain sudo[25258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvjgjmnfztrjuaixjguxvysbydenqqth ; /usr/bin/python3
Dec 05 07:47:32 np0005546420.localdomain sudo[25258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:32 np0005546420.localdomain python3[25260]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G
                                                         losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:47:32 np0005546420.localdomain kernel: loop4: detected capacity change from 0 to 14680064
Dec 05 07:47:32 np0005546420.localdomain sudo[25258]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:32 np0005546420.localdomain sudo[25280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttfhnvjcjueidxzaktzcwbdtawtmintv ; /usr/bin/python3
Dec 05 07:47:32 np0005546420.localdomain sudo[25280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:32 np0005546420.localdomain python3[25282]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                                         vgcreate ceph_vg1 /dev/loop4
                                                         lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:47:32 np0005546420.localdomain lvm[25285]: PV /dev/loop4 not used.
Dec 05 07:47:32 np0005546420.localdomain lvm[25287]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 07:47:32 np0005546420.localdomain systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 05 07:47:32 np0005546420.localdomain lvm[25295]:   1 logical volume(s) in volume group "ceph_vg1" now active
Dec 05 07:47:32 np0005546420.localdomain lvm[25298]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 07:47:32 np0005546420.localdomain lvm[25298]: VG ceph_vg1 finished
Dec 05 07:47:32 np0005546420.localdomain systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 05 07:47:32 np0005546420.localdomain sudo[25280]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:33 np0005546420.localdomain sudo[25344]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nnzltnzdrggqvizlemfnroqzdbjmcwok ; /usr/bin/python3
Dec 05 07:47:33 np0005546420.localdomain sudo[25344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:33 np0005546420.localdomain python3[25346]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:47:33 np0005546420.localdomain sudo[25344]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:33 np0005546420.localdomain sudo[25387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzsinxfykufcvwsavlonxlocwxqrrvbz ; /usr/bin/python3
Dec 05 07:47:33 np0005546420.localdomain sudo[25387]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:33 np0005546420.localdomain python3[25389]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764920853.187955-55226-28248278035559/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:47:33 np0005546420.localdomain sudo[25387]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:34 np0005546420.localdomain sudo[25417]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aylzsmivlpvaatcyeqnalnvqaxcfveuu ; /usr/bin/python3
Dec 05 07:47:34 np0005546420.localdomain sudo[25417]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:34 np0005546420.localdomain python3[25419]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 07:47:34 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:47:34 np0005546420.localdomain systemd-rc-local-generator[25446]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:47:34 np0005546420.localdomain systemd-sysv-generator[25451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:47:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:47:34 np0005546420.localdomain systemd[1]: Starting Ceph OSD losetup...
Dec 05 07:47:34 np0005546420.localdomain bash[25460]: /dev/loop4: [64516]:9172193 (/var/lib/ceph-osd-1.img)
Dec 05 07:47:34 np0005546420.localdomain systemd[1]: Finished Ceph OSD losetup.
Dec 05 07:47:34 np0005546420.localdomain lvm[25461]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 07:47:34 np0005546420.localdomain lvm[25461]: VG ceph_vg1 finished
Dec 05 07:47:34 np0005546420.localdomain sudo[25417]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:43 np0005546420.localdomain sudo[25504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpurfojdoeginmkjmssjxqxkykghkixn ; /usr/bin/python3
Dec 05 07:47:43 np0005546420.localdomain sudo[25504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:43 np0005546420.localdomain python3[25506]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 07:47:43 np0005546420.localdomain sudo[25504]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:44 np0005546420.localdomain sudo[25524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjrztivsoybnoywqpofuzoiunjohzxac ; /usr/bin/python3
Dec 05 07:47:44 np0005546420.localdomain sudo[25524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:44 np0005546420.localdomain python3[25526]: ansible-hostname Invoked with name=np0005546420.localdomain use=None
Dec 05 07:47:44 np0005546420.localdomain systemd[1]: Starting Hostname Service...
Dec 05 07:47:45 np0005546420.localdomain systemd[1]: Started Hostname Service.
Dec 05 07:47:45 np0005546420.localdomain sudo[25524]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:46 np0005546420.localdomain sudo[25547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rednlvkyybsfeirknfutstcwndzihyod ; /usr/bin/python3
Dec 05 07:47:46 np0005546420.localdomain sudo[25547]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:47 np0005546420.localdomain python3[25549]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 05 07:47:47 np0005546420.localdomain sudo[25547]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:47 np0005546420.localdomain sudo[25595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-elbxglstqkuzhnvrfhwanxcalwbyrppa ; /usr/bin/python3
Dec 05 07:47:47 np0005546420.localdomain sudo[25595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:47 np0005546420.localdomain python3[25597]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.u9m5rd5jtmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:47:47 np0005546420.localdomain sudo[25595]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:47 np0005546420.localdomain sudo[25625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dbxsbkxrtsszgjbdkbarvjamoduikhqh ; /usr/bin/python3
Dec 05 07:47:47 np0005546420.localdomain sudo[25625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:48 np0005546420.localdomain python3[25627]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.u9m5rd5jtmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:47:48 np0005546420.localdomain sudo[25625]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:48 np0005546420.localdomain sudo[25641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-weknpbweoimporzlcxovaxjrwecptlyd ; /usr/bin/python3
Dec 05 07:47:48 np0005546420.localdomain sudo[25641]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:48 np0005546420.localdomain python3[25643]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.u9m5rd5jtmphosts insertbefore=BOF block=192.168.122.106 np0005546419.localdomain np0005546419
                                                         192.168.122.106 np0005546419.ctlplane.localdomain np0005546419.ctlplane
                                                         192.168.122.107 np0005546420.localdomain np0005546420
                                                         192.168.122.107 np0005546420.ctlplane.localdomain np0005546420.ctlplane
                                                         192.168.122.108 np0005546421.localdomain np0005546421
                                                         192.168.122.108 np0005546421.ctlplane.localdomain np0005546421.ctlplane
                                                         192.168.122.103 np0005546415.localdomain np0005546415
                                                         192.168.122.103 np0005546415.ctlplane.localdomain np0005546415.ctlplane
                                                         192.168.122.104 np0005546416.localdomain np0005546416
                                                         192.168.122.104 np0005546416.ctlplane.localdomain np0005546416.ctlplane
                                                         192.168.122.105 np0005546418.localdomain np0005546418
                                                         192.168.122.105 np0005546418.ctlplane.localdomain np0005546418.ctlplane
                                                         
                                                         192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
                                                          marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:47:48 np0005546420.localdomain sudo[25641]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:49 np0005546420.localdomain sudo[25657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wdligogxhbtmvxnulosqembjxrzwvcss ; /usr/bin/python3
Dec 05 07:47:49 np0005546420.localdomain sudo[25657]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:49 np0005546420.localdomain python3[25659]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.u9m5rd5jtmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:47:49 np0005546420.localdomain sudo[25657]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:49 np0005546420.localdomain sudo[25674]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gmprlwgjsitqdqsmcexzjmyaqedhgjoo ; /usr/bin/python3
Dec 05 07:47:49 np0005546420.localdomain sudo[25674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:49 np0005546420.localdomain python3[25676]: ansible-file Invoked with path=/tmp/ansible.u9m5rd5jtmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:47:49 np0005546420.localdomain sudo[25674]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:51 np0005546420.localdomain sudo[25690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-luqjwrvlwrgbkyevyybmdkvdxrdydxsa ; /usr/bin/python3
Dec 05 07:47:51 np0005546420.localdomain sudo[25690]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:52 np0005546420.localdomain python3[25692]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:47:52 np0005546420.localdomain sudo[25690]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:52 np0005546420.localdomain sudo[25708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjuvhnaterjlkjmxtxpflvxlreksorvv ; /usr/bin/python3
Dec 05 07:47:52 np0005546420.localdomain sudo[25708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:52 np0005546420.localdomain python3[25710]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 07:47:55 np0005546420.localdomain sudo[25708]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:56 np0005546420.localdomain sudo[25757]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dnvfvthivckifwmoxaxlotnkdctmtacb ; /usr/bin/python3
Dec 05 07:47:56 np0005546420.localdomain sudo[25757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:56 np0005546420.localdomain python3[25759]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:47:56 np0005546420.localdomain sudo[25757]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:57 np0005546420.localdomain sudo[25802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-idncbqegfadegylyykffxniojakhstxp ; /usr/bin/python3
Dec 05 07:47:57 np0005546420.localdomain sudo[25802]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:57 np0005546420.localdomain python3[25804]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764920876.3058898-56050-166053974245110/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:47:57 np0005546420.localdomain sudo[25802]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:58 np0005546420.localdomain sudo[25832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxqaycopjfdrfnbboeiykifjinpijtgt ; /usr/bin/python3
Dec 05 07:47:58 np0005546420.localdomain sudo[25832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:47:58 np0005546420.localdomain python3[25834]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 07:47:59 np0005546420.localdomain sudo[25832]: pam_unix(sudo:session): session closed for user root
Dec 05 07:47:59 np0005546420.localdomain sudo[25850]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bucbktyahdlmuzgwosopaylgqgrfpbnl ; /usr/bin/python3
Dec 05 07:47:59 np0005546420.localdomain sudo[25850]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:00 np0005546420.localdomain python3[25852]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 07:48:00 np0005546420.localdomain chronyd[767]: chronyd exiting
Dec 05 07:48:00 np0005546420.localdomain systemd[1]: Stopping NTP client/server...
Dec 05 07:48:00 np0005546420.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Dec 05 07:48:00 np0005546420.localdomain systemd[1]: Stopped NTP client/server.
Dec 05 07:48:00 np0005546420.localdomain systemd[1]: chronyd.service: Consumed 118ms CPU time, read 1.9M from disk, written 4.0K to disk.
Dec 05 07:48:00 np0005546420.localdomain systemd[1]: Starting NTP client/server...
Dec 05 07:48:00 np0005546420.localdomain chronyd[25859]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 05 07:48:00 np0005546420.localdomain chronyd[25859]: Frequency -30.651 +/- 0.271 ppm read from /var/lib/chrony/drift
Dec 05 07:48:00 np0005546420.localdomain chronyd[25859]: Loaded seccomp filter (level 2)
Dec 05 07:48:00 np0005546420.localdomain systemd[1]: Started NTP client/server.
Dec 05 07:48:00 np0005546420.localdomain sudo[25850]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:00 np0005546420.localdomain sudo[25906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbakpjxmyvdqstgcfimiimlzrbvjmprd ; /usr/bin/python3
Dec 05 07:48:00 np0005546420.localdomain sudo[25906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:01 np0005546420.localdomain python3[25908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:48:01 np0005546420.localdomain sudo[25906]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:01 np0005546420.localdomain sudo[25949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbqdvtatjsarnffaqfyzuiigkhlaqulg ; /usr/bin/python3
Dec 05 07:48:01 np0005546420.localdomain sudo[25949]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:01 np0005546420.localdomain python3[25951]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764920880.8015144-56194-28717715406617/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:48:01 np0005546420.localdomain sudo[25949]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:01 np0005546420.localdomain sudo[25979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fiagzcogkbvlpcgkpxwezqqhtgsapcee ; /usr/bin/python3
Dec 05 07:48:01 np0005546420.localdomain sudo[25979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:02 np0005546420.localdomain python3[25981]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 07:48:02 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:48:02 np0005546420.localdomain systemd-sysv-generator[26012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:48:02 np0005546420.localdomain systemd-rc-local-generator[26009]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:48:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:48:02 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:48:02 np0005546420.localdomain systemd-rc-local-generator[26047]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:48:02 np0005546420.localdomain systemd-sysv-generator[26050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:48:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:48:02 np0005546420.localdomain systemd[1]: Starting chronyd online sources service...
Dec 05 07:48:02 np0005546420.localdomain chronyc[26058]: 200 OK
Dec 05 07:48:02 np0005546420.localdomain systemd[1]: chrony-online.service: Deactivated successfully.
Dec 05 07:48:02 np0005546420.localdomain systemd[1]: Finished chronyd online sources service.
Dec 05 07:48:02 np0005546420.localdomain sudo[25979]: pam_unix(sudo:session): session closed for user root
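
chrony-online.service starts, chronyc prints its generic success reply "200 OK", and the unit deactivates at once: the signature of a oneshot service nudging chronyd's NTP sources. The journal does not record the exact chronyc subcommand, so "online" in the sketch below is an assumption inferred from the unit's name:

    import subprocess

    # Assumed body of the oneshot unit: mark all NTP sources online.
    # chronyc answers "200 OK" on success, matching the journal line above.
    out = subprocess.run(["chronyc", "online"], capture_output=True, text=True)
    print(out.stdout.strip())
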
Dec 05 07:48:03 np0005546420.localdomain sudo[26073]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jonabtnvmbjnslzbxvnpurfgyzhvfruk ; /usr/bin/python3
Dec 05 07:48:03 np0005546420.localdomain sudo[26073]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:03 np0005546420.localdomain python3[26075]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:48:03 np0005546420.localdomain chronyd[25859]: System clock was stepped by 0.000000 seconds
Dec 05 07:48:03 np0005546420.localdomain sudo[26073]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:03 np0005546420.localdomain sudo[26090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qgorjutgtdqnvzdkooujopcqdnnuekxk ; /usr/bin/python3
Dec 05 07:48:03 np0005546420.localdomain sudo[26090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:03 np0005546420.localdomain python3[26092]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:48:04 np0005546420.localdomain chronyd[25859]: Selected source 162.159.200.123 (pool.ntp.org)
Dec 05 07:48:13 np0005546420.localdomain sudo[26090]: pam_unix(sudo:session): session closed for user root
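
The two ad-hoc tasks above run `chronyc makestep` (step the clock immediately; here by 0.000000 seconds, so it was already correct) and `chronyc waitsync 30` (poll up to 30 times until chronyd reports synchronisation, which it does once 162.159.200.123 is selected). The same pair, wrapped in subprocess:

    import subprocess

    # Step the clock at once, then block until chronyd is synchronised
    # (up to 30 polls, roughly 10 s apart by default).
    subprocess.run(["chronyc", "makestep"], check=True)
    subprocess.run(["chronyc", "waitsync", "30"], check=True)
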
Dec 05 07:48:13 np0005546420.localdomain sudo[26107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xgnyskbzdfusiwhkdgrjkrotjmmzrziq ; /usr/bin/python3
Dec 05 07:48:13 np0005546420.localdomain sudo[26107]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:14 np0005546420.localdomain python3[26109]: ansible-timezone Invoked with name=UTC hwclock=None
Dec 05 07:48:14 np0005546420.localdomain systemd[1]: Starting Time & Date Service...
Dec 05 07:48:14 np0005546420.localdomain systemd[1]: Started Time & Date Service.
Dec 05 07:48:14 np0005546420.localdomain sudo[26107]: pam_unix(sudo:session): session closed for user root
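
ansible-timezone with name=UTC works through systemd-timedated, which is why "Time & Date Service" starts on demand right under the invocation. The equivalent one-shot command, sketched via subprocess (hardware-clock handling left at its default, matching hwclock=None in the log):

    import subprocess

    # Same effect as the ansible-timezone task: set the system timezone
    # to UTC through timedated.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
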
Dec 05 07:48:15 np0005546420.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 07:48:15 np0005546420.localdomain sudo[26131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cuszvbiegejhdcacgcsagxqlxkpettic ; /usr/bin/python3
Dec 05 07:48:15 np0005546420.localdomain sudo[26131]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:15 np0005546420.localdomain python3[26133]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 07:48:16 np0005546420.localdomain chronyd[25859]: chronyd exiting
Dec 05 07:48:16 np0005546420.localdomain systemd[1]: Stopping NTP client/server...
Dec 05 07:48:16 np0005546420.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Dec 05 07:48:16 np0005546420.localdomain systemd[1]: Stopped NTP client/server.
Dec 05 07:48:16 np0005546420.localdomain systemd[1]: Starting NTP client/server...
Dec 05 07:48:16 np0005546420.localdomain chronyd[26140]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 05 07:48:16 np0005546420.localdomain chronyd[26140]: Frequency -30.651 +/- 0.321 ppm read from /var/lib/chrony/drift
Dec 05 07:48:16 np0005546420.localdomain chronyd[26140]: Loaded seccomp filter (level 2)
Dec 05 07:48:16 np0005546420.localdomain systemd[1]: Started NTP client/server.
Dec 05 07:48:16 np0005546420.localdomain sudo[26131]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:20 np0005546420.localdomain chronyd[26140]: Selected source 162.159.200.123 (pool.ntp.org)
Dec 05 07:48:31 np0005546420.localdomain sudo[26155]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pvxntkucnfzrxnwivtswckllxrnxritj ; /usr/bin/python3
Dec 05 07:48:31 np0005546420.localdomain sudo[26155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:31 np0005546420.localdomain useradd[26159]: new group: name=ceph-admin, GID=1002
Dec 05 07:48:31 np0005546420.localdomain useradd[26159]: new user: name=ceph-admin, UID=1002, GID=1002, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 05 07:48:31 np0005546420.localdomain sudo[26155]: pam_unix(sudo:session): session closed for user root
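
useradd reports a new ceph-admin group and user, both ID 1002, with the home directory and shell shown. A quick verification sketch against the NSS databases:

    import grp
    import pwd

    # Check the account exactly as useradd logged it: UID/GID 1002,
    # home /home/ceph-admin, shell /bin/bash.
    u = pwd.getpwnam("ceph-admin")
    assert u.pw_uid == 1002
    assert grp.getgrgid(u.pw_gid).gr_name == "ceph-admin"
    assert (u.pw_dir, u.pw_shell) == ("/home/ceph-admin", "/bin/bash")
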
Dec 05 07:48:31 np0005546420.localdomain sudo[26211]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utzvpvyguyvfkfyyzqroelajlsndbgdq ; /usr/bin/python3
Dec 05 07:48:31 np0005546420.localdomain sudo[26211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:31 np0005546420.localdomain sudo[26211]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:32 np0005546420.localdomain sudo[26254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlwikvlmxvxeqokrwlmkvbamcezqonfg ; /usr/bin/python3
Dec 05 07:48:32 np0005546420.localdomain sudo[26254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:32 np0005546420.localdomain sudo[26254]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:32 np0005546420.localdomain sudo[26284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hngivcyinbzesmldyksodzwsmlcabwys ; /usr/bin/python3
Dec 05 07:48:32 np0005546420.localdomain sudo[26284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:32 np0005546420.localdomain sudo[26284]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:32 np0005546420.localdomain sudo[26300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xewzauhqqcidxdybrgeibaayrzajqmxa ; /usr/bin/python3
Dec 05 07:48:32 np0005546420.localdomain sudo[26300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:33 np0005546420.localdomain sudo[26300]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:33 np0005546420.localdomain sudo[26316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thqmkxgltadcoypvukwobutqjstompnu ; /usr/bin/python3
Dec 05 07:48:33 np0005546420.localdomain sudo[26316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:33 np0005546420.localdomain sudo[26316]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:33 np0005546420.localdomain sudo[26332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdmluyxffuwnotprbuvulimvldylrxbg ; /usr/bin/python3
Dec 05 07:48:33 np0005546420.localdomain sudo[26332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:48:34 np0005546420.localdomain sudo[26332]: pam_unix(sudo:session): session closed for user root
Dec 05 07:48:44 np0005546420.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 07:50:18 np0005546420.localdomain sshd[26337]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:18 np0005546420.localdomain sshd[26337]: Accepted publickey for ceph-admin from 192.168.122.103 port 42418 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:18 np0005546420.localdomain systemd[1]: Created slice User Slice of UID 1002.
Dec 05 07:50:18 np0005546420.localdomain systemd[1]: Starting User Runtime Directory /run/user/1002...
Dec 05 07:50:18 np0005546420.localdomain systemd-logind[762]: New session 14 of user ceph-admin.
Dec 05 07:50:18 np0005546420.localdomain systemd[1]: Finished User Runtime Directory /run/user/1002.
Dec 05 07:50:18 np0005546420.localdomain systemd[1]: Starting User Manager for UID 1002...
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Queued start job for default target Main User Target.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Created slice User Application Slice.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Reached target Paths.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Reached target Timers.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Starting D-Bus User Message Bus Socket...
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Starting Create User's Volatile Files and Directories...
Dec 05 07:50:18 np0005546420.localdomain sshd[26354]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Listening on D-Bus User Message Bus Socket.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Finished Create User's Volatile Files and Directories.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Reached target Sockets.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Reached target Basic System.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Reached target Main User Target.
Dec 05 07:50:18 np0005546420.localdomain systemd[26341]: Startup finished in 103ms.
Dec 05 07:50:18 np0005546420.localdomain systemd[1]: Started User Manager for UID 1002.
Dec 05 07:50:18 np0005546420.localdomain systemd[1]: Started Session 14 of User ceph-admin.
Dec 05 07:50:18 np0005546420.localdomain sshd[26337]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:18 np0005546420.localdomain sshd[26354]: Accepted publickey for ceph-admin from 192.168.122.103 port 42424 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:18 np0005546420.localdomain systemd-logind[762]: New session 16 of user ceph-admin.
Dec 05 07:50:18 np0005546420.localdomain systemd[1]: Started Session 16 of User ceph-admin.
Dec 05 07:50:18 np0005546420.localdomain sshd[26354]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:18 np0005546420.localdomain sudo[26361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:50:18 np0005546420.localdomain sudo[26361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:18 np0005546420.localdomain sudo[26361]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:18 np0005546420.localdomain sshd[26376]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:19 np0005546420.localdomain sshd[26376]: Accepted publickey for ceph-admin from 192.168.122.103 port 42434 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:19 np0005546420.localdomain systemd-logind[762]: New session 17 of user ceph-admin.
Dec 05 07:50:19 np0005546420.localdomain systemd[1]: Started Session 17 of User ceph-admin.
Dec 05 07:50:19 np0005546420.localdomain sshd[26376]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:19 np0005546420.localdomain sudo[26380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host --expect-hostname np0005546420.localdomain
Dec 05 07:50:19 np0005546420.localdomain sudo[26380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:19 np0005546420.localdomain sudo[26380]: pam_unix(sudo:session): session closed for user root
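
`cephadm check-host --expect-hostname np0005546420.localdomain` validates the node before the orchestrator uses it; one of its checks is that the host's own idea of its name matches the expected value. The sketch below is not cephadm's implementation, just the hostname comparison that flag implies:

    import socket

    # Illustration of the --expect-hostname check only; cephadm also
    # performs other host checks (container engine, time sync, etc.).
    expected = "np0005546420.localdomain"
    actual = socket.getfqdn()
    print("OK" if actual == expected else f"hostname mismatch: {actual!r}")
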
Dec 05 07:50:19 np0005546420.localdomain sshd[26395]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:19 np0005546420.localdomain sshd[26395]: Accepted publickey for ceph-admin from 192.168.122.103 port 42444 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:19 np0005546420.localdomain systemd-logind[762]: New session 18 of user ceph-admin.
Dec 05 07:50:19 np0005546420.localdomain systemd[1]: Started Session 18 of User ceph-admin.
Dec 05 07:50:19 np0005546420.localdomain sshd[26395]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:19 np0005546420.localdomain sudo[26399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3
Dec 05 07:50:19 np0005546420.localdomain sudo[26399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:19 np0005546420.localdomain sudo[26399]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:19 np0005546420.localdomain sshd[26414]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:19 np0005546420.localdomain sshd[26414]: Accepted publickey for ceph-admin from 192.168.122.103 port 42458 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:19 np0005546420.localdomain systemd-logind[762]: New session 19 of user ceph-admin.
Dec 05 07:50:19 np0005546420.localdomain systemd[1]: Started Session 19 of User ceph-admin.
Dec 05 07:50:19 np0005546420.localdomain sshd[26414]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:19 np0005546420.localdomain sudo[26418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 07:50:19 np0005546420.localdomain sudo[26418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:19 np0005546420.localdomain sudo[26418]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:20 np0005546420.localdomain sshd[26433]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:20 np0005546420.localdomain sshd[26433]: Accepted publickey for ceph-admin from 192.168.122.103 port 42464 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:20 np0005546420.localdomain systemd-logind[762]: New session 20 of user ceph-admin.
Dec 05 07:50:20 np0005546420.localdomain systemd[1]: Started Session 20 of User ceph-admin.
Dec 05 07:50:20 np0005546420.localdomain sshd[26433]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:20 np0005546420.localdomain sudo[26437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 07:50:20 np0005546420.localdomain sudo[26437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:20 np0005546420.localdomain sudo[26437]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:20 np0005546420.localdomain sshd[26452]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:20 np0005546420.localdomain sshd[26452]: Accepted publickey for ceph-admin from 192.168.122.103 port 42470 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:20 np0005546420.localdomain systemd-logind[762]: New session 21 of user ceph-admin.
Dec 05 07:50:20 np0005546420.localdomain systemd[1]: Started Session 21 of User ceph-admin.
Dec 05 07:50:20 np0005546420.localdomain sshd[26452]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:20 np0005546420.localdomain sudo[26456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new
Dec 05 07:50:20 np0005546420.localdomain sudo[26456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:20 np0005546420.localdomain sudo[26456]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:20 np0005546420.localdomain sshd[26471]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:21 np0005546420.localdomain sshd[26471]: Accepted publickey for ceph-admin from 192.168.122.103 port 42472 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:21 np0005546420.localdomain systemd-logind[762]: New session 22 of user ceph-admin.
Dec 05 07:50:21 np0005546420.localdomain systemd[1]: Started Session 22 of User ceph-admin.
Dec 05 07:50:21 np0005546420.localdomain sshd[26471]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:21 np0005546420.localdomain sudo[26475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 07:50:21 np0005546420.localdomain sudo[26475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:21 np0005546420.localdomain sudo[26475]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:21 np0005546420.localdomain sshd[26490]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:21 np0005546420.localdomain sshd[26490]: Accepted publickey for ceph-admin from 192.168.122.103 port 42480 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:21 np0005546420.localdomain systemd-logind[762]: New session 23 of user ceph-admin.
Dec 05 07:50:21 np0005546420.localdomain systemd[1]: Started Session 23 of User ceph-admin.
Dec 05 07:50:21 np0005546420.localdomain sshd[26490]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:21 np0005546420.localdomain sudo[26494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new
Dec 05 07:50:21 np0005546420.localdomain sudo[26494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:21 np0005546420.localdomain sudo[26494]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:21 np0005546420.localdomain sshd[26509]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:21 np0005546420.localdomain sshd[26509]: Accepted publickey for ceph-admin from 192.168.122.103 port 42486 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:21 np0005546420.localdomain systemd-logind[762]: New session 24 of user ceph-admin.
Dec 05 07:50:21 np0005546420.localdomain systemd[1]: Started Session 24 of User ceph-admin.
Dec 05 07:50:21 np0005546420.localdomain sshd[26509]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:22 np0005546420.localdomain sshd[26526]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:22 np0005546420.localdomain sshd[26526]: Accepted publickey for ceph-admin from 192.168.122.103 port 42490 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:22 np0005546420.localdomain systemd-logind[762]: New session 25 of user ceph-admin.
Dec 05 07:50:22 np0005546420.localdomain systemd[1]: Started Session 25 of User ceph-admin.
Dec 05 07:50:22 np0005546420.localdomain sshd[26526]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:22 np0005546420.localdomain sudo[26530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3
Dec 05 07:50:22 np0005546420.localdomain sudo[26530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:22 np0005546420.localdomain sudo[26530]: pam_unix(sudo:session): session closed for user root
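
Sessions 19 through 25 stage a fresh copy of the cephadm binary: mkdir the destination tree, create a .new file under a /tmp staging root, chown and chmod it, then mv it over the final path, so the file at its destination is only ever replaced whole, never half-written. A compact equivalent, with placeholder paths standing in for the long fsid/checksum names in the log (note a rename is atomic only within one filesystem; a move from /tmp may fall back to copy-and-rename):

    import os
    import shutil

    staged = "/tmp/staging/cephadm.new"   # placeholder for the .new file above
    final = "/var/lib/ceph/FSID/cephadm"  # FSID stands in for the real UUID

    # Prepare the destination, fix the mode on the staged copy, then move
    # it into place in a single final step.
    os.makedirs(os.path.dirname(final), exist_ok=True)
    os.chmod(staged, 0o644)
    shutil.move(staged, final)
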
Dec 05 07:50:22 np0005546420.localdomain sshd[26545]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:50:22 np0005546420.localdomain sshd[26545]: Accepted publickey for ceph-admin from 192.168.122.103 port 42496 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 07:50:22 np0005546420.localdomain systemd-logind[762]: New session 26 of user ceph-admin.
Dec 05 07:50:22 np0005546420.localdomain systemd[1]: Started Session 26 of User ceph-admin.
Dec 05 07:50:22 np0005546420.localdomain sshd[26545]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 07:50:22 np0005546420.localdomain sudo[26549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host --expect-hostname np0005546420.localdomain
Dec 05 07:50:22 np0005546420.localdomain sudo[26549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:50:23 np0005546420.localdomain sudo[26549]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:43 np0005546420.localdomain sudo[26585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:50:43 np0005546420.localdomain sudo[26585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:43 np0005546420.localdomain sudo[26585]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:43 np0005546420.localdomain sudo[26600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:50:43 np0005546420.localdomain sudo[26600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:43 np0005546420.localdomain sudo[26600]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:43 np0005546420.localdomain sudo[26615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 07:50:43 np0005546420.localdomain sudo[26615]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:50:44 np0005546420.localdomain sudo[26615]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:44 np0005546420.localdomain sudo[26651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:50:44 np0005546420.localdomain sudo[26651]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:44 np0005546420.localdomain sudo[26651]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:44 np0005546420.localdomain sudo[26666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 07:50:44 np0005546420.localdomain sudo[26666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:50:44 np0005546420.localdomain sudo[26666]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:44 np0005546420.localdomain sudo[26720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:50:44 np0005546420.localdomain sudo[26720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:44 np0005546420.localdomain sudo[26720]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:44 np0005546420.localdomain sudo[26735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 07:50:44 np0005546420.localdomain sudo[26735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:50:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:50:45 np0005546420.localdomain systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26763 (sysctl)
Dec 05 07:50:45 np0005546420.localdomain systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 05 07:50:45 np0005546420.localdomain systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 05 07:50:45 np0005546420.localdomain sudo[26735]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:45 np0005546420.localdomain sudo[26785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:50:45 np0005546420.localdomain sudo[26785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:45 np0005546420.localdomain sudo[26785]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:45 np0005546420.localdomain sudo[26800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 07:50:45 np0005546420.localdomain sudo[26800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:50:46 np0005546420.localdomain sudo[26800]: pam_unix(sudo:session): session closed for user root
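
Each orchestrator probe above has the same shape: a `which python3` preflight, then `python3 <cephadm> --timeout 895 <subcommand>`. Subcommands such as ls, gather-facts and list-networks write JSON to stdout, so a driver only needs subprocess plus json. A sketch under those assumptions (CEPHADM abbreviates the per-checksum path from the log):

    import json
    import subprocess

    CEPHADM = "/var/lib/ceph/FSID/cephadm.CHECKSUM"  # placeholder path

    def probe(*args):
        # Run one cephadm subcommand and decode its JSON stdout.
        out = subprocess.run(
            ["/bin/python3", CEPHADM, "--timeout", "895", *args],
            check=True, capture_output=True, text=True,
        )
        return json.loads(out.stdout)

    facts = probe("gather-facts")
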
Dec 05 07:50:46 np0005546420.localdomain sudo[26834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:50:46 np0005546420.localdomain sudo[26834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:46 np0005546420.localdomain sudo[26834]: pam_unix(sudo:session): session closed for user root
Dec 05 07:50:46 np0005546420.localdomain sudo[26849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- inventory --format=json-pretty --filter-for-batch
Dec 05 07:50:46 np0005546420.localdomain sudo[26849]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:50:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:50:49 np0005546420.localdomain kernel: VFS: idmapped mount is not enabled.
Dec 05 07:51:10 np0005546420.localdomain podman[26902]: 2025-12-05 07:51:10.3024604 +0000 UTC m=+23.581582711 container create 6fea8c997ca4b852ee1f5e79a207d20286e275bc7bd10cee3c919a146eac74f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mcclintock, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, release=1763362218, vendor=Red Hat, Inc.)
Dec 05 07:51:10 np0005546420.localdomain systemd[1]: Created slice Slice /machine.
Dec 05 07:51:10 np0005546420.localdomain systemd[1]: Started libpod-conmon-6fea8c997ca4b852ee1f5e79a207d20286e275bc7bd10cee3c919a146eac74f3.scope.
Dec 05 07:51:10 np0005546420.localdomain podman[26902]: 2025-12-05 07:50:46.763802736 +0000 UTC m=+0.042925077 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:51:10 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:51:10 np0005546420.localdomain podman[26902]: 2025-12-05 07:51:10.408919649 +0000 UTC m=+23.688042000 container init 6fea8c997ca4b852ee1f5e79a207d20286e275bc7bd10cee3c919a146eac74f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mcclintock, vcs-type=git, release=1763362218, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4)
Dec 05 07:51:10 np0005546420.localdomain podman[26902]: 2025-12-05 07:51:10.424368546 +0000 UTC m=+23.703490877 container start 6fea8c997ca4b852ee1f5e79a207d20286e275bc7bd10cee3c919a146eac74f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mcclintock, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=)
Dec 05 07:51:10 np0005546420.localdomain podman[26902]: 2025-12-05 07:51:10.424561803 +0000 UTC m=+23.703684134 container attach 6fea8c997ca4b852ee1f5e79a207d20286e275bc7bd10cee3c919a146eac74f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mcclintock, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, name=rhceph, CEPH_POINT_RELEASE=)
Dec 05 07:51:10 np0005546420.localdomain quirky_mcclintock[27065]: 167 167
Dec 05 07:51:10 np0005546420.localdomain systemd[1]: libpod-6fea8c997ca4b852ee1f5e79a207d20286e275bc7bd10cee3c919a146eac74f3.scope: Deactivated successfully.
Dec 05 07:51:10 np0005546420.localdomain podman[26902]: 2025-12-05 07:51:10.429282475 +0000 UTC m=+23.708404836 container died 6fea8c997ca4b852ee1f5e79a207d20286e275bc7bd10cee3c919a146eac74f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mcclintock, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vcs-type=git, distribution-scope=public, RELEASE=main, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=)
Dec 05 07:51:10 np0005546420.localdomain podman[27070]: 2025-12-05 07:51:10.51516733 +0000 UTC m=+0.077093906 container remove 6fea8c997ca4b852ee1f5e79a207d20286e275bc7bd10cee3c919a146eac74f3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mcclintock, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, name=rhceph)
Dec 05 07:51:10 np0005546420.localdomain systemd[1]: libpod-conmon-6fea8c997ca4b852ee1f5e79a207d20286e275bc7bd10cee3c919a146eac74f3.scope: Deactivated successfully.
Dec 05 07:51:10 np0005546420.localdomain podman[27092]: 2025-12-05 07:51:10.762877836 +0000 UTC m=+0.087412478 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:51:11 np0005546420.localdomain systemd[1]: tmp-crun.hasSFX.mount: Deactivated successfully.
Dec 05 07:51:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-999055ae13a302a0552bf032f8c54c0bfabb352b2f05f98e24332f49f14ad927-merged.mount: Deactivated successfully.
Dec 05 07:51:15 np0005546420.localdomain podman[27092]: 2025-12-05 07:51:15.53196842 +0000 UTC m=+4.856503052 container create 1790d6477483dd91e6d460aa8c4a4eb2802b9ac3d5608e0f1e4bc44560688974 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_mayer, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, ceph=True, release=1763362218, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 07:51:15 np0005546420.localdomain systemd[1]: Started libpod-conmon-1790d6477483dd91e6d460aa8c4a4eb2802b9ac3d5608e0f1e4bc44560688974.scope.
Dec 05 07:51:15 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:51:15 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfe3a3d984bbb1f8a7f34e042bc348578527cd4ce816efadaf178f9fc9e83e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:15 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adfe3a3d984bbb1f8a7f34e042bc348578527cd4ce816efadaf178f9fc9e83e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:15 np0005546420.localdomain podman[27092]: 2025-12-05 07:51:15.647930744 +0000 UTC m=+4.972465376 container init 1790d6477483dd91e6d460aa8c4a4eb2802b9ac3d5608e0f1e4bc44560688974 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_mayer, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, architecture=x86_64)
Dec 05 07:51:15 np0005546420.localdomain podman[27092]: 2025-12-05 07:51:15.658472504 +0000 UTC m=+4.983007146 container start 1790d6477483dd91e6d460aa8c4a4eb2802b9ac3d5608e0f1e4bc44560688974 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_mayer, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, name=rhceph, release=1763362218, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 07:51:15 np0005546420.localdomain podman[27092]: 2025-12-05 07:51:15.658677801 +0000 UTC m=+4.983212443 container attach 1790d6477483dd91e6d460aa8c4a4eb2802b9ac3d5608e0f1e4bc44560688974 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_mayer, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1763362218, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]: [
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:     {
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:         "available": false,
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:         "ceph_device": false,
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:         "lsm_data": {},
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:         "lvs": [],
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:         "path": "/dev/sr0",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:         "rejected_reasons": [
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "Insufficient space (<5GB)",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "Has a FileSystem"
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:         ],
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:         "sys_api": {
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "actuators": null,
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "device_nodes": "sr0",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "human_readable_size": "482.00 KB",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "id_bus": "ata",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "model": "QEMU DVD-ROM",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "nr_requests": "2",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "partitions": {},
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "path": "/dev/sr0",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "removable": "1",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "rev": "2.5+",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "ro": "0",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "rotational": "1",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "sas_address": "",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "sas_device_handle": "",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "scheduler_mode": "mq-deadline",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "sectors": 0,
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "sectorsize": "2048",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "size": 493568.0,
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "support_discard": "0",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "type": "disk",
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:             "vendor": "QEMU"
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:         }
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]:     }
Dec 05 07:51:16 np0005546420.localdomain condescending_mayer[27364]: ]
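
The inventory JSON above lists a single device, /dev/sr0, marked unavailable for two reasons, "Insufficient space (<5GB)" and "Has a FileSystem", so this node offers no OSD-eligible disks. Filtering such output programmatically:

    import json

    # Trimmed copy of the inventory printed above, kept to the fields used here.
    raw_json = '''[{"available": false, "path": "/dev/sr0",
                    "rejected_reasons": ["Insufficient space (<5GB)",
                                         "Has a FileSystem"]}]'''

    inventory = json.loads(raw_json)
    usable = [d["path"] for d in inventory if d["available"]]
    rejected = {d["path"]: d["rejected_reasons"]
                for d in inventory if not d["available"]}
    print(usable)    # -> []
    print(rejected)  # -> {'/dev/sr0': ['Insufficient space (<5GB)', 'Has a FileSystem']}
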
Dec 05 07:51:16 np0005546420.localdomain systemd[1]: libpod-1790d6477483dd91e6d460aa8c4a4eb2802b9ac3d5608e0f1e4bc44560688974.scope: Deactivated successfully.
Dec 05 07:51:16 np0005546420.localdomain podman[27092]: 2025-12-05 07:51:16.503115773 +0000 UTC m=+5.827650405 container died 1790d6477483dd91e6d460aa8c4a4eb2802b9ac3d5608e0f1e4bc44560688974 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_mayer, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, name=rhceph)
Dec 05 07:51:16 np0005546420.localdomain systemd[1]: tmp-crun.B2XbGM.mount: Deactivated successfully.
Dec 05 07:51:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-adfe3a3d984bbb1f8a7f34e042bc348578527cd4ce816efadaf178f9fc9e83e5-merged.mount: Deactivated successfully.
Dec 05 07:51:16 np0005546420.localdomain podman[28749]: 2025-12-05 07:51:16.598182022 +0000 UTC m=+0.083628359 container remove 1790d6477483dd91e6d460aa8c4a4eb2802b9ac3d5608e0f1e4bc44560688974 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_mayer, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 07:51:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:51:16 np0005546420.localdomain systemd[1]: libpod-conmon-1790d6477483dd91e6d460aa8c4a4eb2802b9ac3d5608e0f1e4bc44560688974.scope: Deactivated successfully.
Dec 05 07:51:16 np0005546420.localdomain sudo[26849]: pam_unix(sudo:session): session closed for user root
Dec 05 07:51:16 np0005546420.localdomain sudo[28763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:51:16 np0005546420.localdomain sudo[28763]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:51:16 np0005546420.localdomain sudo[28763]: pam_unix(sudo:session): session closed for user root
Dec 05 07:51:16 np0005546420.localdomain sudo[28778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 _orch set-coredump-overrides --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b --coredump-max-size=32G
Dec 05 07:51:16 np0005546420.localdomain sudo[28778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:51:17 np0005546420.localdomain systemd[1]: systemd-coredump.socket: Deactivated successfully.
Dec 05 07:51:17 np0005546420.localdomain systemd[1]: Closed Process Core Dump Socket.
Dec 05 07:51:17 np0005546420.localdomain systemd[1]: Stopping Process Core Dump Socket...
Dec 05 07:51:17 np0005546420.localdomain systemd[1]: Listening on Process Core Dump Socket.
Dec 05 07:51:17 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:51:17 np0005546420.localdomain systemd-rc-local-generator[28831]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:51:17 np0005546420.localdomain systemd-sysv-generator[28835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:51:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:51:17 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:51:17 np0005546420.localdomain systemd-rc-local-generator[28873]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:51:17 np0005546420.localdomain systemd-sysv-generator[28876]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:51:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:51:17 np0005546420.localdomain sudo[28778]: pam_unix(sudo:session): session closed for user root
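
`_orch set-coredump-overrides --coredump-max-size=32G` raises the core-dump size cap for Ceph daemons; the close/reopen of the Process Core Dump Socket and the two daemon reloads in the journal are consistent with a configuration override being written and picked up. The log does not show what cephadm writes, so the drop-in below is an assumption; only the [Coredump] section and the ProcessSizeMax=/ExternalSizeMax= option names are standard systemd-coredump settings:

    import pathlib
    import subprocess

    # Assumed override: a systemd-coredump drop-in raising the size limits
    # to 32G, followed by a daemon reload like the ones logged above.
    dropin = pathlib.Path("/etc/systemd/coredump.conf.d/90-ceph.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text("[Coredump]\nProcessSizeMax=32G\nExternalSizeMax=32G\n")
    subprocess.run(["systemctl", "daemon-reload"], check=True)
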
Dec 05 07:51:46 np0005546420.localdomain sudo[28882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:51:46 np0005546420.localdomain sudo[28882]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:51:46 np0005546420.localdomain sudo[28882]: pam_unix(sudo:session): session closed for user root
Dec 05 07:51:46 np0005546420.localdomain sudo[28897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 07:51:46 np0005546420.localdomain sudo[28897]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:51:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:51:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:51:46 np0005546420.localdomain podman[28953]: 2025-12-05 07:51:46.828572385 +0000 UTC m=+0.048898030 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:51:46 np0005546420.localdomain podman[28953]: 2025-12-05 07:51:46.928832993 +0000 UTC m=+0.149158598 container create 3bfb281bed5614bc9da786c4f8d32d8a31c84ac04474cf446d37c8b1699c15e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_antonelli, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218)
Dec 05 07:51:46 np0005546420.localdomain systemd[1]: Started libpod-conmon-3bfb281bed5614bc9da786c4f8d32d8a31c84ac04474cf446d37c8b1699c15e3.scope.
Dec 05 07:51:46 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:51:47 np0005546420.localdomain podman[28953]: 2025-12-05 07:51:47.003174638 +0000 UTC m=+0.223500243 container init 3bfb281bed5614bc9da786c4f8d32d8a31c84ac04474cf446d37c8b1699c15e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_antonelli, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 07:51:47 np0005546420.localdomain podman[28953]: 2025-12-05 07:51:47.014296689 +0000 UTC m=+0.234622294 container start 3bfb281bed5614bc9da786c4f8d32d8a31c84ac04474cf446d37c8b1699c15e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_antonelli, version=7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, release=1763362218, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True)
Dec 05 07:51:47 np0005546420.localdomain podman[28953]: 2025-12-05 07:51:47.014519536 +0000 UTC m=+0.234845141 container attach 3bfb281bed5614bc9da786c4f8d32d8a31c84ac04474cf446d37c8b1699c15e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_antonelli, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, name=rhceph, RELEASE=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 07:51:47 np0005546420.localdomain hungry_antonelli[28969]: 167 167
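
The hungry_antonelli container lives for roughly 20 ms and prints only "167 167": cephadm runs the image once to learn the numeric uid/gid of the ceph user inside it, so it can chown the host-side data directories to match (167 is the conventional ceph uid/gid in Red Hat's images). A rough stand-alone reproduction (the stat-based probe is an assumption about how the numbers are obtained, not taken from this log):

  # Print the owner uid/gid of /var/lib/ceph as seen inside the image.
  podman run --rm --entrypoint stat \
      registry.redhat.io/rhceph/rhceph-7-rhel9:latest \
      -c '%u %g' /var/lib/ceph
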
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: libpod-3bfb281bed5614bc9da786c4f8d32d8a31c84ac04474cf446d37c8b1699c15e3.scope: Deactivated successfully.
Dec 05 07:51:47 np0005546420.localdomain podman[28953]: 2025-12-05 07:51:47.018621459 +0000 UTC m=+0.238947064 container died 3bfb281bed5614bc9da786c4f8d32d8a31c84ac04474cf446d37c8b1699c15e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_antonelli, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=)
Dec 05 07:51:47 np0005546420.localdomain podman[28974]: 2025-12-05 07:51:47.114626928 +0000 UTC m=+0.083042608 container remove 3bfb281bed5614bc9da786c4f8d32d8a31c84ac04474cf446d37c8b1699c15e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_antonelli, architecture=x86_64, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True)
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: libpod-conmon-3bfb281bed5614bc9da786c4f8d32d8a31c84ac04474cf446d37c8b1699c15e3.scope: Deactivated successfully.
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:51:47 np0005546420.localdomain systemd-sysv-generator[29019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:51:47 np0005546420.localdomain systemd-rc-local-generator[29012]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:51:47 np0005546420.localdomain systemd-sysv-generator[29056]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:51:47 np0005546420.localdomain systemd-rc-local-generator[29051]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: Reached target All Ceph clusters and services.
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:51:47 np0005546420.localdomain systemd-rc-local-generator[29090]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:51:47 np0005546420.localdomain systemd-sysv-generator[29093]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:51:47 np0005546420.localdomain systemd[1]: Reached target Ceph cluster 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
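
The two "Reached target" lines map onto cephadm's systemd layout: "All Ceph clusters and services" is the global ceph.target, and "Ceph cluster 79feddb1-..." is the per-cluster ceph-<fsid>.target; the slice created just below groups the daemon cgroups, and individual daemons run as templated ceph-<fsid>@<daemon>.service units. A sketch for inspecting that hierarchy, assuming these conventional unit names:

  systemctl list-dependencies ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b.target
  systemctl status 'ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b@*'
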
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:51:48 np0005546420.localdomain systemd-rc-local-generator[29131]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:51:48 np0005546420.localdomain systemd-sysv-generator[29135]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:51:48 np0005546420.localdomain systemd-rc-local-generator[29172]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:51:48 np0005546420.localdomain systemd-sysv-generator[29176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: Created slice Slice /system/ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: Reached target System Time Set.
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: Reached target System Time Synchronized.
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: Starting Ceph crash.np0005546420 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b...
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:51:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 07:51:48 np0005546420.localdomain podman[29233]: 2025-12-05 07:51:48.889390219 +0000 UTC m=+0.075047929 container create 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, name=rhceph, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main)
Dec 05 07:51:48 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f37c68e92c16a5782747cca48d85cb22340f8ad170fca8046f30546f3e5d000/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:48 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f37c68e92c16a5782747cca48d85cb22340f8ad170fca8046f30546f3e5d000/merged/etc/ceph/ceph.client.crash.np0005546420.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:48 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f37c68e92c16a5782747cca48d85cb22340f8ad170fca8046f30546f3e5d000/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:48 np0005546420.localdomain podman[29233]: 2025-12-05 07:51:48.861215984 +0000 UTC m=+0.046873714 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:51:48 np0005546420.localdomain podman[29233]: 2025-12-05 07:51:48.984068146 +0000 UTC m=+0.169725856 container init 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, ceph=True, distribution-scope=public, name=rhceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=)
Dec 05 07:51:48 np0005546420.localdomain podman[29233]: 2025-12-05 07:51:48.994431862 +0000 UTC m=+0.180089562 container start 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7)
Dec 05 07:51:48 np0005546420.localdomain bash[29233]: 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18
Dec 05 07:51:49 np0005546420.localdomain systemd[1]: Started Ceph crash.np0005546420 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 05 07:51:49 np0005546420.localdomain sudo[28897]: pam_unix(sudo:session): session closed for user root
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: 2025-12-05T07:51:49.165+0000 7f79d7a96640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: 2025-12-05T07:51:49.165+0000 7f79d7a96640 -1 AuthRegistry(0x7f79d0067c70) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: 2025-12-05T07:51:49.166+0000 7f79d7a96640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: 2025-12-05T07:51:49.166+0000 7f79d7a96640 -1 AuthRegistry(0x7f79d7a95000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: 2025-12-05T07:51:49.174+0000 7f79d600c640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: 2025-12-05T07:51:49.176+0000 7f79d580b640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: 2025-12-05T07:51:49.176+0000 7f79d500a640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: 2025-12-05T07:51:49.176+0000 7f79d7a96640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 05 07:51:49 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420[29248]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
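
The crash agent's failure above is narrower than it looks: its start-up ping searched only the default admin keyring paths (/etc/ceph/ceph.client.admin.keyring and friends), while this daemon's own key is bind-mounted as /etc/ceph/ceph.client.crash.np0005546420.keyring (visible in the xfs remount line at 07:51:48). With no keyring found, the client disabled cephx and offered auth method 1 (none) to monitors that require method 2 (cephx), hence the EACCES; the agent still monitors /var/lib/ceph/crash as logged. A hypothetical manual check with the mounted key (container name taken from the log tag; whether `-s` is permitted depends on the crash key's caps, but any cephx-authenticated reply already rules out the "no keyring" failure mode):

  podman exec ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420 \
      ceph -n client.crash.np0005546420 \
      --keyring /etc/ceph/ceph.client.crash.np0005546420.keyring -s
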
Dec 05 07:51:49 np0005546420.localdomain sudo[29255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:51:49 np0005546420.localdomain sudo[29255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:51:49 np0005546420.localdomain sudo[29255]: pam_unix(sudo:session): session closed for user root
Dec 05 07:51:49 np0005546420.localdomain sudo[29280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 --yes --no-systemd
Dec 05 07:51:49 np0005546420.localdomain sudo[29280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
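
The ceph-volume invocation above is worth unpacking: `lvm batch` is handed two pre-created logical volumes (ceph_vg0/ceph_lv0, ceph_vg1/ceph_lv1) rather than raw disks; --no-auto disables batch's automatic device strategy so the LVs are consumed exactly as given, --yes skips the interactive confirmation, and --no-systemd suppresses unit enablement because cephadm manages its own units. CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group tags the resulting OSDs with the originating service spec, which is why --osdspec-affinity reappears in the mkfs calls below. batch can also describe its plan without touching anything; a hypothetical dry run (assuming ceph-volume is available on the host, otherwise run it through the staged cephadm as above):

  sudo ceph-volume lvm batch --report --format pretty \
      /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1
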
Dec 05 07:51:49 np0005546420.localdomain podman[29332]: 2025-12-05 07:51:49.856143629 +0000 UTC m=+0.073850420 container create d289fb6782358b5d9c656985a235bc0fd9bfa68032be3563d04c59063d42b873 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_lamarr, version=7, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1763362218, ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 07:51:49 np0005546420.localdomain systemd[1]: Started libpod-conmon-d289fb6782358b5d9c656985a235bc0fd9bfa68032be3563d04c59063d42b873.scope.
Dec 05 07:51:49 np0005546420.localdomain podman[29332]: 2025-12-05 07:51:49.828254853 +0000 UTC m=+0.045961644 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:51:49 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:51:49 np0005546420.localdomain podman[29332]: 2025-12-05 07:51:49.948549231 +0000 UTC m=+0.166256022 container init d289fb6782358b5d9c656985a235bc0fd9bfa68032be3563d04c59063d42b873 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_lamarr, GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph)
Dec 05 07:51:49 np0005546420.localdomain podman[29332]: 2025-12-05 07:51:49.959417804 +0000 UTC m=+0.177124645 container start d289fb6782358b5d9c656985a235bc0fd9bfa68032be3563d04c59063d42b873 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_lamarr, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True)
Dec 05 07:51:49 np0005546420.localdomain podman[29332]: 2025-12-05 07:51:49.96021453 +0000 UTC m=+0.177921361 container attach d289fb6782358b5d9c656985a235bc0fd9bfa68032be3563d04c59063d42b873 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_lamarr, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, release=1763362218)
Dec 05 07:51:49 np0005546420.localdomain adoring_lamarr[29348]: 167 167
Dec 05 07:51:49 np0005546420.localdomain systemd[1]: libpod-d289fb6782358b5d9c656985a235bc0fd9bfa68032be3563d04c59063d42b873.scope: Deactivated successfully.
Dec 05 07:51:49 np0005546420.localdomain podman[29332]: 2025-12-05 07:51:49.964002683 +0000 UTC m=+0.181709494 container died d289fb6782358b5d9c656985a235bc0fd9bfa68032be3563d04c59063d42b873 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_lamarr, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, architecture=x86_64, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 07:51:50 np0005546420.localdomain podman[29353]: 2025-12-05 07:51:50.066980178 +0000 UTC m=+0.087209834 container remove d289fb6782358b5d9c656985a235bc0fd9bfa68032be3563d04c59063d42b873 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_lamarr, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 07:51:50 np0005546420.localdomain systemd[1]: libpod-conmon-d289fb6782358b5d9c656985a235bc0fd9bfa68032be3563d04c59063d42b873.scope: Deactivated successfully.
Dec 05 07:51:50 np0005546420.localdomain podman[29372]: 2025-12-05 07:51:50.296687471 +0000 UTC m=+0.081098256 container create 0c0bd04b3e491de411043068cf9aa0197c7c44039581a0516da89713fa77aab9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_davinci, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, version=7, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 07:51:50 np0005546420.localdomain systemd[1]: Started libpod-conmon-0c0bd04b3e491de411043068cf9aa0197c7c44039581a0516da89713fa77aab9.scope.
Dec 05 07:51:50 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:51:50 np0005546420.localdomain podman[29372]: 2025-12-05 07:51:50.265490188 +0000 UTC m=+0.049900983 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:51:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3043896d6407ec50831762f311ca7b2aaf19d5f58599f188d205bb1896d1cfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3043896d6407ec50831762f311ca7b2aaf19d5f58599f188d205bb1896d1cfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3043896d6407ec50831762f311ca7b2aaf19d5f58599f188d205bb1896d1cfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3043896d6407ec50831762f311ca7b2aaf19d5f58599f188d205bb1896d1cfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3043896d6407ec50831762f311ca7b2aaf19d5f58599f188d205bb1896d1cfc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:50 np0005546420.localdomain podman[29372]: 2025-12-05 07:51:50.428260266 +0000 UTC m=+0.212671051 container init 0c0bd04b3e491de411043068cf9aa0197c7c44039581a0516da89713fa77aab9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_davinci, com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, version=7, release=1763362218, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 07:51:50 np0005546420.localdomain podman[29372]: 2025-12-05 07:51:50.437535618 +0000 UTC m=+0.221946403 container start 0c0bd04b3e491de411043068cf9aa0197c7c44039581a0516da89713fa77aab9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_davinci, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, version=7, build-date=2025-11-26T19:44:28Z, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 07:51:50 np0005546420.localdomain podman[29372]: 2025-12-05 07:51:50.437804047 +0000 UTC m=+0.222214882 container attach 0c0bd04b3e491de411043068cf9aa0197c7c44039581a0516da89713fa77aab9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_davinci, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, version=7)
Dec 05 07:51:50 np0005546420.localdomain systemd[1]: tmp-crun.vZeZ4u.mount: Deactivated successfully.
Dec 05 07:51:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f04568170f072f0d7af20c03ef20acf942f105f2089a24354b0ef6cdac9e8ed5-merged.mount: Deactivated successfully.
Dec 05 07:51:50 np0005546420.localdomain beautiful_davinci[29388]: --> passed data devices: 0 physical, 2 LVM
Dec 05 07:51:50 np0005546420.localdomain beautiful_davinci[29388]: --> relative data size: 1.0
Dec 05 07:51:50 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 07:51:51 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 5de3622b-c6b4-45a6-8ef6-d7ebe58a162b
Dec 05 07:51:51 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 07:51:51 np0005546420.localdomain lvm[29442]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 07:51:51 np0005546420.localdomain lvm[29442]: VG ceph_vg0 finished
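
The lvm[29442] pair is udev-driven autoactivation: a pvscan noticed the PV on /dev/loop3, found ceph_vg0 complete, and activated it; the loop device shows these OSD "disks" are loopback-backed LVs in this environment. A quick look at that layout, using only names from the log:

  sudo pvs /dev/loop3
  sudo lvs ceph_vg0 ceph_vg1
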
Dec 05 07:51:51 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 05 07:51:51 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 05 07:51:51 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 07:51:51 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 05 07:51:51 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 05 07:51:52 np0005546420.localdomain sshd[29470]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:51:52 np0005546420.localdomain sshd[29470]: error: kex_exchange_identification: banner line contains invalid characters
Dec 05 07:51:52 np0005546420.localdomain sshd[29470]: banner exchange: Connection from 64.62.197.32 port 20472: invalid format
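
The three sshd lines are noise unrelated to the deployment: an external host, 64.62.197.32, connected to sshd and sent an identification banner containing invalid characters, so the connection was dropped during banner exchange; the "ssh-rsa algorithm is disabled" message reflects the system-wide crypto policy on RHEL 9, not an error. To gauge how often that address probes the node, a simple filter:

  journalctl -u sshd --since today | grep 64.62.197.32
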
Dec 05 07:51:52 np0005546420.localdomain beautiful_davinci[29388]:  stderr: got monmap epoch 3
Dec 05 07:51:52 np0005546420.localdomain beautiful_davinci[29388]: --> Creating keyring file for osd.1
Dec 05 07:51:52 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 05 07:51:52 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 05 07:51:52 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 5de3622b-c6b4-45a6-8ef6-d7ebe58a162b --setuser ceph --setgroup ceph
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]:  stderr: 2025-12-05T07:51:52.134+0000 7f71881eda80 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]:  stderr: 2025-12-05T07:51:52.134+0000 7f71881eda80 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
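
The two stderr lines at 07:51:52 (_read_bdev_label "unable to decode label", _read_fsid "unparsable uuid") are expected on a freshly prepared LV: ceph-osd --mkfs probes for an existing bluestore label and fsid before writing new ones, and on blank media the probe reads garbage. The prepare/activate/create "successful" lines confirm osd.1 is healthy. After mkfs the label is readable; a hypothetical verification (run it via `cephadm shell` if the tool is not installed on the host):

  sudo ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0
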
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 07:51:54 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d4943a67-6268-48a0-b84a-a9a49f3de9c5
Dec 05 07:51:55 np0005546420.localdomain lvm[30377]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 07:51:55 np0005546420.localdomain lvm[30377]: VG ceph_vg1 finished
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]:  stderr: got monmap epoch 3
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]: --> Creating keyring file for osd.4
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Dec 05 07:51:55 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid d4943a67-6268-48a0-b84a-a9a49f3de9c5 --setuser ceph --setgroup ceph
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]:  stderr: 2025-12-05T07:51:55.970+0000 7fe148f30a80 -1 bluestore(/var/lib/ceph/osd/ceph-4//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]:  stderr: 2025-12-05T07:51:55.970+0000 7fe148f30a80 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]: --> ceph-volume lvm activate successful for osd ID: 4
Dec 05 07:51:58 np0005546420.localdomain beautiful_davinci[29388]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 05 07:51:58 np0005546420.localdomain systemd[1]: libpod-0c0bd04b3e491de411043068cf9aa0197c7c44039581a0516da89713fa77aab9.scope: Deactivated successfully.
Dec 05 07:51:58 np0005546420.localdomain systemd[1]: libpod-0c0bd04b3e491de411043068cf9aa0197c7c44039581a0516da89713fa77aab9.scope: Consumed 3.864s CPU time.
Dec 05 07:51:58 np0005546420.localdomain podman[29372]: 2025-12-05 07:51:58.592384314 +0000 UTC m=+8.376795179 container died 0c0bd04b3e491de411043068cf9aa0197c7c44039581a0516da89713fa77aab9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_davinci, io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, release=1763362218, io.openshift.expose-services=, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True)
Dec 05 07:51:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f3043896d6407ec50831762f311ca7b2aaf19d5f58599f188d205bb1896d1cfc-merged.mount: Deactivated successfully.
Dec 05 07:51:58 np0005546420.localdomain podman[31280]: 2025-12-05 07:51:58.677594352 +0000 UTC m=+0.075716201 container remove 0c0bd04b3e491de411043068cf9aa0197c7c44039581a0516da89713fa77aab9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_davinci, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, distribution-scope=public, version=7, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main)
Dec 05 07:51:58 np0005546420.localdomain systemd[1]: libpod-conmon-0c0bd04b3e491de411043068cf9aa0197c7c44039581a0516da89713fa77aab9.scope: Deactivated successfully.
Dec 05 07:51:58 np0005546420.localdomain sudo[29280]: pam_unix(sudo:session): session closed for user root
Dec 05 07:51:58 np0005546420.localdomain sudo[31297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:51:58 np0005546420.localdomain sudo[31297]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:51:58 np0005546420.localdomain sudo[31297]: pam_unix(sudo:session): session closed for user root
Dec 05 07:51:58 np0005546420.localdomain sudo[31312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- lvm list --format json
Dec 05 07:51:58 np0005546420.localdomain sudo[31312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
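
Having created both OSDs, cephadm immediately re-inventories them with `ceph-volume lvm list --format json` (the command above) so the orchestrator can map osd.1 and osd.4 back to their LVs, fsids, and osdspec-affinity tags. The same report can be pulled by hand through the staged cephadm copy; a sketch (pretty-printing assumes the JSON arrives on stdout):

  sudo /bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 \
      ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- lvm list --format json \
      | python3 -m json.tool
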
Dec 05 07:51:59 np0005546420.localdomain podman[31367]: 2025-12-05 07:51:59.43005359 +0000 UTC m=+0.068041263 container create 89101ad76459c6dcd53b57a692b568a78c5d53eed64e0900ce8ad689b503576a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_haslett, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, release=1763362218, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, ceph=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 07:51:59 np0005546420.localdomain systemd[1]: Started libpod-conmon-89101ad76459c6dcd53b57a692b568a78c5d53eed64e0900ce8ad689b503576a.scope.
Dec 05 07:51:59 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:51:59 np0005546420.localdomain podman[31367]: 2025-12-05 07:51:59.492604062 +0000 UTC m=+0.130591745 container init 89101ad76459c6dcd53b57a692b568a78c5d53eed64e0900ce8ad689b503576a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_haslett, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, ceph=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 07:51:59 np0005546420.localdomain podman[31367]: 2025-12-05 07:51:59.403995182 +0000 UTC m=+0.041982915 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:51:59 np0005546420.localdomain podman[31367]: 2025-12-05 07:51:59.504173837 +0000 UTC m=+0.142161520 container start 89101ad76459c6dcd53b57a692b568a78c5d53eed64e0900ce8ad689b503576a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_haslett, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, vendor=Red Hat, Inc., version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git)
Dec 05 07:51:59 np0005546420.localdomain podman[31367]: 2025-12-05 07:51:59.504427525 +0000 UTC m=+0.142415198 container attach 89101ad76459c6dcd53b57a692b568a78c5d53eed64e0900ce8ad689b503576a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_haslett, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, name=rhceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 07:51:59 np0005546420.localdomain sweet_haslett[31382]: 167 167
Dec 05 07:51:59 np0005546420.localdomain systemd[1]: libpod-89101ad76459c6dcd53b57a692b568a78c5d53eed64e0900ce8ad689b503576a.scope: Deactivated successfully.
Dec 05 07:51:59 np0005546420.localdomain podman[31367]: 2025-12-05 07:51:59.507514356 +0000 UTC m=+0.145502059 container died 89101ad76459c6dcd53b57a692b568a78c5d53eed64e0900ce8ad689b503576a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_haslett, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, ceph=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 07:51:59 np0005546420.localdomain podman[31387]: 2025-12-05 07:51:59.600127975 +0000 UTC m=+0.079419231 container remove 89101ad76459c6dcd53b57a692b568a78c5d53eed64e0900ce8ad689b503576a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_haslett, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=1763362218, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vendor=Red Hat, Inc., RELEASE=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True)
Dec 05 07:51:59 np0005546420.localdomain systemd[1]: libpod-conmon-89101ad76459c6dcd53b57a692b568a78c5d53eed64e0900ce8ad689b503576a.scope: Deactivated successfully.
Dec 05 07:51:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-39e577fa480d85571d4c93087cb3941f996a876a096385e030fcdad7a0e0b479-merged.mount: Deactivated successfully.
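The "167 167" printed by sweet_haslett above is consistent with cephadm's uid/gid probe: before deploying a daemon it runs a throwaway container that prints the numeric owner of /var/lib/ceph inside the image, and 167 is the ceph user and group in rhceph images. A minimal sketch of such a probe, assuming podman is on PATH and the image is already pulled (the helper name is illustrative, not cephadm's own code):

    import subprocess

    def probe_ceph_uid_gid(image: str) -> tuple[int, int]:
        # Run a short-lived container whose only job is to print the
        # numeric owner of /var/lib/ceph baked into the image.
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            check=True, capture_output=True, text=True,
        ).stdout
        uid, gid = out.split()
        return int(uid), int(gid)

    print(probe_ceph_uid_gid("registry.redhat.io/rhceph/rhceph-7-rhel9:latest"))
    # expected for this image, per the log line above: (167, 167)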
Dec 05 07:51:59 np0005546420.localdomain podman[31408]: 
Dec 05 07:51:59 np0005546420.localdomain podman[31408]: 2025-12-05 07:51:59.813950702 +0000 UTC m=+0.070686438 container create 06f9bdcf4cd3337b76364e09d5f808bf5b6c54646ef3e274dd9895f04407e979 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_wing, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, version=7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, ceph=True, release=1763362218, vcs-type=git, name=rhceph)
Dec 05 07:51:59 np0005546420.localdomain systemd[1]: Started libpod-conmon-06f9bdcf4cd3337b76364e09d5f808bf5b6c54646ef3e274dd9895f04407e979.scope.
Dec 05 07:51:59 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:51:59 np0005546420.localdomain podman[31408]: 2025-12-05 07:51:59.787069968 +0000 UTC m=+0.043805734 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:51:59 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1fc0bd99ad86d0330b18747af901397e0f1c218d45953924710802cd16d259f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:59 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1fc0bd99ad86d0330b18747af901397e0f1c218d45953924710802cd16d259f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:51:59 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1fc0bd99ad86d0330b18747af901397e0f1c218d45953924710802cd16d259f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
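The xfs messages above are the kernel noting that these overlay mounts use the classic 32-bit inode timestamp format, which tops out at 0x7fffffff seconds after the Unix epoch. A quick check of where that limit lands:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF          # the value quoted by the kernel message
    print(limit)                # 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- the "until 2038" in the log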
Dec 05 07:51:59 np0005546420.localdomain podman[31408]: 2025-12-05 07:51:59.922308042 +0000 UTC m=+0.179043788 container init 06f9bdcf4cd3337b76364e09d5f808bf5b6c54646ef3e274dd9895f04407e979 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_wing, release=1763362218, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, build-date=2025-11-26T19:44:28Z, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 07:51:59 np0005546420.localdomain podman[31408]: 2025-12-05 07:51:59.932887746 +0000 UTC m=+0.189623482 container start 06f9bdcf4cd3337b76364e09d5f808bf5b6c54646ef3e274dd9895f04407e979 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_wing, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, name=rhceph, build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, com.redhat.component=rhceph-container)
Dec 05 07:51:59 np0005546420.localdomain podman[31408]: 2025-12-05 07:51:59.933116533 +0000 UTC m=+0.189852269 container attach 06f9bdcf4cd3337b76364e09d5f808bf5b6c54646ef3e274dd9895f04407e979 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_wing, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]: {
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:     "1": [
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:         {
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "devices": [
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "/dev/loop3"
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             ],
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "lv_name": "ceph_lv0",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "lv_size": "7511998464",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=65sdRd-a7fN-YJRh-pFpO-8iBY-yKo2-t9wEgN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=79feddb1-4bfc-557f-83b9-0d57c9f66c1b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5de3622b-c6b4-45a6-8ef6-d7ebe58a162b,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "lv_uuid": "65sdRd-a7fN-YJRh-pFpO-8iBY-yKo2-t9wEgN",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "name": "ceph_lv0",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "tags": {
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.block_uuid": "65sdRd-a7fN-YJRh-pFpO-8iBY-yKo2-t9wEgN",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.cluster_fsid": "79feddb1-4bfc-557f-83b9-0d57c9f66c1b",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.cluster_name": "ceph",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.crush_device_class": "",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.encrypted": "0",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.osd_fsid": "5de3622b-c6b4-45a6-8ef6-d7ebe58a162b",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.osd_id": "1",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.type": "block",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.vdo": "0"
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             },
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "type": "block",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "vg_name": "ceph_vg0"
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:         }
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:     ],
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:     "4": [
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:         {
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "devices": [
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "/dev/loop4"
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             ],
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "lv_name": "ceph_lv1",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "lv_size": "7511998464",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=RXiSPk-waHH-XCdn-e871-GRdB-BQum-RnOF3u,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=79feddb1-4bfc-557f-83b9-0d57c9f66c1b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d4943a67-6268-48a0-b84a-a9a49f3de9c5,ceph.osd_id=4,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "lv_uuid": "RXiSPk-waHH-XCdn-e871-GRdB-BQum-RnOF3u",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "name": "ceph_lv1",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "tags": {
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.block_uuid": "RXiSPk-waHH-XCdn-e871-GRdB-BQum-RnOF3u",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.cluster_fsid": "79feddb1-4bfc-557f-83b9-0d57c9f66c1b",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.cluster_name": "ceph",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.crush_device_class": "",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.encrypted": "0",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.osd_fsid": "d4943a67-6268-48a0-b84a-a9a49f3de9c5",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.osd_id": "4",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.type": "block",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:                 "ceph.vdo": "0"
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             },
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "type": "block",
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:             "vg_name": "ceph_vg1"
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:         }
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]:     ]
Dec 05 07:52:00 np0005546420.localdomain quirky_wing[31424]: }
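The JSON block printed by quirky_wing has the shape of ceph-volume lvm list --format json output: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags present both as a flat lv_tags string and as a parsed tags object. A small sketch that reduces it to an osd_id -> block device summary, assuming the JSON has been captured to a file (the filename is illustrative):

    import json

    # e.g. the container's stdout redirected to this file; path is an assumption
    with open("ceph-volume-lvm-list.json") as fh:
        inventory = json.load(fh)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"devices={','.join(lv['devices'])})")
    # osd.1: /dev/ceph_vg0/ceph_lv0 (osd_fsid=5de3622b-..., devices=/dev/loop3)
    # osd.4: /dev/ceph_vg1/ceph_lv1 (osd_fsid=d4943a67-..., devices=/dev/loop4)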
Dec 05 07:52:00 np0005546420.localdomain systemd[1]: libpod-06f9bdcf4cd3337b76364e09d5f808bf5b6c54646ef3e274dd9895f04407e979.scope: Deactivated successfully.
Dec 05 07:52:00 np0005546420.localdomain podman[31408]: 2025-12-05 07:52:00.314208785 +0000 UTC m=+0.570944581 container died 06f9bdcf4cd3337b76364e09d5f808bf5b6c54646ef3e274dd9895f04407e979 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_wing, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 07:52:00 np0005546420.localdomain podman[31433]: 2025-12-05 07:52:00.414758212 +0000 UTC m=+0.085331603 container remove 06f9bdcf4cd3337b76364e09d5f808bf5b6c54646ef3e274dd9895f04407e979 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_wing, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218)
Dec 05 07:52:00 np0005546420.localdomain systemd[1]: libpod-conmon-06f9bdcf4cd3337b76364e09d5f808bf5b6c54646ef3e274dd9895f04407e979.scope: Deactivated successfully.
Dec 05 07:52:00 np0005546420.localdomain sudo[31312]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:00 np0005546420.localdomain sudo[31449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:52:00 np0005546420.localdomain sudo[31449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:00 np0005546420.localdomain sudo[31449]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:00 np0005546420.localdomain sudo[31464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 07:52:00 np0005546420.localdomain sudo[31464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f1fc0bd99ad86d0330b18747af901397e0f1c218d45953924710802cd16d259f-merged.mount: Deactivated successfully.
Dec 05 07:52:01 np0005546420.localdomain podman[31521]: 
Dec 05 07:52:01 np0005546420.localdomain podman[31521]: 2025-12-05 07:52:01.166752724 +0000 UTC m=+0.068639321 container create cc294fd0701f41a6c6c2792e78273538f117b87c20813c486545fa0b2dc3ff4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_ishizaka, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64)
Dec 05 07:52:01 np0005546420.localdomain systemd[1]: Started libpod-conmon-cc294fd0701f41a6c6c2792e78273538f117b87c20813c486545fa0b2dc3ff4e.scope.
Dec 05 07:52:01 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:52:01 np0005546420.localdomain podman[31521]: 2025-12-05 07:52:01.230872346 +0000 UTC m=+0.132758943 container init cc294fd0701f41a6c6c2792e78273538f117b87c20813c486545fa0b2dc3ff4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_ishizaka, io.buildah.version=1.41.4, release=1763362218, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main)
Dec 05 07:52:01 np0005546420.localdomain podman[31521]: 2025-12-05 07:52:01.139820619 +0000 UTC m=+0.041707246 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:01 np0005546420.localdomain podman[31521]: 2025-12-05 07:52:01.242457794 +0000 UTC m=+0.144344381 container start cc294fd0701f41a6c6c2792e78273538f117b87c20813c486545fa0b2dc3ff4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_ishizaka, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 07:52:01 np0005546420.localdomain podman[31521]: 2025-12-05 07:52:01.242672771 +0000 UTC m=+0.144559358 container attach cc294fd0701f41a6c6c2792e78273538f117b87c20813c486545fa0b2dc3ff4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_ishizaka, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True)
Dec 05 07:52:01 np0005546420.localdomain fervent_ishizaka[31538]: 167 167
Dec 05 07:52:01 np0005546420.localdomain systemd[1]: libpod-cc294fd0701f41a6c6c2792e78273538f117b87c20813c486545fa0b2dc3ff4e.scope: Deactivated successfully.
Dec 05 07:52:01 np0005546420.localdomain podman[31521]: 2025-12-05 07:52:01.245686948 +0000 UTC m=+0.147573585 container died cc294fd0701f41a6c6c2792e78273538f117b87c20813c486545fa0b2dc3ff4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_ishizaka, version=7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=1763362218, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph)
Dec 05 07:52:01 np0005546420.localdomain podman[31543]: 2025-12-05 07:52:01.334479393 +0000 UTC m=+0.077519559 container remove cc294fd0701f41a6c6c2792e78273538f117b87c20813c486545fa0b2dc3ff4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_ishizaka, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public)
Dec 05 07:52:01 np0005546420.localdomain systemd[1]: libpod-conmon-cc294fd0701f41a6c6c2792e78273538f117b87c20813c486545fa0b2dc3ff4e.scope: Deactivated successfully.
Dec 05 07:52:01 np0005546420.localdomain podman[31570]: 
Dec 05 07:52:01 np0005546420.localdomain systemd[1]: tmp-crun.8f5M9L.mount: Deactivated successfully.
Dec 05 07:52:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a3f8b63051556bd2dc5046b3c9d44d12dc2ca802ad23216f44040b6b942aed59-merged.mount: Deactivated successfully.
Dec 05 07:52:01 np0005546420.localdomain podman[31570]: 2025-12-05 07:52:01.663574535 +0000 UTC m=+0.071684600 container create 082ed5ab4decbd365ec94bd0516e52fcc603ef781c885448e47860b260d97510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate-test, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 05 07:52:01 np0005546420.localdomain systemd[1]: Started libpod-conmon-082ed5ab4decbd365ec94bd0516e52fcc603ef781c885448e47860b260d97510.scope.
Dec 05 07:52:01 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:52:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62fc6f5be4af2682ebd724b3d315c8208443b669ccfb4612fa4e178f4d134d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:01 np0005546420.localdomain podman[31570]: 2025-12-05 07:52:01.635024338 +0000 UTC m=+0.043134403 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62fc6f5be4af2682ebd724b3d315c8208443b669ccfb4612fa4e178f4d134d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62fc6f5be4af2682ebd724b3d315c8208443b669ccfb4612fa4e178f4d134d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62fc6f5be4af2682ebd724b3d315c8208443b669ccfb4612fa4e178f4d134d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62fc6f5be4af2682ebd724b3d315c8208443b669ccfb4612fa4e178f4d134d2/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:01 np0005546420.localdomain podman[31570]: 2025-12-05 07:52:01.765700113 +0000 UTC m=+0.173810168 container init 082ed5ab4decbd365ec94bd0516e52fcc603ef781c885448e47860b260d97510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate-test, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218)
Dec 05 07:52:01 np0005546420.localdomain podman[31570]: 2025-12-05 07:52:01.776349659 +0000 UTC m=+0.184459724 container start 082ed5ab4decbd365ec94bd0516e52fcc603ef781c885448e47860b260d97510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate-test, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, release=1763362218)
Dec 05 07:52:01 np0005546420.localdomain podman[31570]: 2025-12-05 07:52:01.776614437 +0000 UTC m=+0.184724552 container attach 082ed5ab4decbd365ec94bd0516e52fcc603ef781c885448e47860b260d97510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate-test, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.41.4, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 07:52:01 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate-test[31585]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 05 07:52:01 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate-test[31585]:                             [--no-systemd] [--no-tmpfs]
Dec 05 07:52:01 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate-test[31585]: ceph-volume activate: error: unrecognized arguments: --bad-option
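The usage text and unrecognized-arguments error above are classic Python argparse behavior: the activate-test container passed a flag the subcommand does not define, argparse printed the usage line for the flags it does know and exited non-zero, and the container then shows up as died in the following lines. A minimal reproduction of that failure shape with a parser defining the same flags (a sketch of the error mechanism, not ceph-volume's actual parser):

    import argparse

    parser = argparse.ArgumentParser(prog="ceph-volume activate")
    parser.add_argument("--osd-id")
    parser.add_argument("--osd-uuid")
    parser.add_argument("--no-systemd", action="store_true")
    parser.add_argument("--no-tmpfs", action="store_true")

    # parse_args() exits with status 2 on an unknown flag, printing:
    #   usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
    #                               [--no-systemd] [--no-tmpfs]
    #   ceph-volume activate: error: unrecognized arguments: --bad-option
    parser.parse_args(["--osd-id", "1", "--bad-option"])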
Dec 05 07:52:01 np0005546420.localdomain systemd[1]: libpod-082ed5ab4decbd365ec94bd0516e52fcc603ef781c885448e47860b260d97510.scope: Deactivated successfully.
Dec 05 07:52:02 np0005546420.localdomain podman[31590]: 2025-12-05 07:52:02.035779628 +0000 UTC m=+0.034232543 container died 082ed5ab4decbd365ec94bd0516e52fcc603ef781c885448e47860b260d97510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate-test, vcs-type=git, name=rhceph, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 07:52:02 np0005546420.localdomain podman[31590]: 2025-12-05 07:52:02.068058886 +0000 UTC m=+0.066511751 container remove 082ed5ab4decbd365ec94bd0516e52fcc603ef781c885448e47860b260d97510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate-test, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, description=Red Hat Ceph Storage 7)
Dec 05 07:52:02 np0005546420.localdomain systemd-journald[619]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Dec 05 07:52:02 np0005546420.localdomain systemd-journald[619]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 05 07:52:02 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 07:52:02 np0005546420.localdomain systemd[1]: libpod-conmon-082ed5ab4decbd365ec94bd0516e52fcc603ef781c885448e47860b260d97510.scope: Deactivated successfully.
Dec 05 07:52:02 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 07:52:02 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:52:02 np0005546420.localdomain systemd-rc-local-generator[31648]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:52:02 np0005546420.localdomain systemd-sysv-generator[31652]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:52:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:52:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f62fc6f5be4af2682ebd724b3d315c8208443b669ccfb4612fa4e178f4d134d2-merged.mount: Deactivated successfully.
Dec 05 07:52:02 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:52:02 np0005546420.localdomain systemd-rc-local-generator[31689]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:52:02 np0005546420.localdomain systemd-sysv-generator[31693]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:52:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:52:02 np0005546420.localdomain systemd[1]: Starting Ceph osd.1 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b...
Dec 05 07:52:03 np0005546420.localdomain podman[31754]: 
Dec 05 07:52:03 np0005546420.localdomain podman[31754]: 2025-12-05 07:52:03.243539617 +0000 UTC m=+0.074536943 container create 942bb7c3fb9d168cb3ea20d313e632d7ed9538ab9fccfae981acffc6799eff53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main)
Dec 05 07:52:03 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:52:03 np0005546420.localdomain podman[31754]: 2025-12-05 07:52:03.212396435 +0000 UTC m=+0.043393791 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48c4865d5be5f9fe2e8336af36413bcf6311b88c49795a74058cf442e831720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48c4865d5be5f9fe2e8336af36413bcf6311b88c49795a74058cf442e831720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48c4865d5be5f9fe2e8336af36413bcf6311b88c49795a74058cf442e831720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48c4865d5be5f9fe2e8336af36413bcf6311b88c49795a74058cf442e831720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48c4865d5be5f9fe2e8336af36413bcf6311b88c49795a74058cf442e831720/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:03 np0005546420.localdomain podman[31754]: 2025-12-05 07:52:03.368823738 +0000 UTC m=+0.199821074 container init 942bb7c3fb9d168cb3ea20d313e632d7ed9538ab9fccfae981acffc6799eff53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, ceph=True)
Dec 05 07:52:03 np0005546420.localdomain podman[31754]: 2025-12-05 07:52:03.378197282 +0000 UTC m=+0.209194618 container start 942bb7c3fb9d168cb3ea20d313e632d7ed9538ab9fccfae981acffc6799eff53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, version=7, vcs-type=git, release=1763362218, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=)
Dec 05 07:52:03 np0005546420.localdomain podman[31754]: 2025-12-05 07:52:03.378513982 +0000 UTC m=+0.209511308 container attach 942bb7c3fb9d168cb3ea20d313e632d7ed9538ab9fccfae981acffc6799eff53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, architecture=x86_64, release=1763362218, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7)
Dec 05 07:52:03 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate[31768]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 07:52:03 np0005546420.localdomain bash[31754]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 07:52:03 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate[31768]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 05 07:52:03 np0005546420.localdomain bash[31754]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 05 07:52:04 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate[31768]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 05 07:52:04 np0005546420.localdomain bash[31754]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 05 07:52:04 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate[31768]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 07:52:04 np0005546420.localdomain bash[31754]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 07:52:04 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate[31768]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 05 07:52:04 np0005546420.localdomain bash[31754]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 05 07:52:04 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate[31768]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 07:52:04 np0005546420.localdomain bash[31754]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 07:52:04 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate[31768]: --> ceph-volume raw activate successful for osd ID: 1
Dec 05 07:52:04 np0005546420.localdomain bash[31754]: --> ceph-volume raw activate successful for osd ID: 1
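The activate sequence above is fully described by its own command lines: chown the OSD directory, let ceph-bluestore-tool prime-osd-dir populate it from the BlueStore label on the LV, hand the dm device to ceph:ceph, and symlink it in as block. A post-activation sanity check along those lines, assuming it runs as root on this host (paths and the 167:167 ownership are copied from the log, not invented):

    import os

    osd_dir = "/var/lib/ceph/osd/ceph-1"
    block = os.path.join(osd_dir, "block")

    # the symlink created by the final ln -s step in the log
    assert os.path.islink(block)
    assert os.readlink(block) == "/dev/mapper/ceph_vg0-ceph_lv0"

    # ownership applied by the chown steps; 167:167 is ceph:ceph
    # inside the rhceph image (the "167 167" probe output earlier)
    st = os.stat(osd_dir)
    assert (st.st_uid, st.st_gid) == (167, 167)
    print("osd.1 data dir looks activated")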
Dec 05 07:52:04 np0005546420.localdomain systemd[1]: libpod-942bb7c3fb9d168cb3ea20d313e632d7ed9538ab9fccfae981acffc6799eff53.scope: Deactivated successfully.
Dec 05 07:52:04 np0005546420.localdomain podman[31754]: 2025-12-05 07:52:04.083144396 +0000 UTC m=+0.914141702 container died 942bb7c3fb9d168cb3ea20d313e632d7ed9538ab9fccfae981acffc6799eff53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, build-date=2025-11-26T19:44:28Z, version=7, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 07:52:04 np0005546420.localdomain systemd[1]: tmp-crun.3Hiu4c.mount: Deactivated successfully.
Dec 05 07:52:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c48c4865d5be5f9fe2e8336af36413bcf6311b88c49795a74058cf442e831720-merged.mount: Deactivated successfully.
Dec 05 07:52:04 np0005546420.localdomain podman[31883]: 2025-12-05 07:52:04.186003187 +0000 UTC m=+0.091187883 container remove 942bb7c3fb9d168cb3ea20d313e632d7ed9538ab9fccfae981acffc6799eff53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1-activate, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 07:52:04 np0005546420.localdomain podman[31943]: 
Dec 05 07:52:04 np0005546420.localdomain podman[31943]: 2025-12-05 07:52:04.510179669 +0000 UTC m=+0.073843559 container create 0a2b57e71e8cd9c1e9576225381c106fb7c0acba8c07cfc2bd0cc57ab7b33232 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64)
Dec 05 07:52:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f3940e2f9897ecc62f72c6422f7fa990e778dba733c1f77336f8b8b0025383/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:04 np0005546420.localdomain podman[31943]: 2025-12-05 07:52:04.479376119 +0000 UTC m=+0.043039999 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f3940e2f9897ecc62f72c6422f7fa990e778dba733c1f77336f8b8b0025383/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f3940e2f9897ecc62f72c6422f7fa990e778dba733c1f77336f8b8b0025383/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f3940e2f9897ecc62f72c6422f7fa990e778dba733c1f77336f8b8b0025383/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f3940e2f9897ecc62f72c6422f7fa990e778dba733c1f77336f8b8b0025383/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:04 np0005546420.localdomain podman[31943]: 2025-12-05 07:52:04.622158988 +0000 UTC m=+0.185822878 container init 0a2b57e71e8cd9c1e9576225381c106fb7c0acba8c07cfc2bd0cc57ab7b33232 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, ceph=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1763362218, GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=)
Dec 05 07:52:04 np0005546420.localdomain podman[31943]: 2025-12-05 07:52:04.630672504 +0000 UTC m=+0.194336394 container start 0a2b57e71e8cd9c1e9576225381c106fb7c0acba8c07cfc2bd0cc57ab7b33232 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, RELEASE=main, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, ceph=True, release=1763362218, GIT_CLEAN=True)
Dec 05 07:52:04 np0005546420.localdomain bash[31943]: 0a2b57e71e8cd9c1e9576225381c106fb7c0acba8c07cfc2bd0cc57ab7b33232
Dec 05 07:52:04 np0005546420.localdomain systemd[1]: Started Ceph osd.1 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 07:52:04 np0005546420.localdomain sudo[31464]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: pidfile_write: ignore empty --pid-file
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 07:52:04 np0005546420.localdomain sudo[31974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:52:04 np0005546420.localdomain sudo[31974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:04 np0005546420.localdomain sudo[31974]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:04 np0005546420.localdomain sudo[31989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 07:52:04 np0005546420.localdomain sudo[31989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:04 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: load: jerasure load: lrc 
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 07:52:05 np0005546420.localdomain podman[32053]: 
Dec 05 07:52:05 np0005546420.localdomain podman[32053]: 2025-12-05 07:52:05.510962044 +0000 UTC m=+0.072172375 container create d2110c3cfb4d4036d4fe91490ae80ce7b3d852315f68d87d922e550bd564a737 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_haibt, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, release=1763362218, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, RELEASE=main, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, name=rhceph, distribution-scope=public, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 07:52:05 np0005546420.localdomain systemd[1]: Started libpod-conmon-d2110c3cfb4d4036d4fe91490ae80ce7b3d852315f68d87d922e550bd564a737.scope.
Dec 05 07:52:05 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:52:05 np0005546420.localdomain podman[32053]: 2025-12-05 07:52:05.579941295 +0000 UTC m=+0.141151626 container init d2110c3cfb4d4036d4fe91490ae80ce7b3d852315f68d87d922e550bd564a737 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_haibt, vcs-type=git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2025-11-26T19:44:28Z)
Dec 05 07:52:05 np0005546420.localdomain podman[32053]: 2025-12-05 07:52:05.481935151 +0000 UTC m=+0.043145482 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:05 np0005546420.localdomain podman[32053]: 2025-12-05 07:52:05.595229832 +0000 UTC m=+0.156440153 container start d2110c3cfb4d4036d4fe91490ae80ce7b3d852315f68d87d922e550bd564a737 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_haibt, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, version=7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 07:52:05 np0005546420.localdomain podman[32053]: 2025-12-05 07:52:05.595930715 +0000 UTC m=+0.157141096 container attach d2110c3cfb4d4036d4fe91490ae80ce7b3d852315f68d87d922e550bd564a737 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_haibt, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 07:52:05 np0005546420.localdomain ecstatic_haibt[32072]: 167 167
Dec 05 07:52:05 np0005546420.localdomain systemd[1]: libpod-d2110c3cfb4d4036d4fe91490ae80ce7b3d852315f68d87d922e550bd564a737.scope: Deactivated successfully.
Dec 05 07:52:05 np0005546420.localdomain podman[32053]: 2025-12-05 07:52:05.599143499 +0000 UTC m=+0.160353870 container died d2110c3cfb4d4036d4fe91490ae80ce7b3d852315f68d87d922e550bd564a737 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_haibt, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, GIT_CLEAN=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 07:52:05 np0005546420.localdomain podman[32077]: 2025-12-05 07:52:05.68689371 +0000 UTC m=+0.078553443 container remove d2110c3cfb4d4036d4fe91490ae80ce7b3d852315f68d87d922e550bd564a737 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_haibt, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main)
Dec 05 07:52:05 np0005546420.localdomain systemd[1]: libpod-conmon-d2110c3cfb4d4036d4fe91490ae80ce7b3d852315f68d87d922e550bd564a737.scope: Deactivated successfully.
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a0e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluefs mount
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluefs mount shared_bdev_used = 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: RocksDB version: 7.9.2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Git sha 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: DB SUMMARY
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: DB Session ID:  K34U2OPI38XVHQIU7UOO
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: CURRENT file:  CURRENT
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                         Options.error_if_exists: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.create_if_missing: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                                     Options.env: 0x55ee52434cb0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                                Options.info_log: 0x55ee5313c7a0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                              Options.statistics: (nil)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.use_fsync: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                              Options.db_log_dir: 
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.write_buffer_manager: 0x55ee5218a140
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.unordered_write: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.row_cache: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                              Options.wal_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.two_write_queues: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.wal_compression: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.atomic_flush: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.max_background_jobs: 4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.max_background_compactions: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.max_subcompactions: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.max_open_files: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Compression algorithms supported:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kZSTD supported: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kXpressCompression supported: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kBZip2Compression supported: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kLZ4Compression supported: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kZlibCompression supported: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kSnappyCompression supported: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee5313c960)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee52178850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: 
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee5313c960)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee52178850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
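[annotation] The journal repeats an essentially identical RocksDB options dump for each BlueStore column family (m-1, m-2, p-0, p-1 below). When auditing a log like this it can help to fold the dumps back into key/value maps so they can be diffed mechanically. A minimal sketch, assuming the journal text is available as a string; parse_cf_options and the two regexes are hypothetical helpers, not part of Ceph or RocksDB:

```python
import re
from collections import defaultdict

# Matches the lines seen above, e.g.
#   "... rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:"
#   "... rocksdb:        Options.write_buffer_size: 16777216"
CF_RE  = re.compile(r"Options for column family \[([^\]]+)\]")
OPT_RE = re.compile(r"Options\.([A-Za-z0-9_.\[\]]+):\s*(.+?)\s*$")

def parse_cf_options(journal_text):
    """Group each column family's Options.* key/value pairs into a dict."""
    opts, cf = defaultdict(dict), None
    for line in journal_text.splitlines():
        header = CF_RE.search(line)
        if header:                      # new "Options for column family [x]" block
            cf = header.group(1)
            continue
        kv = OPT_RE.search(line)
        if kv and cf is not None:       # attribute the option to the current CF
            opts[cf][kv.group(1)] = kv.group(2)
    return dict(opts)
```

Applied to this section, the per-CF dicts come out identical apart from the family name, which is what makes the long dumps below safe to skim; the parser earns its keep when they are not.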
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee5313c960)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee52178850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
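[annotation] The level-compaction numbers in the dump fix the target size of each LSM level: max_bytes_for_level_base is 1073741824 (1 GiB), max_bytes_for_level_multiplier is 8.0, all addtl factors are 1, and level_compaction_dynamic_level_bytes is 0, so sizing is static from L1 upward. A worked sketch of roughly how RocksDB derives the targets from these values (illustration only, using the constants printed above):

```python
# Static level sizing implied by the dump (dynamic_level_bytes = 0).
base  = 1073741824   # Options.max_bytes_for_level_base (1 GiB)
mult  = 8.0          # Options.max_bytes_for_level_multiplier
addtl = [1] * 7      # Options.max_bytes_for_level_multiplier_addtl[i]

size = base
for level in range(1, 7):               # L1..L6; L0 is file-count based
    print(f"L{level}: {size / 2**30:.0f} GiB")
    size = int(size * mult * addtl[level - 1])
```

This prints 1, 8, 64, 512, 4096 and 32768 GiB for L1 through L6, i.e. the 8x fan-out means the upper levels are effectively unbounded for an OSD-sized store.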
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee5313c960)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee52178850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
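[annotation] The write-path numbers also compose: write_buffer_size is 16 MiB, min_write_buffer_number_to_merge is 6, max_write_buffer_number is 64, and level0_file_num_compaction_trigger is 8. Back-of-the-envelope arithmetic from those values (a sketch that ignores compression and WAL overhead):

```python
MiB = 2**20
write_buffer_size = 16 * MiB   # Options.write_buffer_size
to_merge          = 6          # Options.min_write_buffer_number_to_merge
max_buffers       = 64         # Options.max_write_buffer_number
l0_trigger        = 8          # Options.level0_file_num_compaction_trigger

per_flush    = write_buffer_size * to_merge     # data merged into one L0 file
l0_backlog   = per_flush * l0_trigger           # L0 data before L0->L1 compaction
memtable_cap = write_buffer_size * max_buffers  # worst-case memtable RAM per CF

print(f"per flush:     {per_flush  // MiB} MiB")    # 96 MiB
print(f"L0 at trigger: {l0_backlog // MiB} MiB")    # 768 MiB
print(f"memtable cap:  {memtable_cap // MiB} MiB")  # 1024 MiB
```

So each flush writes a ~96 MiB L0 file, an L0-to-L1 compaction kicks in around 768 MiB of L0 data, and in the worst case a single column family may pin up to 1 GiB of memtables before writes stall.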
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee5313c960)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee52178850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
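[annotation] Note that every table_factory dump in this section reports the same block_cache pointer (0x55ee52178850), so the BinnedLRUCache is one cache shared by all column families rather than one per CF. Its reported geometry, worked out from the printed options (arithmetic only; the 45%-of-1-GiB reading is an observation, not a documented guarantee):

```python
capacity   = 483183820   # block_cache_options.capacity (bytes)
shard_bits = 4           # block_cache_options.num_shard_bits

shards = 1 << shard_bits  # the cache splits capacity across 2**shard_bits shards
print(f"total:     {capacity / 2**20:.1f} MiB")           # ~460.8 MiB
print(f"fraction:  {capacity / 2**30:.2f} GiB")           # 0.45 GiB
print(f"shards:    {shards}")                             # 16
print(f"per shard: {capacity / shards / 2**20:.1f} MiB")  # ~28.8 MiB
```

That is ~460.8 MiB total, i.e. exactly 0.45 of 1 GiB, split into 16 shards of ~28.8 MiB each, with high_pri_pool_ratio 0.000 so no capacity is reserved for high-priority (index/filter) entries.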
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee5313c960)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee52178850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
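[Editor's note] The lines above complete one full per-column-family options dump; the same pattern repeats below for p-2, O-0, O-1 and O-2, each introduced by a "Failed to register data paths of column family" line. As a reading aid, here is a minimal C++ sketch (not taken from the Ceph source) that would reproduce the table_factory portion of the dump with stock RocksDB. All numeric values are copied from the log; BinnedLRUCache is Ceph's own cache implementation, so RocksDB's built-in NewLRUCache stands in for it here as an approximation.

```cpp
// Sketch: the logged table_factory settings expressed as stock RocksDB
// BlockBasedTableOptions. Values come from the dump above; the helper
// name and the bloom bits-per-key are assumptions, not logged values.
#include <memory>
#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/table.h>

rocksdb::BlockBasedTableOptions MakeLoggedTableOptions(
    const std::shared_ptr<rocksdb::Cache>& shared_cache) {
  rocksdb::BlockBasedTableOptions t;
  t.cache_index_and_filter_blocks = true;      // cache_index_and_filter_blocks: 1
  t.pin_top_level_index_and_filter = true;     // pin_top_level_index_and_filter: 1
  t.block_cache = shared_cache;                // block_cache: 0x55ee52178850 (shared)
  t.block_size = 4096;                         // block_size: 4096
  t.block_restart_interval = 16;               // block_restart_interval: 16
  t.metadata_block_size = 4096;                // metadata_block_size: 4096
  t.whole_key_filtering = true;                // whole_key_filtering: 1
  t.format_version = 5;                        // format_version: 5
  t.enable_index_compression = true;           // enable_index_compression: 1
  t.max_auto_readahead_size = 256 * 1024;      // max_auto_readahead_size: 262144
  t.initial_auto_readahead_size = 8 * 1024;    // initial_auto_readahead_size: 8192
  // filter_policy: bloomfilter -- the dump does not record bits-per-key,
  // so 10 here is an assumption for illustration only.
  t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
  return t;
}

// The shared cache: capacity and num_shard_bits are from block_cache_options.
std::shared_ptr<rocksdb::Cache> MakeLoggedCache() {
  return rocksdb::NewLRUCache(/*capacity=*/483183820, /*num_shard_bits=*/4);
}
```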
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee5313c960)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee52178850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
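[Editor's note] The memtable/compaction half of each dump (the Options.* lines) can be read the same way. A hedged sketch follows, again with every numeric value copied from the log; the helper itself is illustrative, not BlueStore's actual configuration code. Note the combination of 16 MiB write buffers merged six at a time, which implies flushes of roughly 96 MiB into L0.

```cpp
// Sketch: the compaction/memtable side of the dump as a RocksDB
// ColumnFamilyOptions. Values are from the log; structure is illustrative.
#include <rocksdb/options.h>
#include <rocksdb/utilities/table_properties_collectors.h>

rocksdb::ColumnFamilyOptions MakeLoggedCfOptions() {
  rocksdb::ColumnFamilyOptions cf;
  cf.write_buffer_size = 16 * 1024 * 1024;          // write_buffer_size: 16777216
  cf.max_write_buffer_number = 64;                  // max_write_buffer_number: 64
  cf.min_write_buffer_number_to_merge = 6;          // min_write_buffer_number_to_merge: 6
  cf.compression = rocksdb::kLZ4Compression;        // compression: LZ4
  cf.num_levels = 7;                                // num_levels: 7
  cf.level0_file_num_compaction_trigger = 8;        // level0 triggers: 8 / 20 / 36
  cf.level0_slowdown_writes_trigger = 20;
  cf.level0_stop_writes_trigger = 36;
  cf.target_file_size_base = 64ull * 1024 * 1024;   // target_file_size_base: 67108864
  cf.max_bytes_for_level_base = 1ull << 30;         // max_bytes_for_level_base: 1 GiB
  cf.max_bytes_for_level_multiplier = 8;            // max_bytes_for_level_multiplier: 8
  cf.compaction_style = rocksdb::kCompactionStyleLevel;
  cf.compaction_pri = rocksdb::kMinOverlappingRatio;
  cf.ttl = 2592000;                                 // ttl: 30 days in seconds
  cf.force_consistency_checks = true;               // force_consistency_checks: 1
  // table_properties_collectors: CompactOnDeletionCollector
  // (Sliding window size = 32768, Deletion trigger = 16384, Deletion ratio = 0)
  cf.table_properties_collector_factories.push_back(
      rocksdb::NewCompactOnDeletionCollectorFactory(
          /*sliding_window_size=*/32768,
          /*deletion_trigger=*/16384,
          /*deletion_ratio=*/0));
  return cf;
}
```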
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee5313cb80)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee521782d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
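[Editor's note] The dumps are near-identical across column families but not quite: the p-* families log block_cache 0x55ee52178850 with capacity 483,183,820 bytes (90% of 512 MiB), while the O-* families log a different cache, 0x55ee521782d0, with capacity 536,870,912 bytes (exactly 512 MiB). An identical pointer across dumps means the families literally share one cache object, which is presumably how BlueStore's sharded column families split the cache budget between groups. A tiny sketch of that sharing pattern, with illustrative names:

```cpp
// Sketch: two column families sharing one block cache, as the identical
// block_cache pointers in the O-0/O-1/O-2 dumps imply.
#include <rocksdb/cache.h>
#include <rocksdb/table.h>

void ShareOneCache() {
  // 512 MiB cache with 2^4 = 16 shards, per block_cache_options above.
  auto o_cache = rocksdb::NewLRUCache(536870912, 4);
  rocksdb::BlockBasedTableOptions t0, t1;
  t0.block_cache = o_cache;  // O-0
  t1.block_cache = o_cache;  // O-1 -- same pointer, hence the identical
                             // "block_cache: 0x..." value in each dump
  // ... wrap each in NewBlockBasedTableFactory() per column family.
}
```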
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee5313cb80)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee521782d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee5313cb80)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee521782d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 49088708-df3c-47d5-b56d-2597a1d0b08c
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921125787501, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921125787742, "job": 1, "event": "recovery_finished"}
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: freelist init
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: freelist _read_cfg
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bluefs umount
Dec 05 07:52:05 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 07:52:06 np0005546420.localdomain podman[32301]: 2025-12-05 07:52:06.015214577 +0000 UTC m=+0.076782166 container create 7f8670ec93f71ef4dda167ec5b9c10a627281ec2e3fb35273a52ebb1555fbeb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate-test, distribution-scope=public, architecture=x86_64, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, CEPH_POINT_RELEASE=, release=1763362218, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.expose-services=)
Dec 05 07:52:06 np0005546420.localdomain systemd[1]: Started libpod-conmon-7f8670ec93f71ef4dda167ec5b9c10a627281ec2e3fb35273a52ebb1555fbeb7.scope.
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bdev(0x55ee521a1180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bluefs mount
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bluefs mount shared_bdev_used = 4718592
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: RocksDB version: 7.9.2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Git sha 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: DB SUMMARY
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: DB Session ID:  K34U2OPI38XVHQIU7UOP
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: CURRENT file:  CURRENT
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                         Options.error_if_exists: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.create_if_missing: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                                     Options.env: 0x55ee5222c690
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                                Options.info_log: 0x55ee531f83a0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                              Options.statistics: (nil)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.use_fsync: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                              Options.db_log_dir: 
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.write_buffer_manager: 0x55ee5218b5e0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.unordered_write: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.row_cache: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                              Options.wal_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.two_write_queues: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.wal_compression: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.atomic_flush: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.max_background_jobs: 4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.max_background_compactions: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.max_subcompactions: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.max_open_files: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Compression algorithms supported:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kZSTD supported: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kXpressCompression supported: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kBZip2Compression supported: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kLZ4Compression supported: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kZlibCompression supported: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         kSnappyCompression supported: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee531f8600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee521782d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: 
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee531f8600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee521782d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079e2089c31841ac75bfaf75fe36ee0d922654ceb52dcee7c963cc630f696306/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
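[editor's note] Each per-column-family dump above maps field-for-field onto RocksDB's C++ `Options`/`ColumnFamilyOptions` structs. As a minimal sketch only, the snippet below reproduces a few of the logged values through the stock RocksDB API; BlueStore itself builds these from the `bluestore_rocksdb_options` config string rather than by direct field assignment, and the db path here is a placeholder.

```cpp
// Minimal sketch: a few of the logged per-column-family values, set through
// the stock RocksDB C++ API. Illustrative only; not BlueStore's setup code.
#include <rocksdb/db.h>
#include <rocksdb/options.h>

int main() {
  rocksdb::Options opts;
  opts.create_if_missing = true;
  opts.write_buffer_size = 16 * 1024 * 1024;         // Options.write_buffer_size: 16777216
  opts.max_write_buffer_number = 64;                 // Options.max_write_buffer_number: 64
  opts.min_write_buffer_number_to_merge = 6;         // merge 6 memtables per flush
  opts.compression = rocksdb::kLZ4Compression;       // Options.compression: LZ4
  opts.bottommost_compression =
      rocksdb::kDisableCompressionOption;            // bottommost_compression: Disabled
  opts.num_levels = 7;
  opts.level0_file_num_compaction_trigger = 8;
  opts.level0_slowdown_writes_trigger = 20;
  opts.level0_stop_writes_trigger = 36;
  opts.target_file_size_base = 64ULL << 20;          // 67108864
  opts.max_bytes_for_level_base = 1ULL << 30;        // 1073741824
  opts.max_bytes_for_level_multiplier = 8.0;
  opts.compaction_style = rocksdb::kCompactionStyleLevel;
  opts.compaction_pri = rocksdb::kMinOverlappingRatio;
  opts.ttl = 2592000;                                // 30 days, as logged
  opts.force_consistency_checks = true;              // force_consistency_checks: 1

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(opts, "/tmp/rocksdb-sketch", &db);
  if (s.ok()) delete db;
  return s.ok() ? 0 : 1;
}
```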
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee531f8600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee521782d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
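[editor's note] The indented `table_factory options` sub-block repeated in each dump corresponds to RocksDB's `BlockBasedTableOptions`. The sketch below approximates the logged values with upstream types, with two caveats: `BinnedLRUCache` is Ceph's own cache implementation (selected via `rocksdb_cache_type = binned_lru`), so plain `NewLRUCache` stands in for it here, and the bloom bits-per-key is an assumption since the log only names the policy as `bloomfilter`.

```cpp
// Approximation of the logged table_factory options using upstream RocksDB.
// NewLRUCache stands in for Ceph's BinnedLRUCache; bits-per-key (10) is an
// assumption, the log only says "bloomfilter".
#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

rocksdb::Options MakeTableOptions() {
  rocksdb::BlockBasedTableOptions t;
  t.block_size = 4096;                           // block_size: 4096
  t.metadata_block_size = 4096;                  // metadata_block_size: 4096
  t.cache_index_and_filter_blocks = true;        // cache_index_and_filter_blocks: 1
  t.cache_index_and_filter_blocks_with_high_priority = false;
  t.pin_l0_filter_and_index_blocks_in_cache = false;
  t.pin_top_level_index_and_filter = true;       // pin_top_level_index_and_filter: 1
  t.partition_filters = false;                   // partition_filters: 0
  t.whole_key_filtering = true;                  // whole_key_filtering: 1
  t.format_version = 5;                          // format_version: 5
  t.enable_index_compression = true;             // enable_index_compression: 1
  // capacity : 483183820 (~460 MiB), num_shard_bits : 4
  t.block_cache = rocksdb::NewLRUCache(483183820, 4);
  t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));

  rocksdb::Options opts;
  opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
  return opts;
}
```

Note that `block_cache: 0x55ee521782d0` is the same pointer in every dump: all of the column families share one cache instance.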
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee531f8600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee521782d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:52:06 np0005546420.localdomain podman[32301]: 2025-12-05 07:52:05.987441934 +0000 UTC m=+0.049009523 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
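[editor's note] Each "Options for column family [...]" block corresponds to one descriptor handed to `DB::Open`; BlueStore shards its keyspace across column families (the `m-*`/`p-*` names seen here, governed by the sharding spec in recent Ceph releases). The "Failed to register data paths" lines are RocksDB warnings emitted from `db/column_family.cc` during per-CF path registration, not Ceph errors. A minimal multi-CF open, under assumed shared options, looks like this:

```cpp
// Minimal sketch of opening a RocksDB instance with the column families
// seen in this log. Shared ColumnFamilyOptions and the db path are
// placeholders; p-2 is assumed from the m-0..m-2 / p-0..p-1 pattern.
#include <rocksdb/db.h>
#include <vector>

int main() {
  rocksdb::DBOptions db_opts;
  db_opts.create_if_missing = true;
  db_opts.create_missing_column_families = true;

  rocksdb::ColumnFamilyOptions cf_opts;  // one Options dump per CF in the log
  std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
      {rocksdb::kDefaultColumnFamilyName, cf_opts},
      {"m-0", cf_opts}, {"m-1", cf_opts}, {"m-2", cf_opts},
      {"p-0", cf_opts}, {"p-1", cf_opts}, {"p-2", cf_opts},
  };

  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(db_opts, "/tmp/rocksdb-sketch", cfs, &handles, &db);
  if (!s.ok()) return 1;
  for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
  delete db;
  return 0;
}
```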
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee531f8600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee521782d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079e2089c31841ac75bfaf75fe36ee0d922654ceb52dcee7c963cc630f696306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
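[editor's note] A quick sanity check of the compaction geometry these numbers imply: with `level_compaction_dynamic_level_bytes` off, `max_bytes_for_level_base` = 1 GiB and a multiplier of 8 give static targets of 1 GiB (L1), 8 GiB (L2), 64 GiB (L3), and so on, while each flush merges 6 × 16 MiB memtables into roughly a 96 MiB L0 file before compression. The figures below are simple arithmetic from the logged values, not anything the OSD reports directly.

```cpp
// Simple arithmetic restating the compaction geometry the logged options
// imply; nothing here is read from the OSD.
#include <cstdint>
#include <cstdio>

int main() {
  constexpr uint64_t kMiB = 1024 * 1024;
  constexpr uint64_t write_buffer_size = 16 * kMiB;  // 16777216
  constexpr int min_merge = 6;       // min_write_buffer_number_to_merge
  constexpr uint64_t level_base = 1024 * kMiB;  // max_bytes_for_level_base
  constexpr double multiplier = 8.0; // max_bytes_for_level_multiplier

  // ~96 MiB of memtable data per L0 flush, before compression.
  std::printf("L0 flush input: %llu MiB\n",
              (unsigned long long)(write_buffer_size * min_merge / kMiB));

  // Static level targets: L1 = base, Ln = base * multiplier^(n-1).
  double target = static_cast<double>(level_base);
  for (int level = 1; level <= 4; ++level) {
    std::printf("L%d target: %.0f MiB\n", level, target / kMiB);
    target *= multiplier;
  }
  return 0;
}
```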
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee531f8600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee521782d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
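
[editor's note] The table_factory block repeated in each dump above maps almost one-to-one onto RocksDB's BlockBasedTableOptions. A minimal C++ sketch of an equivalent configuration follows; note that the BinnedLRUCache named in the log is Ceph's own cache implementation, so stock RocksDB's NewLRUCache is used here as a stand-in, and the bloom bits-per-key value is an assumption since the dump only prints "bloomfilter".

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::Options MakeTableOptions() {
      rocksdb::BlockBasedTableOptions bbto;
      bbto.block_size = 4096;                       // block_size: 4096
      bbto.cache_index_and_filter_blocks = true;    // cache_index_and_filter_blocks: 1
      bbto.pin_top_level_index_and_filter = true;   // pin_top_level_index_and_filter: 1
      bbto.whole_key_filtering = true;              // whole_key_filtering: 1
      bbto.format_version = 5;                      // format_version: 5
      // "filter_policy: bloomfilter"; 10 bits/key is an assumed value.
      bbto.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // Stand-in for Ceph's BinnedLRUCache: capacity 483183820, num_shard_bits 4.
      bbto.block_cache = rocksdb::NewLRUCache(483183820, 4);

      rocksdb::Options opts;
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(bbto));
      return opts;
    }
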
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee531f8600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee521782d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
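
[editor's note] With level_compaction_dynamic_level_bytes: 0, level targets grow geometrically from max_bytes_for_level_base by max_bytes_for_level_multiplier, scaled per level by the addtl factors (all 1 in these dumps). A small sketch of the arithmetic implied by the values above: L1 = 1 GiB, L2 = 8 GiB, L3 = 64 GiB, and so on up through num_levels - 1.

    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint64_t base = 1073741824ULL;  // max_bytes_for_level_base
      const double mult = 8.0;              // max_bytes_for_level_multiplier
      uint64_t target = base;
      // num_levels: 7, so the sized levels are L1..L6 (addtl factors all 1).
      for (int level = 1; level <= 6; ++level) {
        std::printf("L%d target: %llu bytes\n",
                    level, (unsigned long long)target);
        target = (uint64_t)(target * mult);
      }
      return 0;
    }
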
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee531f9800)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee52179610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
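
[editor's note] The per-column-family knobs dumped above (16 MiB write buffers, up to 64 of them, 6 merged per flush, LZ4 with bottommost compression disabled, L0 compaction/slowdown/stop triggers at 8/20/36) correspond to these ColumnFamilyOptions fields. A hedged sketch, assuming current stock RocksDB headers rather than the exact vendored build this ceph-osd links against:

    #include <rocksdb/options.h>

    rocksdb::ColumnFamilyOptions MakeCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 * 1024 * 1024;   // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;   // ~6 x 16 MiB per flush
      cf.compression = rocksdb::kLZ4Compression; // Options.compression: LZ4
      cf.bottommost_compression = rocksdb::kDisableCompressionOption;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      return cf;
    }

With these numbers, a flush lands at roughly 96 MiB (6 x 16 MiB merged memtables), and up to 1 GiB of memtables (64 x 16 MiB) can accumulate before writes stall.
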
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee531f9800)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee52179610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079e2089c31841ac75bfaf75fe36ee0d922654ceb52dcee7c963cc630f696306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
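
[editor's note] Each dump also lists a CompactOnDeletionCollector with a 32768-key sliding window, a 16384-deletion trigger, and a 0 deletion ratio. RocksDB ships a factory for this collector, so an equivalent setup would look roughly like the following; this is a sketch assuming the upstream NewCompactOnDeletionCollectorFactory signature.

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    rocksdb::Options MakeDeletionAwareOptions() {
      rocksdb::Options opts;
      // Matches the log: Sliding window size = 32768,
      // Deletion trigger = 16384, Deletion ratio = 0.
      opts.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
      return opts;
    }

The effect is that an SST file whose sliding window ever contains 16384 deletion entries is marked for compaction, which is how deletion-heavy OSD workloads avoid accumulating tombstones.
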
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ee531f9800)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55ee52179610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
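Each column family's configuration is dumped as aligned `Options.<name>: <value>` lines like those above. For comparing two OSDs (or two boots) offline, a minimal Python sketch that folds such journal lines into a dict; the capture file name `osd.log` is a hypothetical, not anything the log prescribes:

```python
import re
from pathlib import Path

# Matches aligned dump lines such as
#   "... rocksdb:        Options.write_buffer_size: 16777216"
# The char class also admits dotted and bracketed names, e.g.
# "compression_opts.level" or "max_bytes_for_level_multiplier_addtl[0]".
OPT_RE = re.compile(r'Options\.([A-Za-z0-9_.\[\]]+):\s+(.*\S)')

def parse_options_dump(lines):
    """Fold 'Options.<name>: <value>' journal lines into a dict of strings."""
    opts = {}
    for line in lines:
        m = OPT_RE.search(line)
        if m:
            opts[m.group(1)] = m.group(2)
    return opts

if __name__ == "__main__":
    # 'osd.log' is assumed to hold a saved copy of this journal section.
    opts = parse_options_dump(Path("osd.log").read_text().splitlines())
    print(opts.get("write_buffer_size"))        # e.g. '16777216'
    print(opts.get("max_write_buffer_number"))  # e.g. '64'
```

Values are kept as strings on purpose: the dump mixes integers, floats, booleans, and enum names (e.g. `kCompactionStyleLevel`), so any typing is better done per key by the consumer.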
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
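The `Failed to register data paths` message appears once per column family in this boot (O-2 above, then L and P). To see how widespread it is across a longer capture, a small grep-style sketch; counting per column-family name is an assumption about what is worth tracking, not something the message itself implies:

```python
import re
import sys
from collections import Counter

# Matches e.g. "Failed to register data paths of column family (id: 9, name: O-2)"
REG_RE = re.compile(r'Failed to register data paths of column family '
                    r'\(id: (\d+), name: ([^)]+)\)')

# e.g.: journalctl -t ceph-osd | python3 count_register_warnings.py
counts = Counter(m.group(2) for m in map(REG_RE.search, sys.stdin) if m)
for name, n in counts.items():
    print(f"{name}: {n}")
```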
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
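The recovery lines above enumerate all twelve column families this BlueStore instance shards its RocksDB into (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P). Pulling that inventory out of a journal capture mechanically, as a sketch:

```python
import re
import sys

# Matches e.g. "[db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5"
CF_RE = re.compile(r'Column family \[([^\]]+)\] \(ID (\d+)\)')

# e.g.: journalctl -t ceph-osd | python3 list_column_families.py
families = {int(m.group(2)): m.group(1)
            for m in (CF_RE.search(line) for line in sys.stdin) if m}
print(families)  # e.g. {0: 'default', 1: 'm-0', ..., 10: 'L', 11: 'P'}
```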
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 49088708-df3c-47d5-b56d-2597a1d0b08c
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921126083271, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921126091460, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921126, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "49088708-df3c-47d5-b56d-2597a1d0b08c", "db_session_id": "K34U2OPI38XVHQIU7UOP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921126095193, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921126, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "49088708-df3c-47d5-b56d-2597a1d0b08c", "db_session_id": "K34U2OPI38XVHQIU7UOP", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921126101167, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921126, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "49088708-df3c-47d5-b56d-2597a1d0b08c", "db_session_id": "K34U2OPI38XVHQIU7UOP", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While opening a file for appending: db.wal/000031.log: No such file or directory
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921126104761, "job": 1, "event": "recovery_finished"}
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 05 07:52:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079e2089c31841ac75bfaf75fe36ee0d922654ceb52dcee7c963cc630f696306/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/079e2089c31841ac75bfaf75fe36ee0d922654ceb52dcee7c963cc630f696306/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:06 np0005546420.localdomain podman[32301]: 2025-12-05 07:52:06.126656717 +0000 UTC m=+0.188224316 container init 7f8670ec93f71ef4dda167ec5b9c10a627281ec2e3fb35273a52ebb1555fbeb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate-test, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, release=1763362218, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, ceph=True, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc.)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ee521e4700
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: DB pointer 0x55ee53091a00
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
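The `_open_db` line records the effective RocksDB option string as comma-separated `key=value` pairs, the same shape Ceph accepts in its `bluestore_rocksdb_options` setting. Splitting it back into a mapping is trivial; the literal below is copied verbatim from the line above:

```python
# The option string logged by _open_db above, verbatim.
OPTS = ("compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

def parse_opt_string(s):
    """Split a 'k=v,k=v,...' RocksDB option string into a dict."""
    return dict(item.split("=", 1) for item in s.split(","))

print(parse_opt_string(OPTS)["write_buffer_size"])  # 16777216
```

Note these values agree with the per-column-family dump earlier (write_buffer_size 16777216, max_write_buffer_number 64, level0 trigger 8), which is the expected consistency check between the Ceph-side option string and what RocksDB actually applied.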
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
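The `DUMPING STATS` block that follows repeats a fixed-format summary per column family; for monitoring, the single most useful line is usually the write-stall summary. A rough helper, assuming journal text on stdin, that pulls the cumulative stall figure out of each dump:

```python
import re
import sys

# Matches e.g. "Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent"
STALL_RE = re.compile(r'Cumulative stall: (\S+) H:M:S, ([\d.]+) percent')

# e.g.: journalctl -t ceph-osd | python3 stalls.py
for line in sys.stdin:
    m = STALL_RE.search(line)
    if m:
        print(f"stall={m.group(1)} ({m.group(2)}%)")
```

Anything above 0.0 percent here would indicate RocksDB throttling writes (level0 slowdown/stop or pending-compaction limits), which the per-CF `Stalls(count)` lines below break down by cause.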
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: _get_class not permitted to load lua
Dec 05 07:52:06 np0005546420.localdomain podman[32301]: 2025-12-05 07:52:06.136181647 +0000 UTC m=+0.197749246 container start 7f8670ec93f71ef4dda167ec5b9c10a627281ec2e3fb35273a52ebb1555fbeb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate-test, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True)
Dec 05 07:52:06 np0005546420.localdomain podman[32301]: 2025-12-05 07:52:06.136465307 +0000 UTC m=+0.198032956 container attach 7f8670ec93f71ef4dda167ec5b9c10a627281ec2e3fb35273a52ebb1555fbeb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate-test, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, version=7, release=1763362218, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: _get_class not permitted to load sdk
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: _get_class not permitted to load test_remote_reads
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: osd.1 0 load_pgs
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: osd.1 0 load_pgs opened 0 pgs
Dec 05 07:52:06 np0005546420.localdomain ceph-osd[31961]: osd.1 0 log_to_monitors true
Dec 05 07:52:06 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1[31957]: 2025-12-05T07:52:06.136+0000 7f332cdb8a80 -1 osd.1 0 log_to_monitors true
Dec 05 07:52:06 np0005546420.localdomain systemd[1]: tmp-crun.WJQFR7.mount: Deactivated successfully.
Dec 05 07:52:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-130d277d4989dfe95ecc5d9c937ba08f3666345d22cad8c9bc64fb7d276994f7-merged.mount: Deactivated successfully.
Dec 05 07:52:06 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate-test[32318]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 05 07:52:06 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate-test[32318]:                             [--no-systemd] [--no-tmpfs]
Dec 05 07:52:06 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate-test[32318]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 05 07:52:06 np0005546420.localdomain systemd[1]: libpod-7f8670ec93f71ef4dda167ec5b9c10a627281ec2e3fb35273a52ebb1555fbeb7.scope: Deactivated successfully.
Dec 05 07:52:06 np0005546420.localdomain podman[32301]: 2025-12-05 07:52:06.351584765 +0000 UTC m=+0.413152424 container died 7f8670ec93f71ef4dda167ec5b9c10a627281ec2e3fb35273a52ebb1555fbeb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate-test, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, name=rhceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git)
Dec 05 07:52:06 np0005546420.localdomain systemd[1]: tmp-crun.dnctPu.mount: Deactivated successfully.
Dec 05 07:52:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-079e2089c31841ac75bfaf75fe36ee0d922654ceb52dcee7c963cc630f696306-merged.mount: Deactivated successfully.
Dec 05 07:52:06 np0005546420.localdomain podman[32537]: 2025-12-05 07:52:06.428790454 +0000 UTC m=+0.070310545 container remove 7f8670ec93f71ef4dda167ec5b9c10a627281ec2e3fb35273a52ebb1555fbeb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate-test, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, vcs-type=git, name=rhceph, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 07:52:06 np0005546420.localdomain systemd[1]: libpod-conmon-7f8670ec93f71ef4dda167ec5b9c10a627281ec2e3fb35273a52ebb1555fbeb7.scope: Deactivated successfully.
Dec 05 07:52:06 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:52:06 np0005546420.localdomain systemd-sysv-generator[32599]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:52:06 np0005546420.localdomain systemd-rc-local-generator[32596]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:52:06 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:52:07 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:52:07 np0005546420.localdomain systemd-sysv-generator[32639]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:52:07 np0005546420.localdomain systemd-rc-local-generator[32635]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:52:07 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 05 07:52:07 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 05 07:52:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:52:07 np0005546420.localdomain systemd[1]: Starting Ceph osd.4 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b...
Dec 05 07:52:07 np0005546420.localdomain podman[32698]: 
Dec 05 07:52:07 np0005546420.localdomain podman[32698]: 2025-12-05 07:52:07.661119372 +0000 UTC m=+0.081329014 container create 8e54c6b3597f179ccc311a0bc4e63a4f13a60fc5fb4b209d8175bdc9e3bfcdb3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, release=1763362218)
Dec 05 07:52:07 np0005546420.localdomain systemd[1]: tmp-crun.DrfqIv.mount: Deactivated successfully.
Dec 05 07:52:07 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:52:07 np0005546420.localdomain podman[32698]: 2025-12-05 07:52:07.626171606 +0000 UTC m=+0.046381238 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:07 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917238a9e19ab8a7731638563c11bc52c978a60dd6b2dfa0e6afa137a0ed8925/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:07 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917238a9e19ab8a7731638563c11bc52c978a60dd6b2dfa0e6afa137a0ed8925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:07 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917238a9e19ab8a7731638563c11bc52c978a60dd6b2dfa0e6afa137a0ed8925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:07 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917238a9e19ab8a7731638563c11bc52c978a60dd6b2dfa0e6afa137a0ed8925/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:07 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917238a9e19ab8a7731638563c11bc52c978a60dd6b2dfa0e6afa137a0ed8925/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:07 np0005546420.localdomain podman[32698]: 2025-12-05 07:52:07.793927586 +0000 UTC m=+0.214137198 container init 8e54c6b3597f179ccc311a0bc4e63a4f13a60fc5fb4b209d8175bdc9e3bfcdb3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1763362218)
Dec 05 07:52:07 np0005546420.localdomain podman[32698]: 2025-12-05 07:52:07.806290628 +0000 UTC m=+0.226500230 container start 8e54c6b3597f179ccc311a0bc4e63a4f13a60fc5fb4b209d8175bdc9e3bfcdb3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7)
Dec 05 07:52:07 np0005546420.localdomain podman[32698]: 2025-12-05 07:52:07.806619409 +0000 UTC m=+0.226829011 container attach 8e54c6b3597f179ccc311a0bc4e63a4f13a60fc5fb4b209d8175bdc9e3bfcdb3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate, version=7, vcs-type=git, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1763362218, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 07:52:08 np0005546420.localdomain ceph-osd[31961]: osd.1 0 done with init, starting boot process
Dec 05 07:52:08 np0005546420.localdomain ceph-osd[31961]: osd.1 0 start_boot
Dec 05 07:52:08 np0005546420.localdomain ceph-osd[31961]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 05 07:52:08 np0005546420.localdomain ceph-osd[31961]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 05 07:52:08 np0005546420.localdomain ceph-osd[31961]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 05 07:52:08 np0005546420.localdomain ceph-osd[31961]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 05 07:52:08 np0005546420.localdomain ceph-osd[31961]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 05 07:52:08 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate[32713]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 05 07:52:08 np0005546420.localdomain bash[32698]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 05 07:52:08 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate[32713]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 05 07:52:08 np0005546420.localdomain bash[32698]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 05 07:52:08 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate[32713]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 05 07:52:08 np0005546420.localdomain bash[32698]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 05 07:52:08 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate[32713]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 05 07:52:08 np0005546420.localdomain bash[32698]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 05 07:52:08 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate[32713]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 05 07:52:08 np0005546420.localdomain bash[32698]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 05 07:52:08 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate[32713]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 05 07:52:08 np0005546420.localdomain bash[32698]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 05 07:52:08 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate[32713]: --> ceph-volume raw activate successful for osd ID: 4
Dec 05 07:52:08 np0005546420.localdomain bash[32698]: --> ceph-volume raw activate successful for osd ID: 4
Dec 05 07:52:08 np0005546420.localdomain systemd[1]: libpod-8e54c6b3597f179ccc311a0bc4e63a4f13a60fc5fb4b209d8175bdc9e3bfcdb3.scope: Deactivated successfully.
Dec 05 07:52:08 np0005546420.localdomain podman[32698]: 2025-12-05 07:52:08.583996135 +0000 UTC m=+1.004205717 container died 8e54c6b3597f179ccc311a0bc4e63a4f13a60fc5fb4b209d8175bdc9e3bfcdb3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vcs-type=git, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 07:52:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-917238a9e19ab8a7731638563c11bc52c978a60dd6b2dfa0e6afa137a0ed8925-merged.mount: Deactivated successfully.
Dec 05 07:52:08 np0005546420.localdomain podman[32830]: 2025-12-05 07:52:08.730003368 +0000 UTC m=+0.131114690 container remove 8e54c6b3597f179ccc311a0bc4e63a4f13a60fc5fb4b209d8175bdc9e3bfcdb3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4-activate, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 07:52:09 np0005546420.localdomain podman[32889]: 
Dec 05 07:52:09 np0005546420.localdomain podman[32889]: 2025-12-05 07:52:09.077315752 +0000 UTC m=+0.080342160 container create a8936b740a4e506d1381bb670fa4bc3c7a6ea910d3fa8fd897ae2165b62d05d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph)
Dec 05 07:52:09 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c4dbb66468418255572103a79f0ed411d4aafaf0d9587ed7334d0437a36f42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:09 np0005546420.localdomain podman[32889]: 2025-12-05 07:52:09.048365492 +0000 UTC m=+0.051391920 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:09 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c4dbb66468418255572103a79f0ed411d4aafaf0d9587ed7334d0437a36f42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:09 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c4dbb66468418255572103a79f0ed411d4aafaf0d9587ed7334d0437a36f42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:09 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c4dbb66468418255572103a79f0ed411d4aafaf0d9587ed7334d0437a36f42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:09 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c4dbb66468418255572103a79f0ed411d4aafaf0d9587ed7334d0437a36f42/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:09 np0005546420.localdomain podman[32889]: 2025-12-05 07:52:09.210108877 +0000 UTC m=+0.213135245 container init a8936b740a4e506d1381bb670fa4bc3c7a6ea910d3fa8fd897ae2165b62d05d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, release=1763362218, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7)
Dec 05 07:52:09 np0005546420.localdomain podman[32889]: 2025-12-05 07:52:09.218882642 +0000 UTC m=+0.221909040 container start a8936b740a4e506d1381bb670fa4bc3c7a6ea910d3fa8fd897ae2165b62d05d8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 07:52:09 np0005546420.localdomain bash[32889]: a8936b740a4e506d1381bb670fa4bc3c7a6ea910d3fa8fd897ae2165b62d05d8
Dec 05 07:52:09 np0005546420.localdomain systemd[1]: Started Ceph osd.4 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: pidfile_write: ignore empty --pid-file
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) close
Dec 05 07:52:09 np0005546420.localdomain sudo[31989]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:09 np0005546420.localdomain sudo[32920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:52:09 np0005546420.localdomain sudo[32920]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:09 np0005546420.localdomain sudo[32920]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:09 np0005546420.localdomain sudo[32935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- raw list --format json
Dec 05 07:52:09 np0005546420.localdomain sudo[32935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) close
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: starting osd.4 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: load: jerasure load: lrc 
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 05 07:52:09 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) close
Dec 05 07:52:09 np0005546420.localdomain podman[32998]: 
Dec 05 07:52:10 np0005546420.localdomain podman[32998]: 2025-12-05 07:52:10.014520162 +0000 UTC m=+0.091661069 container create 1c50cfdb972adcfa86db8f30111783f7e16a1b80c1d46b05cc2ec3c2a0c6e8a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_pare, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.41.4, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True)
Dec 05 07:52:10 np0005546420.localdomain systemd[1]: Started libpod-conmon-1c50cfdb972adcfa86db8f30111783f7e16a1b80c1d46b05cc2ec3c2a0c6e8a3.scope.
Dec 05 07:52:10 np0005546420.localdomain podman[32998]: 2025-12-05 07:52:09.973154028 +0000 UTC m=+0.050294965 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:10 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) close
Dec 05 07:52:10 np0005546420.localdomain podman[32998]: 2025-12-05 07:52:10.108748274 +0000 UTC m=+0.185889181 container init 1c50cfdb972adcfa86db8f30111783f7e16a1b80c1d46b05cc2ec3c2a0c6e8a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_pare, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, version=7, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git)
Dec 05 07:52:10 np0005546420.localdomain podman[32998]: 2025-12-05 07:52:10.118534021 +0000 UTC m=+0.195674928 container start 1c50cfdb972adcfa86db8f30111783f7e16a1b80c1d46b05cc2ec3c2a0c6e8a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_pare, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_BRANCH=main)
Dec 05 07:52:10 np0005546420.localdomain podman[32998]: 2025-12-05 07:52:10.118733958 +0000 UTC m=+0.195874865 container attach 1c50cfdb972adcfa86db8f30111783f7e16a1b80c1d46b05cc2ec3c2a0c6e8a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_pare, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True, io.buildah.version=1.41.4)
Dec 05 07:52:10 np0005546420.localdomain jovial_pare[33013]: 167 167
Dec 05 07:52:10 np0005546420.localdomain systemd[1]: libpod-1c50cfdb972adcfa86db8f30111783f7e16a1b80c1d46b05cc2ec3c2a0c6e8a3.scope: Deactivated successfully.
Dec 05 07:52:10 np0005546420.localdomain podman[32998]: 2025-12-05 07:52:10.123232944 +0000 UTC m=+0.200373881 container died 1c50cfdb972adcfa86db8f30111783f7e16a1b80c1d46b05cc2ec3c2a0c6e8a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_pare, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, release=1763362218, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 07:52:10 np0005546420.localdomain podman[33022]: 2025-12-05 07:52:10.253568838 +0000 UTC m=+0.116186095 container remove 1c50cfdb972adcfa86db8f30111783f7e16a1b80c1d46b05cc2ec3c2a0c6e8a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_pare, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=)
Dec 05 07:52:10 np0005546420.localdomain systemd[1]: libpod-conmon-1c50cfdb972adcfa86db8f30111783f7e16a1b80c1d46b05cc2ec3c2a0c6e8a3.scope: Deactivated successfully.
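The podman lines above trace one short-lived helper container (jovial_pare) through its full lifecycle in under 150 ms: init, start, attach, died, remove. When a log interleaves several such containers, extracting one lifecycle by container ID is a small parsing job; here is a sketch written against the journal line shape seen in this log (the regex and the osd-host.log filename are illustrative, not a general podman-events parser):

    import re

    # Matches journal lines of the form:
    #   podman[PID]: 2025-12-05 07:52:10.1087... +0000 UTC m=+0.18... container EVENT <64-hex-id> (...)
    EVENT_RE = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+ \S+ UTC) m=\S+ "
        r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def lifecycle(lines, cid_prefix):
        """Yield (timestamp, event) for one container, identified by ID prefix."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m and m.group("cid").startswith(cid_prefix):
                yield m.group("ts"), m.group("event")

    # with open("osd-host.log") as f:
    #     for ts, event in lifecycle(f, "1c50cfdb"):
    #         print(ts, event)   # -> init, start, attach, died, remove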
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: osd.4:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
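The two scheduler lines above summarize Ceph's mClock cost model for this OSD: a per-shard bandwidth capacity and a bytes-per-IO cost derived from it. The logged figures are reproduced exactly by the stock rotational defaults, assuming 150 MiB/s sequential bandwidth (osd_mclock_max_sequential_bandwidth_hdd) and 315 IOPS (osd_mclock_max_capacity_iops_hdd), with the bandwidth landing on a single shard; treat the shard count of 1 as inferred from the numbers rather than read from this cluster's config:

    # Back-of-the-envelope check of the mClockScheduler line above, using the
    # assumed rotational defaults: 150 MiB/s sequential bandwidth, 315 IOPS.
    bandwidth = 150 * 1024 * 1024       # 157286400 bytes/s
    iops = 315.0
    shards = 1                          # inferred: per-shard figure == full bandwidth

    cost_per_io = bandwidth / iops              # 499321.90 bytes/io
    capacity_per_shard = bandwidth / shards     # 157286400.00 bytes/second
    print(f"{cost_per_io:.2f} bytes/io, {capacity_per_shard:.2f} bytes/second")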
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86ae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
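The _set_cache_sizes line above describes how the OSD's 1 GiB cache budget is carved up: 45% for BlueStore metadata, 45% for the RocksDB key-value cache, and 6% for object data, with the remainder left as headroom. The kv share is exactly the block-cache capacity that reappears later in the RocksDB options dump. A quick check, assuming truncating integer conversion:

    # The 1 GiB BlueStore cache split from the _set_cache_sizes line above.
    cache_size = 1_073_741_824          # 1 GiB

    meta = int(cache_size * 0.45)       # 483183820 bytes for onode metadata
    kv = int(cache_size * 0.45)         # 483183820 bytes -- reappears verbatim as
                                        # "block_cache_options: capacity" below
    data = int(cache_size * 0.06)       # 64424509 bytes for object data
    print(meta, kv, data)               # the ~4% remainder stays unassigned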
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluefs mount
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluefs mount shared_bdev_used = 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
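Three of the numbers above decode into each other: the bluefs capacity 0x1bfc00000 is the same 7511998464-byte (7.0 GiB) device opened earlier, _init_alloc carves it into 0x10000-byte allocation units, and the db/db.slow budget of 7136398540 bytes works out to 95% of the capacity. The 95% ratio is inferred from the figures in this log, not quoted from Ceph source:

    # Decoding the bluefs and _prepare_db_environment figures above.
    capacity = 0x1bfc00000              # 7511998464 bytes: the 7.0 GiB device
    au = 0x10000                        # 65536-byte bluefs allocation unit

    print(capacity)                     # 7511998464
    print(capacity // au)               # 114624 allocation units
    print(int(capacity * 0.95))         # 7136398540 -> the db/db.slow byte budget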
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: RocksDB version: 7.9.2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Git sha 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: DB SUMMARY
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: DB Session ID:  3OEBIA309I35S1BXYXNE
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: CURRENT file:  CURRENT
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                         Options.error_if_exists: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.create_if_missing: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                                     Options.env: 0x56455dafecb0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                                Options.info_log: 0x56455e800b80
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                              Options.statistics: (nil)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.use_fsync: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                              Options.db_log_dir: 
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.write_buffer_manager: 0x56455d854140
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.unordered_write: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.row_cache: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                              Options.wal_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.two_write_queues: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.wal_compression: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.atomic_flush: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.max_background_jobs: 4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.max_background_compactions: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.max_subcompactions: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.max_open_files: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Compression algorithms supported:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kZSTD supported: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kXpressCompression supported: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kBZip2Compression supported: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kLZ4Compression supported: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kZlibCompression supported: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kSnappyCompression supported: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e800d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d842850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: 
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
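The OSD prints a near-identical wall of options for every RocksDB column family ([default] above, [m-0] and [m-1] below), so the interesting content is the handful of per-family differences: [default] carries the .T:int64_array.b:bitwise_xor merge operator and an empty table_properties_collectors, while the m-* families have no merge operator but add a CompactOnDeletionCollector. A small parser/differ for this journal layout, as a sketch; it only reads top-level Options.* lines (the indented table_factory sub-options and DB-wide options before the first family header are skipped), and the filename is illustrative:

    import re
    from collections import defaultdict

    CF_RE = re.compile(r"Options for column family \[(.+?)\]")
    OPT_RE = re.compile(r"rocksdb:\s+Options\.([^:\s]+)\s*:\s*(.*)$")

    def options_by_cf(lines):
        """Group top-level Options.* pairs under the most recent CF header."""
        cfs, current = defaultdict(dict), None
        for line in lines:
            header = CF_RE.search(line)
            if header:
                current = header.group(1)
                continue
            m = OPT_RE.search(line)
            if m and current is not None:   # DB-wide options before any header are skipped
                cfs[current][m.group(1)] = m.group(2).strip()
        return cfs

    def diff_cfs(cfs, a, b):
        """Return only the keys whose values differ between families a and b."""
        keys = set(cfs[a]) | set(cfs[b])
        return {k: (cfs[a].get(k), cfs[b].get(k))
                for k in sorted(keys) if cfs[a].get(k) != cfs[b].get(k)}

    # with open("osd-host.log") as f:
    #     cfs = options_by_cf(f)
    #     print(diff_cfs(cfs, "default", "m-0"))
    #     # -> merge_operator: ('.T:int64_array.b:bitwise_xor', 'None'),
    #     #    table_properties_collectors: ('', 'CompactOnDeletionCollector ...')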
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e800d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d842850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e800d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d842850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e800d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d842850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e800d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d842850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e800d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d842850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e800d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d842850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
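The table_factory block printed above maps field-for-field onto RocksDB's C++ BlockBasedTableOptions. A minimal sketch of the equivalent setup follows; BinnedLRUCache is Ceph's own sharded LRU, so the stock rocksdb::NewLRUCache is substituted here as an assumption, with the capacity (483183820 B, about 460.8 MiB) and num_shard_bits (4, i.e. 2^4 = 16 shards) taken from the dump. Note that the O-0/O-1/O-2 dumps further down share a single 512 MiB cache (the same block_cache pointer, 0x56455d8422d0, appears in all three).

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::Options table_options_sketch() {
      rocksdb::BlockBasedTableOptions bbt;
      bbt.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
      bbt.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
      bbt.block_size = 4096;                      // block_size: 4096
      bbt.metadata_block_size = 4096;
      bbt.format_version = 5;
      bbt.whole_key_filtering = true;             // whole_key_filtering: 1
      // The log only says "filter_policy: bloomfilter"; 10 bits/key is an
      // assumed placeholder, not a value from the dump.
      bbt.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // Stock sharded LRU standing in for Ceph's BinnedLRUCache:
      // 483183820 B split across 2^4 = 16 shards (~28.8 MiB each).
      bbt.block_cache = rocksdb::NewLRUCache(483183820,
                                             /*num_shard_bits=*/4,
                                             /*strict_capacity_limit=*/false,
                                             /*high_pri_pool_ratio=*/0.0);
      rocksdb::Options opts;
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(bbt));
      return opts;
    }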
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e800f60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d8422d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
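The write-buffer settings repeated in each column-family dump bound the memtable footprint: at most max_write_buffer_number × write_buffer_size = 64 × 16 MiB = 1 GiB of memtables per column family, and with min_write_buffer_number_to_merge = 6 a flush only fires once six 16 MiB immutable buffers (~96 MiB) can be merged into one L0 file. A minimal sketch of the same three fields in the C++ API, values copied from the dump:

    #include <rocksdb/options.h>

    rocksdb::Options memtable_budget_sketch() {
      rocksdb::Options opts;
      opts.write_buffer_size = 16 * 1024 * 1024;   // 16777216: one memtable
      opts.max_write_buffer_number = 64;           // worst case 64 x 16 MiB = 1 GiB
      opts.min_write_buffer_number_to_merge = 6;   // merge ~96 MiB per flush
      return opts;
    }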
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e800f60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d8422d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
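With level_compaction_dynamic_level_bytes off, the per-level targets grow geometrically from max_bytes_for_level_base by max_bytes_for_level_multiplier (the addtl multipliers are all 1): L1 ≈ 1 GiB, L2 ≈ 8 GiB, L3 ≈ 64 GiB, and so on, while L0 throttling kicks in at 8 files (compaction trigger), 20 (slowdown) and 36 (stop), and pending-compaction back-pressure at 64 GiB soft / 256 GiB hard. A small standalone sketch reproducing the level targets from the values in the dump:

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Values from the dump above.
      const uint64_t max_bytes_for_level_base = 1073741824;  // 1 GiB
      const double   multiplier = 8.0;   // max_bytes_for_level_multiplier
      const int      num_levels = 7;

      // Dynamic level bytes disabled: target(Ln) = base * multiplier^(n-1).
      double target = static_cast<double>(max_bytes_for_level_base);
      for (int level = 1; level < num_levels; ++level) {
        std::printf("L%d target: %.0f GiB\n",
                    level, target / (1024.0 * 1024.0 * 1024.0));
        target *= multiplier;
      }
      return 0;
    }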
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e800f60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d8422d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
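Every column-family dump above shows the same compression policy: LZ4 on all levels, with the bottommost-level override disabled, so the last level also stays on LZ4 rather than switching to a heavier codec. In the C++ API this corresponds to:

    #include <rocksdb/options.h>

    rocksdb::Options compression_sketch() {
      rocksdb::Options opts;
      opts.compression = rocksdb::kLZ4Compression;  // Options.compression: LZ4
      // Prints as "Disabled" in the dump; no separate bottommost codec.
      opts.bottommost_compression = rocksdb::kDisableCompressionOption;
      return opts;
    }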
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
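The twelve recovered column families follow BlueStore's sharded RocksDB layout (Ceph's bluestore_rocksdb_cfs sharding spec): m-*, p-*, and O-* are three-way shards, while L and P are unsharded. A minimal sketch that groups the names exactly as logged above; the grouping code itself is only illustration:

    from collections import defaultdict

    # Column-family names exactly as recovered from MANIFEST-000032 above.
    cfs = ["default", "m-0", "m-1", "m-2", "p-0", "p-1", "p-2",
           "O-0", "O-1", "O-2", "L", "P"]

    shards = defaultdict(list)
    for name in cfs:
        shards[name.split("-")[0]].append(name)

    for prefix, members in shards.items():
        print(prefix, members)
    # default ['default'] / m ['m-0','m-1','m-2'] / p [...] / O [...] / L ['L'] / P ['P']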
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d8b9e70e-32ff-4871-aa5d-f8d801d348bf
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921130401221, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921130401524, "job": 1, "event": "recovery_finished"}
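RocksDB's EVENT_LOG_v1 lines embed a JSON payload after the marker, so recovery timing can be pulled out mechanically. A minimal sketch using the two event lines above as input (splitting on the marker is an assumption about the line format, which holds for these lines):

    import json

    lines = [
        'rocksdb: EVENT_LOG_v1 {"time_micros": 1764921130401221, "job": 1, '
        '"event": "recovery_started", "wal_files": [31]}',
        'rocksdb: EVENT_LOG_v1 {"time_micros": 1764921130401524, "job": 1, '
        '"event": "recovery_finished"}',
    ]
    events = [json.loads(l.split("EVENT_LOG_v1 ", 1)[1]) for l in lines]
    delta_us = events[1]["time_micros"] - events[0]["time_micros"]
    print(f"replay of WAL #31 took {delta_us} us")  # 303 us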
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
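The option string BlueStore hands to RocksDB here (Ceph's bluestore_rocksdb_options) is a flat comma-separated key=value list, and the same values reappear in the Options dumps below (write_buffer_size 16777216, max_write_buffer_number 64, and so on). A minimal sketch parsing the string from the line above into a dict:

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    print(parsed["write_buffer_size"])          # '16777216' -> 16 MiB memtables
    print(parsed["compaction_readahead_size"])  # '2MB' (units kept as written)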
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old nid_max 1025
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old blobid_max 10240
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta min_alloc_size 0x1000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: freelist init
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: freelist _read_cfg
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
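The allocator line is internally consistent and its figures can be re-derived by hand: capacity 0x1bfc00000 is the same 7511998464-byte device reopened below, and only 0x3000 bytes are allocated at this point. A quick check:

    capacity = 0x1bfc00000      # 7_511_998_464 bytes ~ 7.0 GiB
    free     = 0x1bfbfd000
    block    = 0x1000           # 4 KiB, matching min_alloc_size above

    used = capacity - free
    print(capacity, capacity / 2**30)   # 7511998464, ~6.996 GiB
    print(used, used // block)          # 12288 bytes in use, i.e. 3 blocks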
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluefs umount
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) close
Dec 05 07:52:10 np0005546420.localdomain podman[33052]: 2025-12-05 07:52:10.47306298 +0000 UTC m=+0.087337599 container create 19c32d352e9a7bbe9aeeb12f19e511fa46c2519cef837e49e805c5ba48eede49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_babbage, RELEASE=main, ceph=True, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Dec 05 07:52:10 np0005546420.localdomain podman[33052]: 2025-12-05 07:52:10.421818865 +0000 UTC m=+0.036093504 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:10 np0005546420.localdomain systemd[1]: Started libpod-conmon-19c32d352e9a7bbe9aeeb12f19e511fa46c2519cef837e49e805c5ba48eede49.scope.
Dec 05 07:52:10 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:52:10 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c326e0d51e3f60b8bc2e44b0903473472bd2763c5f5521fc345acd5d47daf507/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:10 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c326e0d51e3f60b8bc2e44b0903473472bd2763c5f5521fc345acd5d47daf507/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:10 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c326e0d51e3f60b8bc2e44b0903473472bd2763c5f5521fc345acd5d47daf507/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:10 np0005546420.localdomain podman[33052]: 2025-12-05 07:52:10.591061114 +0000 UTC m=+0.205335723 container init 19c32d352e9a7bbe9aeeb12f19e511fa46c2519cef837e49e805c5ba48eede49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_babbage, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.component=rhceph-container, release=1763362218, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 07:52:10 np0005546420.localdomain podman[33052]: 2025-12-05 07:52:10.62080705 +0000 UTC m=+0.235081659 container start 19c32d352e9a7bbe9aeeb12f19e511fa46c2519cef837e49e805c5ba48eede49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_babbage, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, version=7, release=1763362218, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 07:52:10 np0005546420.localdomain podman[33052]: 2025-12-05 07:52:10.621053807 +0000 UTC m=+0.235328426 container attach 19c32d352e9a7bbe9aeeb12f19e511fa46c2519cef837e49e805c5ba48eede49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_babbage, io.openshift.expose-services=, release=1763362218, ceph=True, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, architecture=x86_64, RELEASE=main, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7)
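Each podman event above carries both a wall-clock timestamp and a monotonic offset (the m=+... field) measured from podman process start, so the container's startup latency can be read straight off the log. A small sketch with the offsets copied from the create/init/start lines:

    # Monotonic offsets (seconds) from the podman lines above.
    create, init, start = 0.087337599, 0.205335723, 0.235081659

    print(f"create -> init:  {init - create:.3f}s")   # ~0.118s (runtime setup)
    print(f"init   -> start: {start - init:.3f}s")    # ~0.030s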
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bdev(0x56455d86b180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
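The (22) Invalid argument from F_SET_FILE_RW_HINT is benign: the kernel rejects write-lifetime hints on devices that do not support them, and BlueStore simply carries on. A sketch reproducing the probe; the constants follow linux/fcntl.h (F_SET_FILE_RW_HINT = 1024 + 14), and the hint value chosen here is a representative example, not necessarily what ceph-osd sets:

    import errno, fcntl, os, struct

    F_SET_FILE_RW_HINT = 1024 + 14     # F_LINUX_SPECIFIC_BASE + 14
    RWH_WRITE_LIFE_MEDIUM = 3          # illustrative hint value

    # Path from the log; opening it needs root and an idle OSD.
    fd = os.open("/var/lib/ceph/osd/ceph-4/block", os.O_RDWR)
    try:
        fcntl.fcntl(fd, F_SET_FILE_RW_HINT,
                    struct.pack("Q", RWH_WRITE_LIFE_MEDIUM))
    except OSError as e:
        if e.errno == errno.EINVAL:    # the (22) seen in the log line above
            print("write-lifetime hints not supported on this device")
    finally:
        os.close(fd)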
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluefs mount
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluefs mount shared_bdev_used = 4718592
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
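The db and db.slow paths are both capped at 7136398540 bytes, which works out to 95% of the 7511998464-byte shared device, apparently a headroom reserve when RocksDB shares the main block device (the 95% reading is an inference from the numbers, not something the log states):

    device  = 7_511_998_464     # bdev open size above
    db_path = 7_136_398_540     # quota from _prepare_db_environment

    print(db_path / device)     # 0.9499999998... i.e. a 95% cap
    print(int(device * 0.95))   # 7136398540 -- reproduces the quota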
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: RocksDB version: 7.9.2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Git sha 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: DB SUMMARY
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: DB Session ID:  3OEBIA309I35S1BXYXNF
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: CURRENT file:  CURRENT
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                         Options.error_if_exists: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.create_if_missing: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                                     Options.env: 0x56455d8ae620
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                                Options.info_log: 0x56455d90a880
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                              Options.statistics: (nil)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.use_fsync: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                              Options.db_log_dir: 
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.write_buffer_manager: 0x56455d8555e0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.unordered_write: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.row_cache: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                              Options.wal_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.two_write_queues: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.wal_compression: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.atomic_flush: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.max_background_jobs: 4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.max_background_compactions: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.max_subcompactions: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.max_open_files: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Compression algorithms supported:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kZSTD supported: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kXpressCompression supported: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kBZip2Compression supported: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kLZ4Compression supported: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kZlibCompression supported: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         kSnappyCompression supported: 1
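Only LZ4, LZ4HC, Zlib, and Snappy are compiled into this RocksDB build; ZSTD is not, which is consistent with the compression=kLZ4Compression choice in the mount options earlier. A sketch reducing the list above to the usable algorithms:

    report = """\
    kZSTD supported: 0
    kXpressCompression supported: 0
    kBZip2Compression supported: 0
    kZSTDNotFinalCompression supported: 0
    kLZ4Compression supported: 1
    kZlibCompression supported: 1
    kLZ4HCCompression supported: 1
    kSnappyCompression supported: 1"""

    usable = [line.split()[0] for line in report.splitlines()
              if line.strip().endswith("1")]
    print(usable)
    # ['kLZ4Compression', 'kZlibCompression', 'kLZ4HCCompression', 'kSnappyCompression']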
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e801120)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d8422d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
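Every column family's table factory points at one shared BinnedLRUCache (the same 0x56455d8422d0 pointer recurs in each block) with capacity 483183820 bytes and num_shard_bits 4. That capacity is almost exactly 0.45 x 1 GiB, which looks like a 45% slice of a 1 GiB BlueStore cache; this is an inference from the arithmetic, not something the log states:

    capacity = 483_183_820
    print(capacity / 2**20)               # ~460.8 MiB total
    print(capacity / 2**30)               # 0.44999999981 -> 45% of 1 GiB
    print((capacity // (1 << 4)) / 2**20) # 16 shards, ~28.8 MiB each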
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: 
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
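Several of the recurring values in these dumps decode to rounder units than the raw integers suggest; a quick conversion of the ones above:

    print(2_592_000 // 86_400)      # Options.ttl              -> 30 days
    print(268_435_456 // 2**20)     # blob_file_size           -> 256 MiB
    print(67_108_864 // 2**20)      # target_file_size_base    -> 64 MiB
    print(1_073_741_824 // 2**30)   # max_bytes_for_level_base -> 1 GiB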
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e801120)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d8422d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
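Unlike [default], the m-0 family registers a CompactOnDeletionCollector: with a sliding window of 32768 keys and a deletion trigger of 16384, an SST file is scheduled for compaction once half of some window is tombstones, which keeps delete-heavy metadata from lingering. The ratio from the numbers in that line:

    window, trigger = 32_768, 16_384
    print(trigger / window)   # 0.5 -> compact when >=50% of a window is deletes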
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e801120)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d8422d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
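[annotation] The indented "table_factory options" sub-block inside each dump describes the BlockBasedTable settings, and every column family points at the same block_cache address (0x56455d8422d0), i.e. one shared cache. A hedged sketch of that sub-block using stock RocksDB follows; note that "BinnedLRUCache" is Ceph's own cache implementation, so NewLRUCache below is only the closest upstream stand-in, used to mirror the logged capacity and num_shard_bits, and the bloom bits-per-key is an assumption (the log names the policy but not its parameter):

#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>
#include <memory>

rocksdb::Options make_table_options(std::shared_ptr<rocksdb::Cache> cache) {
  rocksdb::BlockBasedTableOptions t;
  t.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
  t.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
  t.block_size = 4096;                      // block_size: 4096
  t.block_restart_interval = 16;            // block_restart_interval: 16
  t.format_version = 5;                     // format_version: 5
  t.whole_key_filtering = true;             // whole_key_filtering: 1
  t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10)); // "bloomfilter"; 10 bits/key assumed
  t.block_cache = cache;                    // shared by all column families

  rocksdb::Options opts;
  opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
  return opts;
}

// Stand-in for the shared cache the log reports:
// capacity 483183820 bytes (~461 MiB), num_shard_bits 4.
// auto cache = rocksdb::NewLRUCache(483183820, 4);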
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e801120)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d8422d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
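[annotation] The "[db/column_family.cc:578] Failed to register data paths of column family" line precedes every options dump here, and the open visibly proceeds through each subsequent column family, so in this excerpt the message is not fatal. Both the message and the dump are emitted while DB::Open constructs each column family. A hedged sketch of such an open (the CF names m-1 through p-2 are taken from the log; id 2 for m-1 implies a default CF and an m-0 before this excerpt, so m-0 is assumed; the function itself is illustrative, not Ceph's code):

#include <rocksdb/db.h>
#include <string>
#include <vector>

int open_sharded(const std::string& path, const rocksdb::Options& opts) {
  const std::vector<std::string> names = {
      rocksdb::kDefaultColumnFamilyName,  // id 0
      "m-0", "m-1", "m-2",                // ids 1-3 (m-0 assumed from the id sequence)
      "p-0", "p-1", "p-2"};               // ids 4-6
  std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
  for (const auto& name : names) {
    cfs.emplace_back(name, rocksdb::ColumnFamilyOptions(opts));
  }
  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(rocksdb::DBOptions(opts), path, cfs, &handles, &db);
  // Each column family constructed during this call logs its full options
  // dump, which is why the identical block repeats once per CF in the journal.
  for (auto* h : handles) delete h;
  delete db;
  return s.ok() ? 0 : 1;
}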
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e801120)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d8422d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
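[annotation] The less obvious numbers in these dumps are internally consistent. With level_compaction_dynamic_level_bytes: 0, level targets grow geometrically from max_bytes_for_level_base by the multiplier (1 GiB, 8 GiB, 64 GiB, ...), and max_compaction_bytes: 1677721600 is exactly RocksDB's default of 25 x target_file_size_base. A small worked check, using only values copied from the log:

#include <cassert>
#include <cstdint>

int main() {
  const uint64_t level_base = 1073741824ULL;  // max_bytes_for_level_base (1 GiB)
  const double   mult       = 8.0;            // max_bytes_for_level_multiplier
  // L1 = 1 GiB, L2 = 8 GiB, L3 = 64 GiB under static level sizing.
  assert(static_cast<uint64_t>(level_base * mult)        ==  8ULL << 30);
  assert(static_cast<uint64_t>(level_base * mult * mult) == 64ULL << 30);
  // max_compaction_bytes = 25 * target_file_size_base (RocksDB default).
  assert(25ULL * 67108864ULL == 1677721600ULL);
  return 0;
}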
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e801120)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d8422d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455e801120)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d8422d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455d90aa00)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d843610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455d90aa00)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d843610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:           Options.merge_operator: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56455d90aa00)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56455d843610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.compression: LZ4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.num_levels: 7
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.bloom_locality: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                               Options.ttl: 2592000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                       Options.enable_blob_files: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                           Options.min_blob_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
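
The per-column-family dump above is RocksDB echoing its effective options at open time. A minimal sketch of reproducing the key compaction values seen here through the stock RocksDB C++ API (the function name is illustrative; this is not Ceph's actual setup code, which builds its options from the bluestore option string logged further below):

```cpp
#include <rocksdb/options.h>

// Mirror the values logged above: level-style compaction with
// kMinOverlappingRatio priority, L0 triggers 8/20/36, 64 MiB target
// files, 1 GiB level base, 8x level multiplier, 30-day TTL.
rocksdb::Options make_options_like_logged() {
  rocksdb::Options opts;
  opts.compaction_style = rocksdb::kCompactionStyleLevel;
  opts.compaction_pri = rocksdb::kMinOverlappingRatio;
  opts.level0_file_num_compaction_trigger = 8;
  opts.level0_slowdown_writes_trigger = 20;
  opts.level0_stop_writes_trigger = 36;
  opts.target_file_size_base = 64ull << 20;    // 67108864
  opts.max_bytes_for_level_base = 1ull << 30;  // 1073741824
  opts.max_bytes_for_level_multiplier = 8.0;
  opts.max_compaction_bytes = 1677721600;      // 25 x target_file_size_base
  opts.ttl = 2592000;                          // 30 days, the default
  return opts;
}
```
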
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d8b9e70e-32ff-4871-aa5d-f8d801d348bf
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921130672014, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921130699137, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921130, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d8b9e70e-32ff-4871-aa5d-f8d801d348bf", "db_session_id": "3OEBIA309I35S1BXYXNF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921130703264, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921130, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d8b9e70e-32ff-4871-aa5d-f8d801d348bf", "db_session_id": "3OEBIA309I35S1BXYXNF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921130725069, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921130, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d8b9e70e-32ff-4871-aa5d-f8d801d348bf", "db_session_id": "3OEBIA309I35S1BXYXNF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921130750318, "job": 1, "event": "recovery_finished"}
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56455d904700
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: DB pointer 0x56455e75da00
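
The sequence above (manifest replay in version_set.cc, per-CF log numbers, WAL #31 replayed into new L0 files 35-37, then "Creating manifest 40") is the shape of a multi-column-family RocksDB open. A self-contained sketch under the assumption of a plain on-disk database at a hypothetical path; BlueStore actually routes this through BlueRocksEnv on top of BlueFS rather than a POSIX directory:

```cpp
#include <rocksdb/db.h>
#include <rocksdb/options.h>

#include <iostream>
#include <string>
#include <vector>

int main() {
  const std::string path = "/tmp/example-db";  // hypothetical path
  rocksdb::DBOptions db_opts;

  // Discover the column families recorded in the MANIFEST
  // (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P in the log above).
  std::vector<std::string> cf_names;
  rocksdb::Status s = rocksdb::DB::ListColumnFamilies(db_opts, path, &cf_names);
  if (!s.ok()) { std::cerr << s.ToString() << "\n"; return 1; }

  std::vector<rocksdb::ColumnFamilyDescriptor> cf_descs;
  for (const auto& name : cf_names) {
    cf_descs.emplace_back(name, rocksdb::ColumnFamilyOptions());
  }

  // Open() replays any live WAL into fresh L0 SSTs (the
  // "recovery_started" .. "recovery_finished" events above) and
  // then writes a new MANIFEST ("Creating manifest 40").
  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  s = rocksdb::DB::Open(db_opts, path, cf_descs, &handles, &db);
  if (!s.ok()) { std::cerr << s.ToString() << "\n"; return 1; }

  for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
  delete db;
  return 0;
}
```
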
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super from 4, latest 4
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super done
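
The `_open_db` line above shows the option string BlueStore hands to RocksDB (Ceph's `bluestore_rocksdb_options` setting). A hedged sketch of turning such a string into an Options object with RocksDB's own parser; the helper name is made up, and it assumes `rocksdb::GetOptionsFromString` from rocksdb/convenience.h, which expects semicolon-separated pairs, so the comma-separated Ceph form is rewritten first (Ceph's real code does its own splitting):

```cpp
#include <rocksdb/convenience.h>
#include <rocksdb/options.h>

#include <algorithm>
#include <string>

// Parse a comma-separated key=value option string, a subset of the
// one logged above, into rocksdb::Options.
rocksdb::Status parse_bluestore_style_options(std::string opts_str,
                                              rocksdb::Options* out) {
  // RocksDB's option-string parser uses ';' as the separator.
  std::replace(opts_str.begin(), opts_str.end(), ',', ';');
  rocksdb::Options base;
  return rocksdb::GetOptionsFromString(base, opts_str, out);
}

// e.g. parse_bluestore_style_options(
//          "compression=kLZ4Compression,write_buffer_size=16777216,"
//          "max_write_buffer_number=64", &opts);
```
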
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
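
The "DUMPING STATS" blocks here and below are RocksDB's periodic statistics report (scheduled via DBOptions::stats_dump_period_sec; in this log the first dump fires right after open, hence the 0.1 s uptime and all-zero counters). The same text is available on demand through the property API, as in this small sketch reusing a `db` handle opened as in the earlier example:

```cpp
#include <rocksdb/db.h>

#include <iostream>
#include <string>

// Print the same report RocksDB writes to the journal above.
// Per-column-family blocks can be fetched with
// db->GetProperty(handle, "rocksdb.cfstats", &stats).
void print_stats(rocksdb::DB* db) {
  std::string stats;
  if (db->GetProperty("rocksdb.stats", &stats)) {
    std::cout << stats << std::endl;
  }
}
```
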
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
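The indented block ending here is one multi-line journal entry: BlueStore's embedded RocksDB periodically dumps per-column-family compaction and block-cache statistics, and journalctl aligns the continuation lines under the message field. The "portion" figures in the block cache entry stats appear to be each entry's size as a percentage of cache capacity; a minimal Python check of one figure from the dump above, under that assumption:

    # Check FilterBlock(1, 0.11 KB, 2.08616e-05%) against the 512.00 MB cache
    # (assumption: portion = entry size / cache capacity, as a percentage).
    capacity_bytes = 512 * 1024 * 1024
    portion = 2.08616e-05 / 100.0              # percent -> fraction
    size_bytes = capacity_bytes * portion
    print(f"{size_bytes:.0f} B = {size_bytes / 1024:.2f} KB")   # 112 B = 0.11 KB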
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: _get_class not permitted to load lua
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: _get_class not permitted to load sdk
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: _get_class not permitted to load test_remote_reads
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: osd.4 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for osds
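The crush-map "features" values in the three lines above are 64-bit feature bitmasks. A quick way to see which bits are set (the per-bit feature names are defined in Ceph's feature tables and are not asserted here):

    features = 288232575208783872
    print(hex(features))                                  # 0x400020002040000
    print([b for b in range(64) if features >> b & 1])    # [18, 25, 41, 58]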
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: osd.4 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: osd.4 0 load_pgs
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: osd.4 0 load_pgs opened 0 pgs
Dec 05 07:52:10 np0005546420.localdomain ceph-osd[32907]: osd.4 0 log_to_monitors true
Dec 05 07:52:10 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4[32903]: 2025-12-05T07:52:10.810+0000 7f79161a1a80 -1 osd.4 0 log_to_monitors true
Dec 05 07:52:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1986b5f708875e84f29f83d8b7f344fdf5c45547c120d1871b6c976a4185d0f9-merged.mount: Deactivated successfully.
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]: {
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:     "5de3622b-c6b4-45a6-8ef6-d7ebe58a162b": {
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:         "ceph_fsid": "79feddb1-4bfc-557f-83b9-0d57c9f66c1b",
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:         "osd_id": 1,
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:         "osd_uuid": "5de3622b-c6b4-45a6-8ef6-d7ebe58a162b",
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:         "type": "bluestore"
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:     },
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:     "d4943a67-6268-48a0-b84a-a9a49f3de9c5": {
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:         "ceph_fsid": "79feddb1-4bfc-557f-83b9-0d57c9f66c1b",
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:         "osd_id": 4,
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:         "osd_uuid": "d4943a67-6268-48a0-b84a-a9a49f3de9c5",
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:         "type": "bluestore"
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]:     }
Dec 05 07:52:11 np0005546420.localdomain adoring_babbage[33254]: }
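The JSON object above, printed by the transient container (adoring_babbage), is keyed by OSD UUID and maps each OSD on this host to its backing LV. A small sketch that extracts an osd_id -> device table from such a listing (one entry reproduced; field names exactly as logged):

    import json

    listing_json = '''{
        "5de3622b-c6b4-45a6-8ef6-d7ebe58a162b": {
            "ceph_fsid": "79feddb1-4bfc-557f-83b9-0d57c9f66c1b",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 1,
            "osd_uuid": "5de3622b-c6b4-45a6-8ef6-d7ebe58a162b",
            "type": "bluestore"
        }
    }'''
    for uuid, osd in json.loads(listing_json).items():
        print(f"osd.{osd['osd_id']} -> {osd['device']} ({osd['type']})")
    # osd.1 -> /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)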
Dec 05 07:52:11 np0005546420.localdomain systemd[1]: libpod-19c32d352e9a7bbe9aeeb12f19e511fa46c2519cef837e49e805c5ba48eede49.scope: Deactivated successfully.
Dec 05 07:52:11 np0005546420.localdomain podman[33052]: 2025-12-05 07:52:11.171537443 +0000 UTC m=+0.785812082 container died 19c32d352e9a7bbe9aeeb12f19e511fa46c2519cef837e49e805c5ba48eede49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_babbage, version=7, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, release=1763362218, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4)
Dec 05 07:52:11 np0005546420.localdomain systemd[1]: tmp-crun.AVGcaq.mount: Deactivated successfully.
Dec 05 07:52:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c326e0d51e3f60b8bc2e44b0903473472bd2763c5f5521fc345acd5d47daf507-merged.mount: Deactivated successfully.
Dec 05 07:52:11 np0005546420.localdomain podman[33505]: 2025-12-05 07:52:11.267378396 +0000 UTC m=+0.084565278 container remove 19c32d352e9a7bbe9aeeb12f19e511fa46c2519cef837e49e805c5ba48eede49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_babbage, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 07:52:11 np0005546420.localdomain systemd[1]: libpod-conmon-19c32d352e9a7bbe9aeeb12f19e511fa46c2519cef837e49e805c5ba48eede49.scope: Deactivated successfully.
Dec 05 07:52:11 np0005546420.localdomain sudo[32935]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[31961]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.377 iops: 5728.521 elapsed_sec: 0.524
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [WRN] : OSD bench result of 5728.520881 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
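The warning above encodes a simple acceptance test: the bench result is adopted as the mClock IOPS capacity only if it falls inside the configured threshold range; otherwise the current value stands. A sketch of that check with the numbers from the log line (the final override value is hypothetical):

    measured_iops = 5728.520881
    lo, hi = 50.0, 500.0              # threshold range from the warning
    capacity = 315.0                  # current IOPS capacity for osd.1
    if lo <= measured_iops <= hi:
        capacity = measured_iops
    print(capacity)                   # 315.0 -- unchanged, as the warning says
    # Per the recommendation, benchmark externally (e.g. fio), then override:
    #   ceph config set osd.1 osd_mclock_max_capacity_iops_ssd 5000   # value hypothetical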
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[31961]: osd.1 0 waiting for initial osdmap
Dec 05 07:52:11 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1[31957]: 2025-12-05T07:52:11.911+0000 7f3328d37640 -1 osd.1 0 waiting for initial osdmap
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[31961]: osd.1 14 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[31961]: osd.1 14 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[31961]: osd.1 14 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[31961]: osd.1 14 check_osdmap_features require_osd_release unknown -> reef
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[31961]: osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[31961]: osd.1 14 set_numa_affinity not setting numa affinity
Dec 05 07:52:11 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-1[31957]: 2025-12-05T07:52:11.930+0000 7f3324361640 -1 osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 07:52:11 np0005546420.localdomain ceph-osd[31961]: osd.1 14 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec 05 07:52:12 np0005546420.localdomain sudo[33521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:52:12 np0005546420.localdomain sudo[33521]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:12 np0005546420.localdomain sudo[33521]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:12 np0005546420.localdomain sudo[33536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:52:12 np0005546420.localdomain sudo[33536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:12 np0005546420.localdomain sudo[33536]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:12 np0005546420.localdomain sudo[33551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 07:52:12 np0005546420.localdomain sudo[33551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:12 np0005546420.localdomain ceph-osd[32907]: osd.4 0 done with init, starting boot process
Dec 05 07:52:12 np0005546420.localdomain ceph-osd[32907]: osd.4 0 start_boot
Dec 05 07:52:12 np0005546420.localdomain ceph-osd[32907]: osd.4 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 05 07:52:12 np0005546420.localdomain ceph-osd[32907]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 05 07:52:12 np0005546420.localdomain ceph-osd[32907]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 05 07:52:12 np0005546420.localdomain ceph-osd[32907]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 05 07:52:12 np0005546420.localdomain ceph-osd[32907]: osd.4 0  bench count 12288000 bsize 4 KiB
Dec 05 07:52:12 np0005546420.localdomain ceph-osd[31961]: osd.1 15 state: booting -> active
Dec 05 07:52:13 np0005546420.localdomain podman[33635]: 2025-12-05 07:52:13.67659236 +0000 UTC m=+0.114898354 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, release=1763362218, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 07:52:13 np0005546420.localdomain podman[33635]: 2025-12-05 07:52:13.806441939 +0000 UTC m=+0.244747903 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, release=1763362218, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 07:52:14 np0005546420.localdomain sudo[33551]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:15 np0005546420.localdomain sudo[33700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:52:15 np0005546420.localdomain sudo[33700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:15 np0005546420.localdomain sudo[33700]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:15 np0005546420.localdomain sudo[33715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 07:52:15 np0005546420.localdomain sudo[33715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:15 np0005546420.localdomain ceph-osd[31961]: osd.1 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 05 07:52:15 np0005546420.localdomain ceph-osd[31961]: osd.1 17 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 05 07:52:15 np0005546420.localdomain ceph-osd[31961]: osd.1 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 05 07:52:15 np0005546420.localdomain sudo[33715]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:15 np0005546420.localdomain sudo[33762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:52:15 np0005546420.localdomain sudo[33762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:15 np0005546420.localdomain sudo[33762]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:15 np0005546420.localdomain sudo[33777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- inventory --format=json-pretty --filter-for-batch
Dec 05 07:52:15 np0005546420.localdomain sudo[33777]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:16 np0005546420.localdomain podman[33829]: 
Dec 05 07:52:16 np0005546420.localdomain podman[33829]: 2025-12-05 07:52:16.49934491 +0000 UTC m=+0.088133165 container create 39b84c884cbd55ceb2379e9586441c17d327ad0d21ef3436cb510ef4a09ff97d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_babbage, vcs-type=git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 05 07:52:16 np0005546420.localdomain systemd[1]: Started libpod-conmon-39b84c884cbd55ceb2379e9586441c17d327ad0d21ef3436cb510ef4a09ff97d.scope.
Dec 05 07:52:16 np0005546420.localdomain podman[33829]: 2025-12-05 07:52:16.453841761 +0000 UTC m=+0.042630036 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:16 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:52:16 np0005546420.localdomain podman[33829]: 2025-12-05 07:52:16.575772823 +0000 UTC m=+0.164561048 container init 39b84c884cbd55ceb2379e9586441c17d327ad0d21ef3436cb510ef4a09ff97d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_babbage, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_CLEAN=True, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, ceph=True, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=1763362218)
Dec 05 07:52:16 np0005546420.localdomain systemd[1]: tmp-crun.lQhRjF.mount: Deactivated successfully.
Dec 05 07:52:16 np0005546420.localdomain zealous_babbage[33845]: 167 167
Dec 05 07:52:16 np0005546420.localdomain systemd[1]: libpod-39b84c884cbd55ceb2379e9586441c17d327ad0d21ef3436cb510ef4a09ff97d.scope: Deactivated successfully.
Dec 05 07:52:16 np0005546420.localdomain podman[33829]: 2025-12-05 07:52:16.600156425 +0000 UTC m=+0.188944670 container start 39b84c884cbd55ceb2379e9586441c17d327ad0d21ef3436cb510ef4a09ff97d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_babbage, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, name=rhceph, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 07:52:16 np0005546420.localdomain podman[33829]: 2025-12-05 07:52:16.600538868 +0000 UTC m=+0.189327093 container attach 39b84c884cbd55ceb2379e9586441c17d327ad0d21ef3436cb510ef4a09ff97d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_babbage, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main)
Dec 05 07:52:16 np0005546420.localdomain podman[33829]: 2025-12-05 07:52:16.603025899 +0000 UTC m=+0.191814134 container died 39b84c884cbd55ceb2379e9586441c17d327ad0d21ef3436cb510ef4a09ff97d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_babbage, GIT_CLEAN=True, name=rhceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=)
Dec 05 07:52:16 np0005546420.localdomain podman[33850]: 2025-12-05 07:52:16.72586762 +0000 UTC m=+0.115110461 container remove 39b84c884cbd55ceb2379e9586441c17d327ad0d21ef3436cb510ef4a09ff97d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_babbage, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.component=rhceph-container, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, name=rhceph, build-date=2025-11-26T19:44:28Z)
Dec 05 07:52:16 np0005546420.localdomain systemd[1]: libpod-conmon-39b84c884cbd55ceb2379e9586441c17d327ad0d21ef3436cb510ef4a09ff97d.scope: Deactivated successfully.
Dec 05 07:52:16 np0005546420.localdomain podman[33872]: 
Dec 05 07:52:16 np0005546420.localdomain podman[33872]: 2025-12-05 07:52:16.954180367 +0000 UTC m=+0.084815386 container create 265a65268a2fbe94b0cdd5f88fee456ec7ad3de81f84754de257864357631264 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_moser, ceph=True, GIT_CLEAN=True, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z)
Dec 05 07:52:16 np0005546420.localdomain systemd[1]: Started libpod-conmon-265a65268a2fbe94b0cdd5f88fee456ec7ad3de81f84754de257864357631264.scope.
Dec 05 07:52:17 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 07:52:17 np0005546420.localdomain podman[33872]: 2025-12-05 07:52:16.913681842 +0000 UTC m=+0.044316861 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 07:52:17 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea191ba359b0a94a33d84a289a8329127228a5d9aa00014f51def46400afa02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:17 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea191ba359b0a94a33d84a289a8329127228a5d9aa00014f51def46400afa02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 07:52:17 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea191ba359b0a94a33d84a289a8329127228a5d9aa00014f51def46400afa02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
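The 0x7fffffff in the xfs remount messages above is the classic 32-bit time_t ceiling; the kernel is noting that these filesystems can only represent timestamps up to:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00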
Dec 05 07:52:17 np0005546420.localdomain podman[33872]: 2025-12-05 07:52:17.077533485 +0000 UTC m=+0.208168514 container init 265a65268a2fbe94b0cdd5f88fee456ec7ad3de81f84754de257864357631264 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_moser, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64)
Dec 05 07:52:17 np0005546420.localdomain podman[33872]: 2025-12-05 07:52:17.097161292 +0000 UTC m=+0.227796311 container start 265a65268a2fbe94b0cdd5f88fee456ec7ad3de81f84754de257864357631264 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_moser, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 07:52:17 np0005546420.localdomain podman[33872]: 2025-12-05 07:52:17.097434851 +0000 UTC m=+0.228069880 container attach 265a65268a2fbe94b0cdd5f88fee456ec7ad3de81f84754de257864357631264 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_moser, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container)
Dec 05 07:52:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5b9039f2ac1fe8ba139b93be472bf95948dcace7afcc283b8492c13afe2b3a0f-merged.mount: Deactivated successfully.
Dec 05 07:52:17 np0005546420.localdomain ceph-osd[32907]: osd.4 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 21.892 iops: 5604.343 elapsed_sec: 0.535
Dec 05 07:52:17 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [WRN] : OSD bench result of 5604.342698 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.4. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
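The result above for osd.4 follows from the bench parameters logged at 07:52:12 ("bench count 12288000 bsize 4 KiB"), assuming count is the total byte count; the small differences come from elapsed_sec being printed rounded:

    count_bytes, bsize, elapsed = 12288000, 4096, 0.535
    ops = count_bytes / bsize                       # 3000 4-KiB IOs
    print(ops / elapsed)                            # ~5607 IOPS   (logged: 5604.343)
    print(count_bytes / elapsed / (1024 * 1024))    # ~21.9 MiB/s  (logged: 21.892)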
Dec 05 07:52:17 np0005546420.localdomain ceph-osd[32907]: osd.4 0 waiting for initial osdmap
Dec 05 07:52:17 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4[32903]: 2025-12-05T07:52:17.548+0000 7f7912120640 -1 osd.4 0 waiting for initial osdmap
Dec 05 07:52:17 np0005546420.localdomain ceph-osd[32907]: osd.4 19 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 05 07:52:17 np0005546420.localdomain ceph-osd[32907]: osd.4 19 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 05 07:52:17 np0005546420.localdomain ceph-osd[32907]: osd.4 19 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 05 07:52:17 np0005546420.localdomain ceph-osd[32907]: osd.4 19 check_osdmap_features require_osd_release unknown -> reef
Dec 05 07:52:17 np0005546420.localdomain ceph-osd[32907]: osd.4 19 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 07:52:17 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-4[32903]: 2025-12-05T07:52:17.570+0000 7f790d74a640 -1 osd.4 19 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 07:52:17 np0005546420.localdomain ceph-osd[32907]: osd.4 19 set_numa_affinity not setting numa affinity
Dec 05 07:52:17 np0005546420.localdomain ceph-osd[32907]: osd.4 19 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]: [
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:     {
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:         "available": false,
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:         "ceph_device": false,
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:         "lsm_data": {},
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:         "lvs": [],
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:         "path": "/dev/sr0",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:         "rejected_reasons": [
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "Has a FileSystem",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "Insufficient space (<5GB)"
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:         ],
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:         "sys_api": {
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "actuators": null,
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "device_nodes": "sr0",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "human_readable_size": "482.00 KB",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "id_bus": "ata",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "model": "QEMU DVD-ROM",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "nr_requests": "2",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "partitions": {},
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "path": "/dev/sr0",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "removable": "1",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "rev": "2.5+",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "ro": "0",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "rotational": "1",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "sas_address": "",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "sas_device_handle": "",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "scheduler_mode": "mq-deadline",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "sectors": 0,
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "sectorsize": "2048",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "size": 493568.0,
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "support_discard": "0",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "type": "disk",
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:             "vendor": "QEMU"
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:         }
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]:     }
Dec 05 07:52:17 np0005546420.localdomain friendly_moser[33887]: ]
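The array above is the ceph-volume inventory requested with --filter-for-batch in the sudo line at 07:52:15; the only device reported is /dev/sr0 and it is rejected. A sketch of the availability filter, trimmed to the fields the check uses:

    import json

    inventory_json = '''[
        {
            "available": false,
            "path": "/dev/sr0",
            "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]
        }
    ]'''
    for dev in json.loads(inventory_json):
        verdict = "usable" if dev["available"] else "; ".join(dev["rejected_reasons"])
        print(dev["path"], "->", verdict)
    # /dev/sr0 -> Has a FileSystem; Insufficient space (<5GB)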
Dec 05 07:52:17 np0005546420.localdomain systemd[1]: libpod-265a65268a2fbe94b0cdd5f88fee456ec7ad3de81f84754de257864357631264.scope: Deactivated successfully.
Dec 05 07:52:18 np0005546420.localdomain podman[35362]: 2025-12-05 07:52:18.015412076 +0000 UTC m=+0.036619291 container died 265a65268a2fbe94b0cdd5f88fee456ec7ad3de81f84754de257864357631264 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_moser, version=7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 07:52:18 np0005546420.localdomain systemd[1]: tmp-crun.vvvgBf.mount: Deactivated successfully.
Dec 05 07:52:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-aea191ba359b0a94a33d84a289a8329127228a5d9aa00014f51def46400afa02-merged.mount: Deactivated successfully.
Dec 05 07:52:18 np0005546420.localdomain podman[35362]: 2025-12-05 07:52:18.050874098 +0000 UTC m=+0.072081283 container remove 265a65268a2fbe94b0cdd5f88fee456ec7ad3de81f84754de257864357631264 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_moser, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7)
Dec 05 07:52:18 np0005546420.localdomain systemd[1]: libpod-conmon-265a65268a2fbe94b0cdd5f88fee456ec7ad3de81f84754de257864357631264.scope: Deactivated successfully.
Dec 05 07:52:18 np0005546420.localdomain sudo[33777]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:18 np0005546420.localdomain ceph-osd[32907]: osd.4 20 state: booting -> active
Dec 05 07:52:19 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 20 pg[1.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=17/0 les/c/f=18/0/0 sis=20) [2,4,3] r=1 lpr=20 pi=[17,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 07:52:19 np0005546420.localdomain sudo[35377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:52:19 np0005546420.localdomain sudo[35377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:19 np0005546420.localdomain sudo[35377]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:26 np0005546420.localdomain sudo[35392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:52:26 np0005546420.localdomain sudo[35392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:26 np0005546420.localdomain sudo[35392]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:26 np0005546420.localdomain sudo[35407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 07:52:26 np0005546420.localdomain sudo[35407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:26 np0005546420.localdomain systemd[26341]: Starting Mark boot as successful...
Dec 05 07:52:26 np0005546420.localdomain podman[35489]: 2025-12-05 07:52:26.981070235 +0000 UTC m=+0.071650138 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7)
Dec 05 07:52:26 np0005546420.localdomain systemd[26341]: Finished Mark boot as successful.
Dec 05 07:52:27 np0005546420.localdomain podman[35489]: 2025-12-05 07:52:27.099524124 +0000 UTC m=+0.190104027 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=1763362218, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64)
Dec 05 07:52:27 np0005546420.localdomain sudo[35407]: pam_unix(sudo:session): session closed for user root
Dec 05 07:52:27 np0005546420.localdomain sudo[35558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:52:27 np0005546420.localdomain sudo[35558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:52:27 np0005546420.localdomain sudo[35558]: pam_unix(sudo:session): session closed for user root
Dec 05 07:53:28 np0005546420.localdomain sudo[35573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:53:28 np0005546420.localdomain sudo[35573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:53:28 np0005546420.localdomain sudo[35573]: pam_unix(sudo:session): session closed for user root
Dec 05 07:53:28 np0005546420.localdomain sudo[35588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 07:53:28 np0005546420.localdomain sudo[35588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:53:28 np0005546420.localdomain podman[35673]: 2025-12-05 07:53:28.962721597 +0000 UTC m=+0.089217650 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, version=7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, io.openshift.tags=rhceph ceph)
Dec 05 07:53:29 np0005546420.localdomain podman[35673]: 2025-12-05 07:53:29.09331913 +0000 UTC m=+0.219815153 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, release=1763362218, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True)
Dec 05 07:53:29 np0005546420.localdomain sudo[35588]: pam_unix(sudo:session): session closed for user root
Dec 05 07:53:29 np0005546420.localdomain sudo[35738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:53:29 np0005546420.localdomain sudo[35738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:53:29 np0005546420.localdomain sudo[35738]: pam_unix(sudo:session): session closed for user root
Dec 05 07:53:29 np0005546420.localdomain sudo[35753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 07:53:29 np0005546420.localdomain sudo[35753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:53:30 np0005546420.localdomain sudo[35753]: pam_unix(sudo:session): session closed for user root
Dec 05 07:53:30 np0005546420.localdomain sudo[35800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:53:30 np0005546420.localdomain sudo[35800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:53:30 np0005546420.localdomain sudo[35800]: pam_unix(sudo:session): session closed for user root
Dec 05 07:53:34 np0005546420.localdomain sshd[24842]: Received disconnect from 192.168.122.100 port 51680:11: disconnected by user
Dec 05 07:53:34 np0005546420.localdomain sshd[24842]: Disconnected from user zuul 192.168.122.100 port 51680
Dec 05 07:53:34 np0005546420.localdomain sshd[24839]: pam_unix(sshd:session): session closed for user zuul
Dec 05 07:53:34 np0005546420.localdomain systemd[1]: session-13.scope: Deactivated successfully.
Dec 05 07:53:34 np0005546420.localdomain systemd[1]: session-13.scope: Consumed 21.634s CPU time.
Dec 05 07:53:34 np0005546420.localdomain systemd-logind[762]: Session 13 logged out. Waiting for processes to exit.
Dec 05 07:53:34 np0005546420.localdomain systemd-logind[762]: Removed session 13.
Dec 05 07:54:30 np0005546420.localdomain sudo[35815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:54:30 np0005546420.localdomain sudo[35815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:54:30 np0005546420.localdomain sudo[35815]: pam_unix(sudo:session): session closed for user root
Dec 05 07:54:31 np0005546420.localdomain sudo[35830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 07:54:31 np0005546420.localdomain sudo[35830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:54:31 np0005546420.localdomain sudo[35830]: pam_unix(sudo:session): session closed for user root
Dec 05 07:54:32 np0005546420.localdomain sudo[35877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:54:32 np0005546420.localdomain sudo[35877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:54:32 np0005546420.localdomain sudo[35877]: pam_unix(sudo:session): session closed for user root
Dec 05 07:55:32 np0005546420.localdomain sudo[35892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:55:32 np0005546420.localdomain sudo[35892]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:55:32 np0005546420.localdomain sudo[35892]: pam_unix(sudo:session): session closed for user root
Dec 05 07:55:32 np0005546420.localdomain sudo[35907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 07:55:32 np0005546420.localdomain sudo[35907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:55:33 np0005546420.localdomain sudo[35907]: pam_unix(sudo:session): session closed for user root
Dec 05 07:55:33 np0005546420.localdomain sudo[35956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:55:33 np0005546420.localdomain sudo[35956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:55:33 np0005546420.localdomain sudo[35956]: pam_unix(sudo:session): session closed for user root
Dec 05 07:55:52 np0005546420.localdomain systemd[26341]: Created slice User Background Tasks Slice.
Dec 05 07:55:52 np0005546420.localdomain systemd[26341]: Starting Cleanup of User's Temporary Files and Directories...
Dec 05 07:55:52 np0005546420.localdomain systemd[26341]: Finished Cleanup of User's Temporary Files and Directories.
Dec 05 07:56:34 np0005546420.localdomain sudo[35972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:56:34 np0005546420.localdomain sudo[35972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:56:34 np0005546420.localdomain sudo[35972]: pam_unix(sudo:session): session closed for user root
Dec 05 07:56:34 np0005546420.localdomain sudo[35987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 07:56:34 np0005546420.localdomain sudo[35987]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:56:34 np0005546420.localdomain sudo[35987]: pam_unix(sudo:session): session closed for user root
Dec 05 07:56:35 np0005546420.localdomain sudo[36033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:56:35 np0005546420.localdomain sudo[36033]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:56:35 np0005546420.localdomain sudo[36033]: pam_unix(sudo:session): session closed for user root
Dec 05 07:56:58 np0005546420.localdomain sshd[36048]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:56:59 np0005546420.localdomain sshd[36048]: Accepted publickey for zuul from 192.168.122.100 port 36918 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:56:59 np0005546420.localdomain systemd-logind[762]: New session 27 of user zuul.
Dec 05 07:56:59 np0005546420.localdomain systemd[1]: Started Session 27 of User zuul.
Dec 05 07:56:59 np0005546420.localdomain sshd[36048]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 07:56:59 np0005546420.localdomain sudo[36094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sbizufekwoghoovfsopvwrszainbokwp ; /usr/bin/python3
Dec 05 07:56:59 np0005546420.localdomain sudo[36094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:56:59 np0005546420.localdomain python3[36096]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 05 07:56:59 np0005546420.localdomain sudo[36094]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:00 np0005546420.localdomain sudo[36139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fokzaojwndmfbdmpifbxsirqngyouwqh ; /usr/bin/python3
Dec 05 07:57:00 np0005546420.localdomain sudo[36139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:57:00 np0005546420.localdomain python3[36141]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 07:57:00 np0005546420.localdomain sudo[36139]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:00 np0005546420.localdomain sudo[36159]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iyeecfeoadiheyplrsyhungcphljeicu ; /usr/bin/python3
Dec 05 07:57:00 np0005546420.localdomain sudo[36159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:57:00 np0005546420.localdomain python3[36161]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546420.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 05 07:57:00 np0005546420.localdomain useradd[36163]: new group: name=tripleo-admin, GID=1003
Dec 05 07:57:00 np0005546420.localdomain useradd[36163]: new user: name=tripleo-admin, UID=1003, GID=1003, home=/home/tripleo-admin, shell=/bin/bash, from=none
Dec 05 07:57:00 np0005546420.localdomain sudo[36159]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:01 np0005546420.localdomain sudo[36215]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nvcaddbohnwvhqsvrjuvovefznynisvu ; /usr/bin/python3
Dec 05 07:57:01 np0005546420.localdomain sudo[36215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:57:01 np0005546420.localdomain python3[36217]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:57:01 np0005546420.localdomain sudo[36215]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:01 np0005546420.localdomain sudo[36258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efiudgltbatzzoeyxomdohgaynyenvvy ; /usr/bin/python3
Dec 05 07:57:01 np0005546420.localdomain sudo[36258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:57:01 np0005546420.localdomain python3[36260]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764921421.0985522-66669-131979106502363/source _original_basename=tmp_ihvtqkd follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:57:01 np0005546420.localdomain sudo[36258]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:02 np0005546420.localdomain sudo[36288]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktotqbsisepnxdmkwmuwvxqubwwjndly ; /usr/bin/python3
Dec 05 07:57:02 np0005546420.localdomain sudo[36288]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:57:02 np0005546420.localdomain python3[36290]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:57:02 np0005546420.localdomain sudo[36288]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:02 np0005546420.localdomain sudo[36304]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mwhfcqougkabowsunqgjnuidvkrtqyui ; /usr/bin/python3
Dec 05 07:57:02 np0005546420.localdomain sudo[36304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:57:02 np0005546420.localdomain python3[36306]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:57:02 np0005546420.localdomain sudo[36304]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:02 np0005546420.localdomain sudo[36320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udkcgcduwrnqkdhnlzavjocsfhejfswz ; /usr/bin/python3
Dec 05 07:57:02 np0005546420.localdomain sudo[36320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:57:03 np0005546420.localdomain python3[36322]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:57:03 np0005546420.localdomain sudo[36320]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:03 np0005546420.localdomain sudo[36336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tyfxxtsznafrutypypskrllntzcacxjv ; /usr/bin/python3
Dec 05 07:57:03 np0005546420.localdomain sudo[36336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 07:57:03 np0005546420.localdomain python3[36338]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey
                                                          regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:57:03 np0005546420.localdomain sudo[36336]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:04 np0005546420.localdomain python3[36352]: ansible-ping Invoked with data=pong
Dec 05 07:57:15 np0005546420.localdomain sshd[36354]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 07:57:15 np0005546420.localdomain sshd[36354]: Accepted publickey for tripleo-admin from 192.168.122.100 port 38146 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 07:57:15 np0005546420.localdomain systemd-logind[762]: New session 28 of user tripleo-admin.
Dec 05 07:57:15 np0005546420.localdomain systemd[1]: Created slice User Slice of UID 1003.
Dec 05 07:57:15 np0005546420.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 05 07:57:15 np0005546420.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 05 07:57:15 np0005546420.localdomain systemd[1]: Starting User Manager for UID 1003...
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Queued start job for default target Main User Target.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Created slice User Application Slice.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Reached target Paths.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Reached target Timers.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Starting D-Bus User Message Bus Socket...
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Starting Create User's Volatile Files and Directories...
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Finished Create User's Volatile Files and Directories.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Listening on D-Bus User Message Bus Socket.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Reached target Sockets.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Reached target Basic System.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Reached target Main User Target.
Dec 05 07:57:15 np0005546420.localdomain systemd[36358]: Startup finished in 120ms.
Dec 05 07:57:15 np0005546420.localdomain systemd[1]: Started User Manager for UID 1003.
Dec 05 07:57:15 np0005546420.localdomain systemd[1]: Started Session 28 of User tripleo-admin.
Dec 05 07:57:15 np0005546420.localdomain sshd[36354]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 05 07:57:16 np0005546420.localdomain sudo[36417]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fneuiswjydrwmksczqmucaaggevxkkds ; /usr/bin/python3
Dec 05 07:57:16 np0005546420.localdomain sudo[36417]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:16 np0005546420.localdomain python3[36419]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 07:57:16 np0005546420.localdomain sudo[36417]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:21 np0005546420.localdomain sudo[36437]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hibskwwzgvkusclelumcwziuliuzzukc ; /usr/bin/python3
Dec 05 07:57:21 np0005546420.localdomain sudo[36437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:21 np0005546420.localdomain python3[36439]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Dec 05 07:57:21 np0005546420.localdomain sudo[36437]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:22 np0005546420.localdomain sudo[36453]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hswdxcgmrkgofmunrfwmgsctcyyfvwql ; /usr/bin/python3
Dec 05 07:57:22 np0005546420.localdomain sudo[36453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:22 np0005546420.localdomain python3[36455]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 05 07:57:22 np0005546420.localdomain sudo[36453]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:22 np0005546420.localdomain sudo[36501]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjnntqfefhmpbntwvfucxteryirpqpjy ; /usr/bin/python3
Dec 05 07:57:22 np0005546420.localdomain sudo[36501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:22 np0005546420.localdomain python3[36503]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.mbu41trbtmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:57:22 np0005546420.localdomain sudo[36501]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:23 np0005546420.localdomain sudo[36531]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zxoknnbmnjmrmtxvunfeuodpprxnlzji ; /usr/bin/python3
Dec 05 07:57:23 np0005546420.localdomain sudo[36531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:23 np0005546420.localdomain python3[36533]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.mbu41trbtmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:57:23 np0005546420.localdomain sudo[36531]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:24 np0005546420.localdomain sudo[36547]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdegvogdymdfiwuczmqilotpnepbyscw ; /usr/bin/python3
Dec 05 07:57:24 np0005546420.localdomain sudo[36547]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:24 np0005546420.localdomain python3[36549]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.mbu41trbtmphosts insertbefore=BOF block=172.17.0.106 np0005546419.localdomain np0005546419
                                                         172.18.0.106 np0005546419.storage.localdomain np0005546419.storage
                                                         172.20.0.106 np0005546419.storagemgmt.localdomain np0005546419.storagemgmt
                                                         172.17.0.106 np0005546419.internalapi.localdomain np0005546419.internalapi
                                                         172.19.0.106 np0005546419.tenant.localdomain np0005546419.tenant
                                                         192.168.122.106 np0005546419.ctlplane.localdomain np0005546419.ctlplane
                                                         172.17.0.107 np0005546420.localdomain np0005546420
                                                         172.18.0.107 np0005546420.storage.localdomain np0005546420.storage
                                                         172.20.0.107 np0005546420.storagemgmt.localdomain np0005546420.storagemgmt
                                                         172.17.0.107 np0005546420.internalapi.localdomain np0005546420.internalapi
                                                         172.19.0.107 np0005546420.tenant.localdomain np0005546420.tenant
                                                         192.168.122.107 np0005546420.ctlplane.localdomain np0005546420.ctlplane
                                                         172.17.0.108 np0005546421.localdomain np0005546421
                                                         172.18.0.108 np0005546421.storage.localdomain np0005546421.storage
                                                         172.20.0.108 np0005546421.storagemgmt.localdomain np0005546421.storagemgmt
                                                         172.17.0.108 np0005546421.internalapi.localdomain np0005546421.internalapi
                                                         172.19.0.108 np0005546421.tenant.localdomain np0005546421.tenant
                                                         192.168.122.108 np0005546421.ctlplane.localdomain np0005546421.ctlplane
                                                         172.17.0.103 np0005546415.localdomain np0005546415
                                                         172.18.0.103 np0005546415.storage.localdomain np0005546415.storage
                                                         172.20.0.103 np0005546415.storagemgmt.localdomain np0005546415.storagemgmt
                                                         172.17.0.103 np0005546415.internalapi.localdomain np0005546415.internalapi
                                                         172.19.0.103 np0005546415.tenant.localdomain np0005546415.tenant
                                                         192.168.122.103 np0005546415.ctlplane.localdomain np0005546415.ctlplane
                                                         172.17.0.104 np0005546416.localdomain np0005546416
                                                         172.18.0.104 np0005546416.storage.localdomain np0005546416.storage
                                                         172.20.0.104 np0005546416.storagemgmt.localdomain np0005546416.storagemgmt
                                                         172.17.0.104 np0005546416.internalapi.localdomain np0005546416.internalapi
                                                         172.19.0.104 np0005546416.tenant.localdomain np0005546416.tenant
                                                         192.168.122.104 np0005546416.ctlplane.localdomain np0005546416.ctlplane
                                                         172.17.0.105 np0005546418.localdomain np0005546418
                                                         172.18.0.105 np0005546418.storage.localdomain np0005546418.storage
                                                         172.20.0.105 np0005546418.storagemgmt.localdomain np0005546418.storagemgmt
                                                         172.17.0.105 np0005546418.internalapi.localdomain np0005546418.internalapi
                                                         172.19.0.105 np0005546418.tenant.localdomain np0005546418.tenant
                                                         192.168.122.105 np0005546418.ctlplane.localdomain np0005546418.ctlplane
                                                         
                                                         192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
                                                         192.168.122.99  overcloud.ctlplane.localdomain
                                                         172.18.0.163  overcloud.storage.localdomain
                                                         172.20.0.242  overcloud.storagemgmt.localdomain
                                                         172.17.0.216  overcloud.internalapi.localdomain
                                                         172.21.0.170  overcloud.localdomain
                                                          marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:57:24 np0005546420.localdomain sudo[36547]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:24 np0005546420.localdomain sudo[36563]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qrselxjlaneahabreameqanvwlpynour ; /usr/bin/python3
Dec 05 07:57:24 np0005546420.localdomain sudo[36563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:24 np0005546420.localdomain python3[36565]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.mbu41trbtmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:57:24 np0005546420.localdomain sudo[36563]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:25 np0005546420.localdomain sudo[36580]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvxohkxjgncfxcthwyakkzwtvnrucarn ; /usr/bin/python3
Dec 05 07:57:25 np0005546420.localdomain sudo[36580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:25 np0005546420.localdomain python3[36582]: ansible-file Invoked with path=/tmp/ansible.mbu41trbtmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:57:25 np0005546420.localdomain sudo[36580]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:26 np0005546420.localdomain sudo[36596]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udgpsvxqwwvlqsjuidgkiwlbgfnhqkqz ; /usr/bin/python3
Dec 05 07:57:26 np0005546420.localdomain sudo[36596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:26 np0005546420.localdomain python3[36598]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:57:26 np0005546420.localdomain sudo[36596]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:26 np0005546420.localdomain sudo[36613]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nospojwzdhlawhnzfcaucuxyynhfvrtg ; /usr/bin/python3
Dec 05 07:57:26 np0005546420.localdomain sudo[36613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:26 np0005546420.localdomain python3[36615]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 07:57:30 np0005546420.localdomain sudo[36613]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:31 np0005546420.localdomain sudo[36633]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-poddnglfraacxfovhhpzqpfdmabktytn ; /usr/bin/python3
Dec 05 07:57:31 np0005546420.localdomain sudo[36633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:31 np0005546420.localdomain python3[36635]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:57:31 np0005546420.localdomain sudo[36633]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:31 np0005546420.localdomain sudo[36650]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gnogipmudzyyesbahnbqgefebwjxuuaz ; /usr/bin/python3
Dec 05 07:57:31 np0005546420.localdomain sudo[36650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:57:32 np0005546420.localdomain python3[36652]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 07:57:35 np0005546420.localdomain sudo[36657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:57:35 np0005546420.localdomain sudo[36657]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:57:35 np0005546420.localdomain sudo[36657]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:35 np0005546420.localdomain sudo[36672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 07:57:35 np0005546420.localdomain sudo[36672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:57:36 np0005546420.localdomain sudo[36672]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:37 np0005546420.localdomain sudo[36719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:57:37 np0005546420.localdomain sudo[36719]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:57:37 np0005546420.localdomain sudo[36719]: pam_unix(sudo:session): session closed for user root
Dec 05 07:57:48 np0005546420.localdomain groupadd[36901]: group added to /etc/group: name=puppet, GID=52
Dec 05 07:57:48 np0005546420.localdomain groupadd[36901]: group added to /etc/gshadow: name=puppet
Dec 05 07:57:48 np0005546420.localdomain groupadd[36901]: new group: name=puppet, GID=52
Dec 05 07:57:48 np0005546420.localdomain useradd[36908]: new user: name=puppet, UID=52, GID=52, home=/var/lib/puppet, shell=/sbin/nologin, from=none
Dec 05 07:58:37 np0005546420.localdomain sudo[37680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:58:37 np0005546420.localdomain sudo[37680]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:58:37 np0005546420.localdomain sudo[37680]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:37 np0005546420.localdomain sudo[37695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 07:58:37 np0005546420.localdomain sudo[37695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:58:38 np0005546420.localdomain sudo[37695]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:38 np0005546420.localdomain sudo[37742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:58:38 np0005546420.localdomain sudo[37742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:58:38 np0005546420.localdomain sudo[37742]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:43 np0005546420.localdomain kernel: SELinux:  Converting 2700 SID table entries...
Dec 05 07:58:43 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 07:58:43 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 07:58:43 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 07:58:43 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 07:58:43 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 07:58:43 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 07:58:43 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 07:58:43 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 05 07:58:43 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 07:58:43 np0005546420.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 05 07:58:43 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:58:43 np0005546420.localdomain systemd-rc-local-generator[37910]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:58:43 np0005546420.localdomain systemd-sysv-generator[37915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:58:43 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:58:43 np0005546420.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 07:58:44 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 07:58:44 np0005546420.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 05 07:58:44 np0005546420.localdomain systemd[1]: run-r5f701621eaaf432c9577be82108ef0ef.service: Deactivated successfully.
Dec 05 07:58:45 np0005546420.localdomain sudo[36650]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:48 np0005546420.localdomain sudo[38326]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbajmtkzeiuvmlltgezphhhxqbufpsep ; /usr/bin/python3
Dec 05 07:58:48 np0005546420.localdomain sudo[38326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:48 np0005546420.localdomain python3[38328]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:58:49 np0005546420.localdomain sudo[38326]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:49 np0005546420.localdomain sudo[38465]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfkvpcwndhiseruzxlqrzzjsthlgdgig ; /usr/bin/python3
Dec 05 07:58:49 np0005546420.localdomain sudo[38465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:50 np0005546420.localdomain python3[38467]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 07:58:50 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:58:50 np0005546420.localdomain systemd-sysv-generator[38498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:58:50 np0005546420.localdomain systemd-rc-local-generator[38492]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:58:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:58:50 np0005546420.localdomain sudo[38465]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:50 np0005546420.localdomain sudo[38519]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bhwvpyfcdkrlqbgnpaakffsccumzsdnw ; /usr/bin/python3
Dec 05 07:58:50 np0005546420.localdomain sudo[38519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:50 np0005546420.localdomain python3[38521]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:58:50 np0005546420.localdomain sudo[38519]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:51 np0005546420.localdomain sudo[38535]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjztpctmqpihjinmhfvlsjmrclnmylyp ; /usr/bin/python3
Dec 05 07:58:51 np0005546420.localdomain sudo[38535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:51 np0005546420.localdomain python3[38537]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:58:51 np0005546420.localdomain sudo[38535]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:51 np0005546420.localdomain sudo[38552]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbotlfiuztpugjbzqviurtbpbkanahqt ; /usr/bin/python3
Dec 05 07:58:51 np0005546420.localdomain sudo[38552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:52 np0005546420.localdomain python3[38554]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 07:58:52 np0005546420.localdomain sudo[38552]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:52 np0005546420.localdomain sudo[38570]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubnxyyispghijroprbmbnsdcincmsosu ; /usr/bin/python3
Dec 05 07:58:52 np0005546420.localdomain sudo[38570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:52 np0005546420.localdomain python3[38572]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:58:52 np0005546420.localdomain sudo[38570]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:53 np0005546420.localdomain sudo[38588]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eknggwhtjsyzwxkdyzcnayqqmxqjvpug ; /usr/bin/python3
Dec 05 07:58:53 np0005546420.localdomain sudo[38588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:53 np0005546420.localdomain python3[38590]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:58:53 np0005546420.localdomain sudo[38588]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:53 np0005546420.localdomain sudo[38606]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-viifpganfrxaqvahpavmnbquicwcxtxo ; /usr/bin/python3
Dec 05 07:58:53 np0005546420.localdomain sudo[38606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:53 np0005546420.localdomain python3[38608]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 07:58:53 np0005546420.localdomain systemd[1]: Reloading Network Manager...
Dec 05 07:58:53 np0005546420.localdomain NetworkManager[5963]: <info>  [1764921533.8517] audit: op="reload" arg="0" pid=38611 uid=0 result="success"
Dec 05 07:58:53 np0005546420.localdomain NetworkManager[5963]: <info>  [1764921533.8527] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Dec 05 07:58:53 np0005546420.localdomain NetworkManager[5963]: <info>  [1764921533.8528] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 05 07:58:53 np0005546420.localdomain systemd[1]: Reloaded Network Manager.
Dec 05 07:58:53 np0005546420.localdomain sudo[38606]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:54 np0005546420.localdomain sudo[38625]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxdqdwnjzassunqfvhvmjvtnmjxufjnu ; /usr/bin/python3
Dec 05 07:58:54 np0005546420.localdomain sudo[38625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:54 np0005546420.localdomain python3[38627]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:58:54 np0005546420.localdomain sudo[38625]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:54 np0005546420.localdomain sudo[38642]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dkbwuljackrugwsnvlebrhjziiqdzbje ; /usr/bin/python3
Dec 05 07:58:54 np0005546420.localdomain sudo[38642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:54 np0005546420.localdomain python3[38644]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 07:58:54 np0005546420.localdomain sudo[38642]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:55 np0005546420.localdomain sudo[38660]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhejzqbdbslqewkbtfbzhjoimzwpqhhg ; /usr/bin/python3
Dec 05 07:58:55 np0005546420.localdomain sudo[38660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:55 np0005546420.localdomain python3[38662]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 07:58:55 np0005546420.localdomain sudo[38660]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:55 np0005546420.localdomain sudo[38676]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjvtfbgkmcetutmgntifhkgnoxltovoj ; /usr/bin/python3
Dec 05 07:58:55 np0005546420.localdomain sudo[38676]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:55 np0005546420.localdomain python3[38678]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:58:55 np0005546420.localdomain sudo[38676]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:56 np0005546420.localdomain sudo[38692]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfpfzpcngegovmftxnjryihjjdjdmilj ; /usr/bin/python3
Dec 05 07:58:56 np0005546420.localdomain sudo[38692]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:56 np0005546420.localdomain python3[38694]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 05 07:58:56 np0005546420.localdomain sudo[38692]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:56 np0005546420.localdomain sudo[38708]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkriytajatgjjlkylzsnyjjpiyusfaei ; /usr/bin/python3
Dec 05 07:58:56 np0005546420.localdomain sudo[38708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:56 np0005546420.localdomain python3[38710]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 07:58:56 np0005546420.localdomain sudo[38708]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:57 np0005546420.localdomain sudo[38724]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rkdwkyjhfqtfflgqsxyollvgtksmcfph ; /usr/bin/python3
Dec 05 07:58:57 np0005546420.localdomain sudo[38724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:57 np0005546420.localdomain python3[38726]: ansible-blockinfile Invoked with path=/tmp/ansible.1hyebczp block=[192.168.122.106]*,[np0005546419.ctlplane.localdomain]*,[172.17.0.106]*,[np0005546419.internalapi.localdomain]*,[172.18.0.106]*,[np0005546419.storage.localdomain]*,[172.20.0.106]*,[np0005546419.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005546419.tenant.localdomain]*,[np0005546419.localdomain]*,[np0005546419]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEuRXji6XnIqsABVq0Qqof5KS4SAvlk4RgdtizNBr7m3ROTYSahE5AJNLCTMugmJtGXewQjNvC8Gcrwjha423XMFi1NpQBCu/U72HR15GJ4x0DRTlvDzeuyqmAuTQEBnQcjNlSIQ4FOJnMjeI6JzpCzCvQ8kOvkGMj6A3Hg/syH7t97g6vL8Cua473lHIav6GTZkm3SmFKQ3Xwj9z3cxUxUnrSgES1zowNRjtoEtPZjSgoF5b8nFIjaQf2ZwMcV0lopTVTvmRVyYDvsR8wFpqMebvWZkW7NQNAaUhRwiYfvQM5/uX1R294FSkW4UiMA5xWT6BMUvtJzexoxZwmrJN3E8I5NLL2KsN33G/6CHA5roanPqECSsRgwyhgQ8bARZgymqoTR9u/p8RRwj7J+x+qJCKMrG+inICVI/o3oOAD2Kdc2rFHXCzdC7sNhjF9/0HPZy8Dt2phAaMcAs4ueBT1Qv/WP22vx3lBguSxEC09rfl+zsp5KAd3jOr9hJBn34E=
                                                         [192.168.122.107]*,[np0005546420.ctlplane.localdomain]*,[172.17.0.107]*,[np0005546420.internalapi.localdomain]*,[172.18.0.107]*,[np0005546420.storage.localdomain]*,[172.20.0.107]*,[np0005546420.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005546420.tenant.localdomain]*,[np0005546420.localdomain]*,[np0005546420]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI9Y4hTQymJJTi7lwfGVKCetJ5Q4auPNryuYcUqXhqNAkgJUht3nxbV0LL2zw4tBsorx+hqOtHy6QfyMWc4r5hOjGRUOhC2uarhQho1134qkdAt7Wd1XMZFeslg1Vk7F8G5TciLUUJBsqvfKAsGc9/SQS5rWRQ90ssw6RtnrhuCDasOzJIdPA2tYjLQ2emSbjgfd1OuXSpKpSkko9b1cwE6trMzU8G7508xssCoDz66P8kF4Kf+OGT8iWzM8xKE0cB8b50ltkwnrxsK5Hwc8zz9LoLSU01AS9CNm299lqjPgZZhTOu6zSXvN6p4+CylbKvJO19AnMSzMEJZEPoHNCQ2SM+/LxQ9rIH8MAVrpw9SUndYbtXTvUkEsZRYAkH64dyfn+9kcYTPaf/oqkrvxuc6Nlk/uZ79dbjW0Vc+/XJXX9F7hLsdu3PK0kt4oBXIG9B9jdKXVobNiH7lsArspEnZ13zzspPyojH0UV6v0AfaZgCMP8b7Erg9y9+HPradoE=
                                                         [192.168.122.108]*,[np0005546421.ctlplane.localdomain]*,[172.17.0.108]*,[np0005546421.internalapi.localdomain]*,[172.18.0.108]*,[np0005546421.storage.localdomain]*,[172.20.0.108]*,[np0005546421.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005546421.tenant.localdomain]*,[np0005546421.localdomain]*,[np0005546421]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOnO4FOEzvhvnfZvvg9C7oar+ml2He45IxleHN54kwSVAvs2ltf36WvXeS2XAi7WgRxM+SZhG+GxbHWO/u3KqZQXbOWufPkzZF3oMisaK3ZDVZLqKvlrQZf2+29fCEYI9L5zPC/HNP6jqIyDlBSXGYPLQgUjpxxieUICaQ0fIp4WhlqviONuO0ZTwWQdPf5CYPALkVZ74wN1aGPulFSaGYretHzLaUvZvZQVL4q4PRI+7YpxvT1NyDOyTvw5u8TpzZXKp67nFfFtlbX8BvY9f1FVlgzcPwQvxzYWeJy5j9Cv0xoJ56dXmUueau39rhB/CBpKfhymLq91H1nh+F175gPPt5KZA5cfZg7fWlshSRjozK3Z53WpNGrpQtCIjhxblJ5Z3mxAPGcyYYOXoG/iv/IDwMvhkswL2Cqb6/ww6osSP2EJQIjWsS+CoYjynw+g7e++29qN8QiRLOqOuges85TiZ2vxP5lkvs8V3oAF+k4OsPOGPKzibXNDl5PyGwhVU=
                                                         [192.168.122.103]*,[np0005546415.ctlplane.localdomain]*,[172.17.0.103]*,[np0005546415.internalapi.localdomain]*,[172.18.0.103]*,[np0005546415.storage.localdomain]*,[172.20.0.103]*,[np0005546415.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005546415.tenant.localdomain]*,[np0005546415.localdomain]*,[np0005546415]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkD6dMrlstq/08/i19MSGJhEADExfxigVjJJQ88FcvZHbzGOgQVpolfx1koKTyWN+Arobw6wFmJvZLTo8Bb6WoVTK7S5Ea1OnfJHT61JMRl/WjdLjR5dZtwV62H6dAQuwXtLXjjbx/PIaHGhjGeQ3mAmwEgTU06ey152S+ChTCN3ft7vCFw4DHXAly+guOSgi5JGOb3gMATYrMGVu90ONPr0mfPn6T6oBZQPEWvdKFCulrlj9zVZu7HsSSRQFMxH7KgZJzpkLllA4WVfnGbj38AXD2k/HkyLfYzY27ZsoOL1HyT4ardSL2aUb55JnBNuOxkTcFwxKYlyCL/gWk20rx9nJe7mp5Rl6iK4a8UA5SEKO0sudwL9uZ4JEMNAAViZ+5xpl7M0+YowEMffNSUrVJ8/SSa0beqOu9JTnZ+cEwNCNkJJM/h8ajcjEaAHeRXDkTkujntrvNR6KskZa+g94xtpw1nrG6xl0yzppj6k4nsmcRGGlicsbZEc9SZOW+qaM=
                                                         [192.168.122.104]*,[np0005546416.ctlplane.localdomain]*,[172.17.0.104]*,[np0005546416.internalapi.localdomain]*,[172.18.0.104]*,[np0005546416.storage.localdomain]*,[172.20.0.104]*,[np0005546416.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005546416.tenant.localdomain]*,[np0005546416.localdomain]*,[np0005546416]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBknSonWn2K7oVrigtLeGeXWlaMY1uJqi1743zO2mguB8ceS1WtlyZavpdSnzpqiGiIwguuYuBxNKWaZMI/9XZyZKspYWl5eArdwgxtnKFyHWmHop7/MeX+Y+J7CrfiQ8MajXX1sy1YpxunvdWo7DK3K9DJfTaJ6onr8amsw55w0Pf5HOW0UBGE+AqFmTy/5btxUh4cKFDwRjGeJJps2YFr/p9mdITdZy6sxC+0QCi9XHI7FrpRbYfK0zSSrOBpixOr0sahUWL/3ZUVF5uiJbGTaihxnFrAN3SqoJsWJNJADqmp+E0K3oSw2xsGEvRz02E5n3+GqaYejfpUMdLjvSmTfEKVqlMiL8M0AtBvfeP7KlZCpABiuvopbKIXNsjFfG1HXkFrFHbCgRsfmg7e+8ThU6J66lb2cJhHrtKuP+uePggolCX4bqdv8abdxV9keT+DJCOZ6iMJnDTI8ggTwMTBVwykvMZXIhwiJruh8oACUYaubPkkGSz4VhPIqfSch0=
                                                         [192.168.122.105]*,[np0005546418.ctlplane.localdomain]*,[172.17.0.105]*,[np0005546418.internalapi.localdomain]*,[172.18.0.105]*,[np0005546418.storage.localdomain]*,[172.20.0.105]*,[np0005546418.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005546418.tenant.localdomain]*,[np0005546418.localdomain]*,[np0005546418]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD9S5Z7rzST5j/fEC81CBzjbVnN/b1iPQZ35oKFbDVSZ3xrScwTjVDnymCRMpkG7ZjaGvyyMSy6sRwzcBVzWZGF94EKpFeYMdUdfpsK2dbevK8wHAAm7cfqUZ5sgTKGF4TOZZ08RJZ9Xc1fGGKeE0bg2QCqoKA7YzWR++lzm/LXf8DTXUhBN+xvwQ3rVN4Y8AIlXB2YS/FAkc2s3u95spaTjW0hbNonz/q6QiuuElDTfezQ9IkzHyYOFqIxYRnttkUuXTp5FodFYAlU3VOLHCoI6tZQk2f1Kt1ZZX4Umqd2RA4zu0IBbblyns+2Jy/Jg5MuKEZSC5X2xQ/tUeClu2+ZHxwKRMxnwAgAiYuC5ryGQuyc0vphUN3uE6JIxKd+8YgAscYSYvc7VoWqodvvt8eIxoXCDh1XbIsKKbWqosjwoNWAoNZUh+LcHIDskM+7FNALGudbtKgKRazoMRvGbZPWQr8FB2eTWiqo2TOBwHArzAXZmnKcg+ad9eMQtW6PX0M=
                                                          create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:58:57 np0005546420.localdomain sudo[38724]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:57 np0005546420.localdomain sudo[38740]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ctubjywoffobfopfqmbukhquoevgdbar ; /usr/bin/python3
Dec 05 07:58:57 np0005546420.localdomain sudo[38740]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:58 np0005546420.localdomain python3[38742]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.1hyebczp' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:58:58 np0005546420.localdomain sudo[38740]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:58 np0005546420.localdomain sudo[38758]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ijaowyilrtlwitwqarrokxoqbnicjnyj ; /usr/bin/python3
Dec 05 07:58:58 np0005546420.localdomain sudo[38758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:58 np0005546420.localdomain python3[38760]: ansible-file Invoked with path=/tmp/ansible.1hyebczp state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:58:58 np0005546420.localdomain sudo[38758]: pam_unix(sudo:session): session closed for user root
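The preceding entries show how the deployment refreshes SSH host keys across the overcloud: an ANSIBLE MANAGED BLOCK of records (one line per node, listing every [alias]* pattern for its ctlplane, internalapi, storage, storagemgmt and tenant addresses) is first assembled in a temp file, which the shell task then installs as /etc/ssh/ssh_known_hosts before the temp copy is removed. A minimal sketch of reading those records back (Python, standard library only; the function and its output are illustrative assumptions, not part of the deployment tooling):

    def parse_known_hosts(path="/etc/ssh/ssh_known_hosts"):
        """Map each node's short name to its key type and alias patterns."""
        entries = {}
        with open(path) as fh:
            for line in fh:
                parts = line.split()
                # Skip blanks and the '# BEGIN/END ANSIBLE MANAGED BLOCK' markers.
                if line.startswith("#") or len(parts) < 3:
                    continue
                patterns, key_type = parts[0], parts[1]
                aliases = [p.strip("[]*") for p in patterns.split(",")]
                # The bare short hostname is the last pattern on each record.
                entries[aliases[-1]] = (key_type, aliases)
        return entries

    for host, (ktype, aliases) in parse_known_hosts().items():
        print(f"{host}: {ktype}, {len(aliases)} aliases")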
Dec 05 07:58:59 np0005546420.localdomain sudo[38774]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ruypheeqvsnnfggfjsexgfcffzcrbomt ; /usr/bin/python3
Dec 05 07:58:59 np0005546420.localdomain sudo[38774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:59 np0005546420.localdomain python3[38776]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 07:58:59 np0005546420.localdomain sudo[38774]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:59 np0005546420.localdomain sudo[38790]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avehwgxyzklumsqjhpcxoioavselabbf ; /usr/bin/python3
Dec 05 07:58:59 np0005546420.localdomain sudo[38790]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:58:59 np0005546420.localdomain python3[38792]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:58:59 np0005546420.localdomain sudo[38790]: pam_unix(sudo:session): session closed for user root
Dec 05 07:58:59 np0005546420.localdomain sudo[38808]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypzxuttwvlooirfzvngtzhjexpyjdsuj ; /usr/bin/python3
Dec 05 07:58:59 np0005546420.localdomain sudo[38808]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:00 np0005546420.localdomain python3[38810]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:59:00 np0005546420.localdomain sudo[38808]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:00 np0005546420.localdomain sudo[38827]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nnhjnmcbvzknbbkrnyjdienqpyudpygh ; /usr/bin/python3
Dec 05 07:59:00 np0005546420.localdomain sudo[38827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:00 np0005546420.localdomain python3[38829]: ansible-community.general.cloud_init_data_facts Invoked with filter=status
Dec 05 07:59:00 np0005546420.localdomain sudo[38827]: pam_unix(sudo:session): session closed for user root
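Before touching cloud-init-dependent configuration, the play establishes whether cloud-init is in use at all: it asks systemd for the unit's active/enabled state, greps the kernel command line for a cloud-init=disabled opt-out, and then queries community.general.cloud_init_data_facts for the reported status. A rough standalone equivalent of the first two probes (Python sketch; illustrative only):

    import subprocess

    def cloud_init_probe():
        # systemctl is-active / is-enabled print the state and exit non-zero
        # when inactive/disabled, which is why the shell task chains them with ||.
        active = subprocess.run(["systemctl", "is-active", "cloud-init.service"],
                                capture_output=True, text=True).stdout.strip()
        enabled = subprocess.run(["systemctl", "is-enabled", "cloud-init.service"],
                                 capture_output=True, text=True).stdout.strip()
        with open("/proc/cmdline") as fh:
            opted_out = "cloud-init=disabled" in fh.read()
        return active, enabled, opted_out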
Dec 05 07:59:00 np0005546420.localdomain sudo[38843]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajoksnpkwelxvcjgorsqbseohsnremnl ; /usr/bin/python3
Dec 05 07:59:00 np0005546420.localdomain sudo[38843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:00 np0005546420.localdomain sudo[38843]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:01 np0005546420.localdomain sudo[38891]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-joakdsvbdlnwrornywayyuundgywrdoe ; /usr/bin/python3
Dec 05 07:59:01 np0005546420.localdomain sudo[38891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:01 np0005546420.localdomain sudo[38891]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:01 np0005546420.localdomain sudo[38934]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbkojsenkhtcytejhqjvtpwuerhzoqqf ; /usr/bin/python3
Dec 05 07:59:01 np0005546420.localdomain sudo[38934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:01 np0005546420.localdomain sudo[38934]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:03 np0005546420.localdomain sudo[38964]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uhfcfriezpiecgktcxapeatfsrpdrxrz ; /usr/bin/python3
Dec 05 07:59:03 np0005546420.localdomain sudo[38964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:03 np0005546420.localdomain python3[38966]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:59:03 np0005546420.localdomain sudo[38964]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:03 np0005546420.localdomain sudo[38981]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bfwgiudbiqycbhwwmfbshlbysjaiojmr ; /usr/bin/python3
Dec 05 07:59:03 np0005546420.localdomain sudo[38981]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:03 np0005546420.localdomain python3[38983]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 07:59:06 np0005546420.localdomain dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Dec 05 07:59:06 np0005546420.localdomain dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Dec 05 07:59:06 np0005546420.localdomain dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 07:59:07 np0005546420.localdomain systemd-sysv-generator[39080]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 07:59:07 np0005546420.localdomain systemd-rc-local-generator[39075]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: tuned.service: Deactivated successfully.
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: tuned.service: Consumed 1.947s CPU time.
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 05 07:59:07 np0005546420.localdomain systemd[1]: run-r445e8356b90d4d73a259031100e531c9.service: Deactivated successfully.
Dec 05 07:59:08 np0005546420.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Dec 05 07:59:09 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 07:59:09 np0005546420.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 05 07:59:09 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 07:59:09 np0005546420.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 05 07:59:09 np0005546420.localdomain systemd[1]: run-r05cafa52624e403e91887542e958c97b.service: Deactivated successfully.
Dec 05 07:59:09 np0005546420.localdomain sudo[38981]: pam_unix(sudo:session): session closed for user root
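The tuned block follows a probe-then-converge shape: rpm -q --whatprovides checks whether the two packages are already present, and the dnf module installs them when needed. The dbus-broker reloads, the systemd daemon reload, the man-db-cache-update runs and the tuned stop/start around 07:59:07 are side effects of that RPM transaction (package scriptlets), not separate play tasks. A compressed sketch of the same probe-then-install logic (Python; command lines taken from the entries above):

    import subprocess

    PKGS = ["tuned", "tuned-profiles-cpu-partitioning"]

    def ensure_installed(pkgs=PKGS):
        # rpm exits non-zero if any requested capability is unprovided.
        probe = subprocess.run(["rpm", "-q", "--whatprovides", *pkgs],
                               capture_output=True, text=True)
        if probe.returncode != 0:
            subprocess.run(["dnf", "-y", "install", *pkgs], check=True)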
Dec 05 07:59:10 np0005546420.localdomain sudo[39421]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cispmqtxzpdbbcdwfdghoamevcqohpjt ; /usr/bin/python3
Dec 05 07:59:10 np0005546420.localdomain sudo[39421]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:10 np0005546420.localdomain python3[39423]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 07:59:11 np0005546420.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 05 07:59:11 np0005546420.localdomain systemd[1]: tuned.service: Deactivated successfully.
Dec 05 07:59:11 np0005546420.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 05 07:59:11 np0005546420.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 05 07:59:12 np0005546420.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Dec 05 07:59:12 np0005546420.localdomain sudo[39421]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:13 np0005546420.localdomain sudo[39616]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dynxyscrjriwvaycowytxmaaghgrcvof ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Dec 05 07:59:13 np0005546420.localdomain sudo[39616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:13 np0005546420.localdomain python3[39618]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:59:13 np0005546420.localdomain sudo[39616]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:13 np0005546420.localdomain sudo[39633]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqfvtehcfwpkdnpxohxdhkxgccihbveg ; /usr/bin/python3
Dec 05 07:59:13 np0005546420.localdomain sudo[39633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:13 np0005546420.localdomain python3[39635]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Dec 05 07:59:13 np0005546420.localdomain sudo[39633]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:14 np0005546420.localdomain sudo[39649]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rnomswgxwyavjdopblowmcvrunyvymtu ; /usr/bin/python3
Dec 05 07:59:14 np0005546420.localdomain sudo[39649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:14 np0005546420.localdomain python3[39651]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 07:59:14 np0005546420.localdomain sudo[39649]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:14 np0005546420.localdomain sudo[39665]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfmibxgjumgnrwlpqlwieasqgyxergbl ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Dec 05 07:59:14 np0005546420.localdomain sudo[39665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:14 np0005546420.localdomain python3[39667]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:59:15 np0005546420.localdomain sudo[39665]: pam_unix(sudo:session): session closed for user root
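After restarting tuned, the play resolves tuned-adm on PATH, slurps /etc/tuned/active_profile, stats the profile's variables file, and only then applies throughput-performance; the tuned-adm call at 07:59:14 implies the recorded profile did not already match. The compare-before-apply step in sketch form (Python; illustrative):

    import subprocess

    def ensure_profile(desired="throughput-performance"):
        # /etc/tuned/active_profile records the profile tuned last applied.
        try:
            with open("/etc/tuned/active_profile") as fh:
                current = fh.read().strip()
        except OSError:
            current = ""
        if current != desired:
            subprocess.run(["tuned-adm", "profile", desired], check=True)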
Dec 05 07:59:16 np0005546420.localdomain sudo[39685]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjjicosqlguvfthmxzxsumokadtkzdjb ; /usr/bin/python3
Dec 05 07:59:16 np0005546420.localdomain sudo[39685]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:16 np0005546420.localdomain python3[39687]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:59:16 np0005546420.localdomain sudo[39685]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:16 np0005546420.localdomain sudo[39702]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yyiibyyqipbrzqnployhvpumcjqbrghj ; /usr/bin/python3
Dec 05 07:59:16 np0005546420.localdomain sudo[39702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:16 np0005546420.localdomain python3[39704]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 07:59:16 np0005546420.localdomain sudo[39702]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:19 np0005546420.localdomain sudo[39718]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yiapqkbqnhcjerkictqtzdlgbjocbjfo ; /usr/bin/python3
Dec 05 07:59:19 np0005546420.localdomain sudo[39718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:19 np0005546420.localdomain python3[39720]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:19 np0005546420.localdomain sudo[39718]: pam_unix(sudo:session): session closed for user root
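The ansible-replace task above deletes the TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS placeholder from /etc/default/grub (replace= is the empty string), i.e. no extra kernel arguments are being templated in for this role. The standalone equivalent is a one-shot regex substitution (Python sketch):

    import re

    path = "/etc/default/grub"
    with open(path) as fh:
        text = fh.read()
    # Drop the placeholder token; everything else in the file is preserved.
    with open(path, "w") as fh:
        fh.write(re.sub(r"TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS", "", text))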
Dec 05 07:59:23 np0005546420.localdomain sudo[39734]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjkztnsailfomhtdhtkevctjlyjtljej ; /usr/bin/python3
Dec 05 07:59:23 np0005546420.localdomain sudo[39734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:23 np0005546420.localdomain python3[39736]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:23 np0005546420.localdomain sudo[39734]: pam_unix(sudo:session): session closed for user root
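Note that the file modes in these module dumps are logged as decimal integers: mode=448 on /etc/puppet/hieradata is octal 0700, and the mode=384 used for hiera.yaml just below is octal 0600. Quick check (Python):

    print(oct(448), oct(384))   # -> 0o700 0o600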
Dec 05 07:59:23 np0005546420.localdomain sudo[39782]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wnxiirmijjehkohbnuxftkvmwxoiqiky ; /usr/bin/python3
Dec 05 07:59:23 np0005546420.localdomain sudo[39782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:24 np0005546420.localdomain python3[39784]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:24 np0005546420.localdomain sudo[39782]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:24 np0005546420.localdomain sudo[39827]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omkquexxjsnjokrlguaoykzpwktknjkv ; /usr/bin/python3
Dec 05 07:59:24 np0005546420.localdomain sudo[39827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:24 np0005546420.localdomain python3[39829]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921563.7136505-71271-245608654650304/source _original_basename=tmpw1elgw4l follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:24 np0005546420.localdomain sudo[39827]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:24 np0005546420.localdomain sudo[39857]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-krqthpmjpaibpbvshwnwkzpickjhpvqq ; /usr/bin/python3
Dec 05 07:59:24 np0005546420.localdomain sudo[39857]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:24 np0005546420.localdomain python3[39859]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:24 np0005546420.localdomain sudo[39857]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:25 np0005546420.localdomain sudo[39905]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bplmcrcrcjdjnqpiynoejrxxeempptlm ; /usr/bin/python3
Dec 05 07:59:25 np0005546420.localdomain sudo[39905]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:25 np0005546420.localdomain python3[39907]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:25 np0005546420.localdomain sudo[39905]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:25 np0005546420.localdomain sudo[39948]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oicrhtnwrpwualqqwisvsjvafhsqpwel ; /usr/bin/python3
Dec 05 07:59:25 np0005546420.localdomain sudo[39948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:25 np0005546420.localdomain python3[39950]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921565.3382049-71543-5570316610636/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=ba6566c0e663d54e816d0362a53167cb9d04e50e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:25 np0005546420.localdomain sudo[39948]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:26 np0005546420.localdomain sudo[40010]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-usjunpyopxryqlutuvokbsbwosulpsho ; /usr/bin/python3
Dec 05 07:59:26 np0005546420.localdomain sudo[40010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:26 np0005546420.localdomain python3[40012]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:26 np0005546420.localdomain sudo[40010]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:26 np0005546420.localdomain sudo[40053]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrkxxqarnrnkntvdlwmnkjxzojktoztr ; /usr/bin/python3
Dec 05 07:59:26 np0005546420.localdomain sudo[40053]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:26 np0005546420.localdomain python3[40055]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921566.1592054-71616-274517198829621/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=df55714b8da9bfb2aa67dcba305bac259217ffd4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:26 np0005546420.localdomain sudo[40053]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:27 np0005546420.localdomain sudo[40115]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxxrstimdgfgfvtjnxcwqixufyxweypz ; /usr/bin/python3
Dec 05 07:59:27 np0005546420.localdomain sudo[40115]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:27 np0005546420.localdomain python3[40117]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:27 np0005546420.localdomain sudo[40115]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:27 np0005546420.localdomain sudo[40158]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-axminkfyvlyjcngvonnadinslszrthkk ; /usr/bin/python3
Dec 05 07:59:27 np0005546420.localdomain sudo[40158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:27 np0005546420.localdomain python3[40160]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921567.025812-71616-103728269648042/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=4939522ff3b438a7e269b4a6e22ebcad88445c95 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:27 np0005546420.localdomain sudo[40158]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:28 np0005546420.localdomain sudo[40220]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xworjtqqnftrrdggkxaligqmwrmzubkb ; /usr/bin/python3
Dec 05 07:59:28 np0005546420.localdomain sudo[40220]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:28 np0005546420.localdomain python3[40222]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:28 np0005546420.localdomain sudo[40220]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:28 np0005546420.localdomain sudo[40263]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vavboryrmxqyyctspnlbvuwxznpnvgae ; /usr/bin/python3
Dec 05 07:59:28 np0005546420.localdomain sudo[40263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:28 np0005546420.localdomain python3[40265]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921567.9016187-71616-171111449956355/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=175c760950d63a47f443f25b58088dba962f090b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:28 np0005546420.localdomain sudo[40263]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:29 np0005546420.localdomain sudo[40325]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jaeddklwbyflbvjyzibvfdqbdstkkrte ; /usr/bin/python3
Dec 05 07:59:29 np0005546420.localdomain sudo[40325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:29 np0005546420.localdomain python3[40327]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:29 np0005546420.localdomain sudo[40325]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:29 np0005546420.localdomain sudo[40368]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sdcjvijzyufroyvbtjmbaeuzsujzrpjy ; /usr/bin/python3
Dec 05 07:59:29 np0005546420.localdomain sudo[40368]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:29 np0005546420.localdomain python3[40370]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921568.86321-71616-254819975966781/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:29 np0005546420.localdomain sudo[40368]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:29 np0005546420.localdomain sudo[40430]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ngnuyxkxjfjzdijmdxuvsvpajyeyvmoa ; /usr/bin/python3
Dec 05 07:59:29 np0005546420.localdomain sudo[40430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:30 np0005546420.localdomain python3[40432]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:30 np0005546420.localdomain sudo[40430]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:30 np0005546420.localdomain sudo[40473]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tyjjsvmmjfhvgwudueidnmbsnmfigihk ; /usr/bin/python3
Dec 05 07:59:30 np0005546420.localdomain sudo[40473]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:30 np0005546420.localdomain python3[40475]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921569.695857-71616-221322777211221/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=8c966e8486ec8459417f24c96346f975bc00c346 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:30 np0005546420.localdomain sudo[40473]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:30 np0005546420.localdomain sudo[40535]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oonowvaxgnoeqafzxcwkyagqzzkikral ; /usr/bin/python3
Dec 05 07:59:30 np0005546420.localdomain sudo[40535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:30 np0005546420.localdomain python3[40537]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:30 np0005546420.localdomain sudo[40535]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:31 np0005546420.localdomain sudo[40578]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jdmfnhrmoavwihdfikcftseozhoxijoy ; /usr/bin/python3
Dec 05 07:59:31 np0005546420.localdomain sudo[40578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:31 np0005546420.localdomain python3[40580]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921570.5523484-71616-82469950370491/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:31 np0005546420.localdomain sudo[40578]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:31 np0005546420.localdomain sudo[40640]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylunmmsztulzqssfmzgeodvvibyfcatj ; /usr/bin/python3
Dec 05 07:59:31 np0005546420.localdomain sudo[40640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:31 np0005546420.localdomain python3[40642]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:31 np0005546420.localdomain sudo[40640]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:31 np0005546420.localdomain sudo[40683]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tnvgxihrnundnvoriytlloqxpqkqgssi ; /usr/bin/python3
Dec 05 07:59:31 np0005546420.localdomain sudo[40683]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:32 np0005546420.localdomain python3[40685]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921571.40446-71616-228615444198947/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=85195c4ca9b4af9c68130dd7e72cbd842702429f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:32 np0005546420.localdomain sudo[40683]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:32 np0005546420.localdomain sudo[40745]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-daurzunjzkbgrqgkibkgwhkyzvnblmae ; /usr/bin/python3
Dec 05 07:59:32 np0005546420.localdomain sudo[40745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:32 np0005546420.localdomain python3[40747]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:32 np0005546420.localdomain sudo[40745]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:32 np0005546420.localdomain sudo[40788]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zffzukchuuzyvgrgpccaznphwaqxhkpy ; /usr/bin/python3
Dec 05 07:59:32 np0005546420.localdomain sudo[40788]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:32 np0005546420.localdomain python3[40790]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921572.2335281-71616-163158817184412/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:32 np0005546420.localdomain sudo[40788]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:33 np0005546420.localdomain sudo[40850]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yalgesxharbrsmjnocjjtuzyglihypnz ; /usr/bin/python3
Dec 05 07:59:33 np0005546420.localdomain sudo[40850]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:33 np0005546420.localdomain python3[40852]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:33 np0005546420.localdomain sudo[40850]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:33 np0005546420.localdomain sudo[40893]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkcehkehomweqnebyxdmjfxzsoqqshxb ; /usr/bin/python3
Dec 05 07:59:33 np0005546420.localdomain sudo[40893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:33 np0005546420.localdomain python3[40895]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921573.1018357-71616-83220181479838/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:33 np0005546420.localdomain sudo[40893]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:34 np0005546420.localdomain sudo[40955]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frocqwvmybxksipbkobodivilyinphxr ; /usr/bin/python3
Dec 05 07:59:34 np0005546420.localdomain sudo[40955]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:34 np0005546420.localdomain python3[40957]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:34 np0005546420.localdomain sudo[40955]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:34 np0005546420.localdomain sudo[40998]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ygoybkmouexqhouphrwovgqdsevckzgr ; /usr/bin/python3
Dec 05 07:59:34 np0005546420.localdomain sudo[40998]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:34 np0005546420.localdomain python3[41000]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921573.9429157-71616-122529153993846/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=fe7968a1b7d668ecc0104da8239c2fc3d481a384 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:34 np0005546420.localdomain sudo[40998]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:34 np0005546420.localdomain sudo[41028]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pcrebbswkzlydjigxnxmaywnwjlanspq ; /usr/bin/python3
Dec 05 07:59:34 np0005546420.localdomain sudo[41028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:35 np0005546420.localdomain python3[41030]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 07:59:35 np0005546420.localdomain sudo[41028]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:35 np0005546420.localdomain sudo[41076]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bikdzeyrilsofvhcfauzgqkevuszlmqh ; /usr/bin/python3
Dec 05 07:59:35 np0005546420.localdomain sudo[41076]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:35 np0005546420.localdomain python3[41078]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 07:59:35 np0005546420.localdomain sudo[41076]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:36 np0005546420.localdomain sudo[41119]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hlagdayxaieilihieqjkfygketsnkomh ; /usr/bin/python3
Dec 05 07:59:36 np0005546420.localdomain sudo[41119]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:36 np0005546420.localdomain python3[41121]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921575.5611503-72175-214400824795759/source _original_basename=tmpjog_rhm6 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 07:59:36 np0005546420.localdomain sudo[41119]: pam_unix(sudo:session): session closed for user root
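The run then lays down the Puppet hieradata set (all_nodes, bootstrap_node, vip_data, net_ip_map, cloud_domain, fqdn, service_names, service_configs, extraconfig, role_extraconfig, ovn_chassis_mac_map, ansible_managed) with the same two-step pattern each time: stat the remote path, then ship the rendered template only when the sha1 differs; the payloads themselves are withheld from the journal (content=NOT_LOGGING_PARAMETER). The essence of that checksum-gated copy (Python sketch; helper names are illustrative):

    import hashlib, os, shutil

    def sha1(path):
        h = hashlib.sha1()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(src, dest):
        if not os.path.exists(dest) or sha1(src) != sha1(dest):
            shutil.copy2(src, dest)
            return True    # changed
        return False       # already converged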
Dec 05 07:59:38 np0005546420.localdomain sudo[41136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 07:59:38 np0005546420.localdomain sudo[41136]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:59:38 np0005546420.localdomain sudo[41136]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:38 np0005546420.localdomain sudo[41151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 07:59:38 np0005546420.localdomain sudo[41151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:59:39 np0005546420.localdomain sudo[41151]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:40 np0005546420.localdomain sudo[41198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 07:59:40 np0005546420.localdomain sudo[41198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 07:59:40 np0005546420.localdomain sudo[41198]: pam_unix(sudo:session): session closed for user root
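Interleaved with the tripleo-admin play, the ceph-admin user (uid 1002) drives cephadm from its fsid-named directory under /var/lib/ceph/, running gather-facts with an 895-second timeout to inventory the host. Assuming gather-facts prints a JSON document of host facts (as the subcommand ordinarily does), a standalone peek could look like this (hypothetical invocation, relying on cephadm being on PATH):

    import json, subprocess

    out = subprocess.run(["cephadm", "gather-facts"],
                         capture_output=True, text=True, check=True)
    facts = json.loads(out.stdout)
    print(sorted(facts)[:5])   # a few of the reported fact keys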
Dec 05 07:59:40 np0005546420.localdomain sudo[41226]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jmdxjyclalbrtxwrfridrwjulwqlbloa ; /usr/bin/python3
Dec 05 07:59:40 np0005546420.localdomain sudo[41226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:41 np0005546420.localdomain python3[41228]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 07:59:41 np0005546420.localdomain sudo[41226]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:41 np0005546420.localdomain sudo[41287]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mcuksibzizrqywglcbcmwijbgxfmtkdh ; /usr/bin/python3
Dec 05 07:59:41 np0005546420.localdomain sudo[41287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:41 np0005546420.localdomain python3[41289]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:59:45 np0005546420.localdomain sudo[41287]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:45 np0005546420.localdomain sudo[41304]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-soupvjlhjqmzgqvogjsrncnkasiwczta ; /usr/bin/python3
Dec 05 07:59:45 np0005546420.localdomain sudo[41304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:46 np0005546420.localdomain python3[41306]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:59:50 np0005546420.localdomain sudo[41304]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:51 np0005546420.localdomain sudo[41321]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cdokpacewunzfdbqjnrpporksrixlmsv ; /usr/bin/python3
Dec 05 07:59:51 np0005546420.localdomain sudo[41321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:51 np0005546420.localdomain python3[41323]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:59:51 np0005546420.localdomain sudo[41321]: pam_unix(sudo:session): session closed for user root
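The network validation that follows repeats one recipe per overcloud network: resolve which interface routes to the target address, read that interface's MTU from sysfs, then ping. The shell one-liner above (ip ro get piped through sed, then /sys/class/net/$INT/mtu) maps to roughly this (Python sketch; the function name is illustrative):

    import re, subprocess

    def iface_and_mtu(dest_ip):
        route = subprocess.run(["ip", "route", "get", dest_ip],
                               capture_output=True, text=True,
                               check=True).stdout.splitlines()[0]
        match = re.search(r"\bdev (\w+)", route)
        if not match:
            return None, 0
        dev = match.group(1)
        try:
            with open(f"/sys/class/net/{dev}/mtu") as fh:
                return dev, int(fh.read())
        except OSError:
            return dev, 0      # mirrors the `|| echo "0"` fallback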
Dec 05 07:59:51 np0005546420.localdomain sudo[41344]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swqnodxgonuewuscucesfowvvbzheguk ; /usr/bin/python3
Dec 05 07:59:51 np0005546420.localdomain sudo[41344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:51 np0005546420.localdomain python3[41346]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:59:52 np0005546420.localdomain systemd[36358]: Starting Mark boot as successful...
Dec 05 07:59:52 np0005546420.localdomain systemd[36358]: Finished Mark boot as successful.
Dec 05 07:59:55 np0005546420.localdomain sudo[41344]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:56 np0005546420.localdomain sudo[41362]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vezezxtonirbjnkvzeoncnrxgomnbofx ; /usr/bin/python3
Dec 05 07:59:56 np0005546420.localdomain sudo[41362]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:56 np0005546420.localdomain python3[41364]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 07:59:56 np0005546420.localdomain sudo[41362]: pam_unix(sudo:session): session closed for user root
Dec 05 07:59:56 np0005546420.localdomain sudo[41385]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhpybmgqileayhzkbqdmsutnyouhvpbt ; /usr/bin/python3
Dec 05 07:59:56 np0005546420.localdomain sudo[41385]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 07:59:56 np0005546420.localdomain python3[41387]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:00 np0005546420.localdomain sudo[41385]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:01 np0005546420.localdomain sudo[41402]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-reppfeyeaepzuqzagivuixafdbtjgvlx ; /usr/bin/python3
Dec 05 08:00:01 np0005546420.localdomain sudo[41402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:01 np0005546420.localdomain python3[41404]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:05 np0005546420.localdomain sudo[41402]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:05 np0005546420.localdomain sudo[41419]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-knjnbeawkjpcelhyfkplxtrqrgiccwps ; /usr/bin/python3
Dec 05 08:00:05 np0005546420.localdomain sudo[41419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:05 np0005546420.localdomain python3[41421]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:05 np0005546420.localdomain sudo[41419]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:05 np0005546420.localdomain sudo[41442]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjadzcfvjuuhvdodwysqjmupqzdyzlxi ; /usr/bin/python3
Dec 05 08:00:05 np0005546420.localdomain sudo[41442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:06 np0005546420.localdomain python3[41444]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:10 np0005546420.localdomain sudo[41442]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:10 np0005546420.localdomain sudo[41459]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rruhsvthqxydyfyqkbuwbluxmxhdefme ; /usr/bin/python3
Dec 05 08:00:10 np0005546420.localdomain sudo[41459]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:10 np0005546420.localdomain python3[41461]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:14 np0005546420.localdomain sudo[41459]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:14 np0005546420.localdomain sudo[41476]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-alekmmoupgzveqbwmncpwufvmffpsvbn ; /usr/bin/python3
Dec 05 08:00:14 np0005546420.localdomain sudo[41476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:15 np0005546420.localdomain python3[41478]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:15 np0005546420.localdomain sudo[41476]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:15 np0005546420.localdomain sudo[41499]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-agfblswoliosflkxoaabcngpxqmewodo ; /usr/bin/python3
Dec 05 08:00:15 np0005546420.localdomain sudo[41499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:15 np0005546420.localdomain python3[41501]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:19 np0005546420.localdomain sudo[41499]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:19 np0005546420.localdomain sudo[41516]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljzxhnmsdxizudjmavapzwqvhtpgirsq ; /usr/bin/python3
Dec 05 08:00:19 np0005546420.localdomain sudo[41516]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:19 np0005546420.localdomain python3[41518]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:24 np0005546420.localdomain sudo[41516]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:24 np0005546420.localdomain sudo[41533]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gflwifooqjmoizaajwiupucshorvwipy ; /usr/bin/python3
Dec 05 08:00:24 np0005546420.localdomain sudo[41533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:24 np0005546420.localdomain python3[41535]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:24 np0005546420.localdomain sudo[41533]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:24 np0005546420.localdomain sudo[41556]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mrldikhrgsstnievoqkiyfinpnutvdwd ; /usr/bin/python3
Dec 05 08:00:24 np0005546420.localdomain sudo[41556]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:24 np0005546420.localdomain python3[41558]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:29 np0005546420.localdomain sudo[41556]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:29 np0005546420.localdomain sudo[41573]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xymtopffujmgrwuufqbcbrlzzijuxkdb ; /usr/bin/python3
Dec 05 08:00:29 np0005546420.localdomain sudo[41573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:29 np0005546420.localdomain python3[41575]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:33 np0005546420.localdomain sudo[41573]: pam_unix(sudo:session): session closed for user root
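Each network is pinged twice: once with the default payload for plain reachability, then with -s 1472, which together with the 8-byte ICMP header and 20-byte IPv4 header fills a 1500-byte frame exactly; a pass confirms the path carries full-MTU packets. The size arithmetic, parameterized (Python sketch):

    import subprocess

    def ping_full_mtu(dest, mtu=1500, count=5, deadline=10):
        payload = mtu - 20 - 8   # IPv4 header (20) + ICMP header (8) -> 1472
        return subprocess.run(["ping", "-w", str(deadline),
                               "-s", str(payload), "-c", str(count), dest]
                              ).returncode == 0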
Dec 05 08:00:34 np0005546420.localdomain sudo[41590]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfmlunpqfnfrlouaousnwwsrelarhger ; /usr/bin/python3
Dec 05 08:00:34 np0005546420.localdomain sudo[41590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:34 np0005546420.localdomain python3[41592]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:34 np0005546420.localdomain sudo[41590]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:34 np0005546420.localdomain sudo[41638]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdbdtnfuvdzcoffkodlzvqfdegzhvioz ; /usr/bin/python3
Dec 05 08:00:34 np0005546420.localdomain sudo[41638]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:35 np0005546420.localdomain python3[41640]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:35 np0005546420.localdomain sudo[41638]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:35 np0005546420.localdomain sudo[41656]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iapoelsgaqqzphutstqihztmvvajdztt ; /usr/bin/python3
Dec 05 08:00:35 np0005546420.localdomain sudo[41656]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:35 np0005546420.localdomain python3[41658]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmp9vqdxfeo recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:35 np0005546420.localdomain sudo[41656]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:35 np0005546420.localdomain sudo[41686]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxwduqcuslqsokivkbseabngsyhgwngj ; /usr/bin/python3
Dec 05 08:00:35 np0005546420.localdomain sudo[41686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:35 np0005546420.localdomain python3[41688]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:35 np0005546420.localdomain sudo[41686]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:36 np0005546420.localdomain sudo[41734]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbndkcqylxficgczxtxgsrdjzngivqkw ; /usr/bin/python3
Dec 05 08:00:36 np0005546420.localdomain sudo[41734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:36 np0005546420.localdomain python3[41736]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:36 np0005546420.localdomain sudo[41734]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:36 np0005546420.localdomain sudo[41752]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sbordqctnlxuktwvtugahypxttxfjnub ; /usr/bin/python3
Dec 05 08:00:36 np0005546420.localdomain sudo[41752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:36 np0005546420.localdomain python3[41754]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:36 np0005546420.localdomain sudo[41752]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:37 np0005546420.localdomain sudo[41814]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikdmxfyouaccnapocumidcurduxqostf ; /usr/bin/python3
Dec 05 08:00:37 np0005546420.localdomain sudo[41814]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:37 np0005546420.localdomain python3[41816]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:37 np0005546420.localdomain sudo[41814]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:37 np0005546420.localdomain sudo[41832]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wypjdxcwzmygxpsfszhdojeaxjebpvie ; /usr/bin/python3
Dec 05 08:00:37 np0005546420.localdomain sudo[41832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:37 np0005546420.localdomain python3[41834]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:37 np0005546420.localdomain sudo[41832]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:37 np0005546420.localdomain sudo[41894]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sghimunklbeavouritdxcmftjjcqgwfa ; /usr/bin/python3
Dec 05 08:00:37 np0005546420.localdomain sudo[41894]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:38 np0005546420.localdomain python3[41896]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:38 np0005546420.localdomain sudo[41894]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:38 np0005546420.localdomain sudo[41912]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vdndcjhmfguxchxlixaktnqkbrvodadz ; /usr/bin/python3
Dec 05 08:00:38 np0005546420.localdomain sudo[41912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:38 np0005546420.localdomain python3[41914]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:38 np0005546420.localdomain sudo[41912]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:38 np0005546420.localdomain sudo[41974]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utecxsjlmnntfrsmkzopufewighyjpux ; /usr/bin/python3
Dec 05 08:00:38 np0005546420.localdomain sudo[41974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:38 np0005546420.localdomain python3[41976]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:38 np0005546420.localdomain sudo[41974]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:39 np0005546420.localdomain sudo[41992]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vxqlhtrqrwwdoxtpctagusdqsivkdjig ; /usr/bin/python3
Dec 05 08:00:39 np0005546420.localdomain sudo[41992]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:39 np0005546420.localdomain python3[41994]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:39 np0005546420.localdomain sudo[41992]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:39 np0005546420.localdomain sudo[42054]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nnlhjspnvzqtnbvbjxbegyjbpbesloxu ; /usr/bin/python3
Dec 05 08:00:39 np0005546420.localdomain sudo[42054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:39 np0005546420.localdomain python3[42056]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:39 np0005546420.localdomain sudo[42054]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:39 np0005546420.localdomain sudo[42072]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltcowxsyiooribluttnjbjkmioqqqvqj ; /usr/bin/python3
Dec 05 08:00:39 np0005546420.localdomain sudo[42072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:39 np0005546420.localdomain python3[42074]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:39 np0005546420.localdomain sudo[42072]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:40 np0005546420.localdomain sudo[42134]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ukafcumxplxxysesoltdjunhdhtfjyai ; /usr/bin/python3
Dec 05 08:00:40 np0005546420.localdomain sudo[42134]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:40 np0005546420.localdomain sudo[42135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:00:40 np0005546420.localdomain sudo[42135]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:00:40 np0005546420.localdomain sudo[42135]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:40 np0005546420.localdomain sudo[42152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:00:40 np0005546420.localdomain sudo[42152]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:00:40 np0005546420.localdomain python3[42149]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:40 np0005546420.localdomain sudo[42134]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:40 np0005546420.localdomain sudo[42182]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmewwukzbcnoanlhklhgjqshqrlhokro ; /usr/bin/python3
Dec 05 08:00:40 np0005546420.localdomain sudo[42182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:40 np0005546420.localdomain python3[42184]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:40 np0005546420.localdomain sudo[42182]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:41 np0005546420.localdomain sudo[42152]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:41 np0005546420.localdomain sudo[42277]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dsmrsvvionisxrarhrkzfdwoljjlzhnr ; /usr/bin/python3
Dec 05 08:00:41 np0005546420.localdomain sudo[42277]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:41 np0005546420.localdomain python3[42279]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:41 np0005546420.localdomain sudo[42277]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:41 np0005546420.localdomain sudo[42295]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dflqclamhfooridoxznpfhangrkghhdu ; /usr/bin/python3
Dec 05 08:00:41 np0005546420.localdomain sudo[42295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:41 np0005546420.localdomain python3[42297]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:41 np0005546420.localdomain sudo[42295]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:41 np0005546420.localdomain sudo[42357]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmxgoxaxiqabxhdlxerugubmquivpkmi ; /usr/bin/python3
Dec 05 08:00:41 np0005546420.localdomain sudo[42357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:41 np0005546420.localdomain python3[42359]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:41 np0005546420.localdomain sudo[42357]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:42 np0005546420.localdomain sudo[42375]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-txfxdqubmyifpuyfhfdvtknnbnwcrrwg ; /usr/bin/python3
Dec 05 08:00:42 np0005546420.localdomain sudo[42375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:42 np0005546420.localdomain python3[42377]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:42 np0005546420.localdomain sudo[42375]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:42 np0005546420.localdomain sudo[42437]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubxxnkmvgmixyvjkvmjrmudleazlmorg ; /usr/bin/python3
Dec 05 08:00:42 np0005546420.localdomain sudo[42437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:42 np0005546420.localdomain python3[42439]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:42 np0005546420.localdomain sudo[42437]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:42 np0005546420.localdomain sudo[42455]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fmjhtikcwexqpyoptxiejovfmjvhwqhx ; /usr/bin/python3
Dec 05 08:00:42 np0005546420.localdomain sudo[42455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:42 np0005546420.localdomain python3[42457]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:42 np0005546420.localdomain sudo[42455]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:43 np0005546420.localdomain sudo[42517]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dexyhwhwrjpmpxmviqpsdgptymvpoczo ; /usr/bin/python3
Dec 05 08:00:43 np0005546420.localdomain sudo[42517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:43 np0005546420.localdomain python3[42519]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:43 np0005546420.localdomain sudo[42520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:00:43 np0005546420.localdomain sudo[42520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:00:43 np0005546420.localdomain sudo[42520]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:43 np0005546420.localdomain sudo[42517]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:43 np0005546420.localdomain sudo[42550]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqzalwrywblaoihmpnmzcwgayjkgmdlu ; /usr/bin/python3
Dec 05 08:00:43 np0005546420.localdomain sudo[42550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:43 np0005546420.localdomain python3[42552]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:43 np0005546420.localdomain sudo[42550]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:43 np0005546420.localdomain sudo[42612]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-adympcaarbmsfmpkjomsejjyfmuaqayw ; /usr/bin/python3
Dec 05 08:00:43 np0005546420.localdomain sudo[42612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:44 np0005546420.localdomain python3[42614]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:44 np0005546420.localdomain sudo[42612]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:44 np0005546420.localdomain sudo[42630]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpfwbmxlgqkzgaarkybyzksnanzdbuom ; /usr/bin/python3
Dec 05 08:00:44 np0005546420.localdomain sudo[42630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:44 np0005546420.localdomain python3[42632]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:44 np0005546420.localdomain sudo[42630]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:44 np0005546420.localdomain sudo[42660]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkdggumuasjwvxnnnbddyjflsjexybzt ; /usr/bin/python3
Dec 05 08:00:44 np0005546420.localdomain sudo[42660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:44 np0005546420.localdomain python3[42662]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:00:44 np0005546420.localdomain sudo[42660]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:45 np0005546420.localdomain sudo[42708]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aitsogojrwqyincozsfaoeyuydelgwyt ; /usr/bin/python3
Dec 05 08:00:45 np0005546420.localdomain sudo[42708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:45 np0005546420.localdomain python3[42710]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:45 np0005546420.localdomain sudo[42708]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:45 np0005546420.localdomain sudo[42726]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-txjrzmktskagvqzoarvxcotidrqzofax ; /usr/bin/python3
Dec 05 08:00:45 np0005546420.localdomain sudo[42726]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:45 np0005546420.localdomain python3[42728]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmphxqjdnie recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:45 np0005546420.localdomain sudo[42726]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:48 np0005546420.localdomain sudo[42756]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmcgyqpgqinhhtmugftckjeqdqqwdkob ; /usr/bin/python3
Dec 05 08:00:48 np0005546420.localdomain sudo[42756]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:48 np0005546420.localdomain python3[42758]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 08:00:51 np0005546420.localdomain sudo[42756]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:52 np0005546420.localdomain sudo[42773]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cnoiwbbhtishvwbyxmcboinbwdczxbpi ; /usr/bin/python3
Dec 05 08:00:52 np0005546420.localdomain sudo[42773]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:53 np0005546420.localdomain python3[42775]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:00:53 np0005546420.localdomain sudo[42773]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:53 np0005546420.localdomain sudo[42791]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ouoqwmtuqvtiqkzdedlbvmzimqyovovy ; /usr/bin/python3
Dec 05 08:00:53 np0005546420.localdomain sudo[42791]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:53 np0005546420.localdomain python3[42793]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:00:53 np0005546420.localdomain sudo[42791]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:53 np0005546420.localdomain sudo[42809]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aejuubnrzcaxmnfrnkikyookfjromfub ; /usr/bin/python3
Dec 05 08:00:53 np0005546420.localdomain sudo[42809]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:54 np0005546420.localdomain python3[42811]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:00:54 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:00:54 np0005546420.localdomain systemd-rc-local-generator[42841]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:00:54 np0005546420.localdomain systemd-sysv-generator[42844]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:00:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:00:54 np0005546420.localdomain systemd[1]: Starting Netfilter Tables...
Dec 05 08:00:54 np0005546420.localdomain systemd[1]: Finished Netfilter Tables.
Dec 05 08:00:54 np0005546420.localdomain sudo[42809]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:54 np0005546420.localdomain sudo[42899]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iifhmqgonvnityuvnwzplpninhgfoiqk ; /usr/bin/python3
Dec 05 08:00:54 np0005546420.localdomain sudo[42899]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:55 np0005546420.localdomain python3[42901]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:55 np0005546420.localdomain sudo[42899]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:55 np0005546420.localdomain sudo[42942]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-emgohtkhqyitgeacnklzwglpiedotvez ; /usr/bin/python3
Dec 05 08:00:55 np0005546420.localdomain sudo[42942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:55 np0005546420.localdomain python3[42944]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921654.836381-74886-174923592744422/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:55 np0005546420.localdomain sudo[42942]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:55 np0005546420.localdomain sudo[42972]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-imabdslhyibhaijwwygdnpbropktprbz ; /usr/bin/python3
Dec 05 08:00:55 np0005546420.localdomain sudo[42972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:56 np0005546420.localdomain python3[42974]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:56 np0005546420.localdomain sudo[42972]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:56 np0005546420.localdomain sudo[42990]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvqlgvqaabdmphtmwxdoslmyamwmyqtm ; /usr/bin/python3
Dec 05 08:00:56 np0005546420.localdomain sudo[42990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:56 np0005546420.localdomain python3[42992]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:00:56 np0005546420.localdomain sudo[42990]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:56 np0005546420.localdomain sudo[43039]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqcmnkahwgphlrhgmmsphyjqdaklzmvd ; /usr/bin/python3
Dec 05 08:00:56 np0005546420.localdomain sudo[43039]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:57 np0005546420.localdomain python3[43041]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:57 np0005546420.localdomain sudo[43039]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:57 np0005546420.localdomain sudo[43082]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vdcyjwegepzyamcsikxmrjjidbnsufaw ; /usr/bin/python3
Dec 05 08:00:57 np0005546420.localdomain sudo[43082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:57 np0005546420.localdomain python3[43084]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921656.7052903-75106-69713337859992/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:57 np0005546420.localdomain sudo[43082]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:57 np0005546420.localdomain sudo[43144]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fuqabwdayjjtmjbbvwpgskvspqsarsve ; /usr/bin/python3
Dec 05 08:00:57 np0005546420.localdomain sudo[43144]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:58 np0005546420.localdomain python3[43146]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:58 np0005546420.localdomain sudo[43144]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:58 np0005546420.localdomain sudo[43187]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rcmccdqahzwmjwmsgkozydhvedjbaqcg ; /usr/bin/python3
Dec 05 08:00:58 np0005546420.localdomain sudo[43187]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:58 np0005546420.localdomain python3[43189]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921657.6391904-75170-5703880743839/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:58 np0005546420.localdomain sudo[43187]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:58 np0005546420.localdomain sudo[43249]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hegaaiebmejtdqzlbbdmyklbrrwjtney ; /usr/bin/python3
Dec 05 08:00:58 np0005546420.localdomain sudo[43249]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:58 np0005546420.localdomain python3[43251]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:58 np0005546420.localdomain sudo[43249]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:59 np0005546420.localdomain sudo[43292]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wffnekomspfvknuhqmuwvpdjibvxphze ; /usr/bin/python3
Dec 05 08:00:59 np0005546420.localdomain sudo[43292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:59 np0005546420.localdomain python3[43294]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921658.5729346-75427-209479240160273/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:00:59 np0005546420.localdomain sudo[43292]: pam_unix(sudo:session): session closed for user root
Dec 05 08:00:59 np0005546420.localdomain sudo[43354]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wnfjwpdmkincbolswlotcelkyeypyyvl ; /usr/bin/python3
Dec 05 08:00:59 np0005546420.localdomain sudo[43354]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:00:59 np0005546420.localdomain python3[43356]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:00:59 np0005546420.localdomain sudo[43354]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:00 np0005546420.localdomain sudo[43397]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cubvwrtczyvwuouikwdmydefwhyhrcik ; /usr/bin/python3
Dec 05 08:01:00 np0005546420.localdomain sudo[43397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:00 np0005546420.localdomain python3[43399]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921659.5284677-75490-38412339423441/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:00 np0005546420.localdomain sudo[43397]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:01 np0005546420.localdomain sudo[43459]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymelaeecgarsviesidvaacqnhhaavdta ; /usr/bin/python3
Dec 05 08:01:01 np0005546420.localdomain sudo[43459]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:01 np0005546420.localdomain python3[43461]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:01:01 np0005546420.localdomain sudo[43459]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:01 np0005546420.localdomain sudo[43502]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpbbanssxmgssnvhpgsrjmmykgshdkpm ; /usr/bin/python3
Dec 05 08:01:01 np0005546420.localdomain sudo[43502]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:01 np0005546420.localdomain python3[43504]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921660.4420574-75536-1412882774403/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:01 np0005546420.localdomain sudo[43502]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:01 np0005546420.localdomain sudo[43532]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqlcucopzngvrqaerizuhekyrmdybryb ; /usr/bin/python3
Dec 05 08:01:01 np0005546420.localdomain sudo[43532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:01 np0005546420.localdomain CROND[43536]: (root) CMD (run-parts /etc/cron.hourly)
Dec 05 08:01:01 np0005546420.localdomain run-parts[43539]: (/etc/cron.hourly) starting 0anacron
Dec 05 08:01:01 np0005546420.localdomain run-parts[43545]: (/etc/cron.hourly) finished 0anacron
Dec 05 08:01:01 np0005546420.localdomain CROND[43535]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 05 08:01:01 np0005546420.localdomain python3[43534]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:01:02 np0005546420.localdomain sudo[43532]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:02 np0005546420.localdomain sudo[43608]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sheksznubupbouxptuoqovvavvzwmlbf ; /usr/bin/python3
Dec 05 08:01:02 np0005546420.localdomain sudo[43608]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:02 np0005546420.localdomain python3[43610]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"
                                                         include "/etc/nftables/tripleo-chains.nft"
                                                         include "/etc/nftables/tripleo-rules.nft"
                                                         include "/etc/nftables/tripleo-jumps.nft"
                                                          state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:02 np0005546420.localdomain sudo[43608]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:02 np0005546420.localdomain sudo[43625]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rwrzaognqgahehezfapcswyouahugwyk ; /usr/bin/python3
Dec 05 08:01:02 np0005546420.localdomain sudo[43625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:02 np0005546420.localdomain python3[43627]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:01:02 np0005546420.localdomain sudo[43625]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:03 np0005546420.localdomain sudo[43642]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-emoylvdrcmbzilacagopyhiiqdhjilgn ; /usr/bin/python3
Dec 05 08:01:03 np0005546420.localdomain sudo[43642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:03 np0005546420.localdomain python3[43644]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:01:03 np0005546420.localdomain sudo[43642]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:03 np0005546420.localdomain sudo[43661]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxbfmlbfavxojilwqybnbirffgcqzrtm ; /usr/bin/python3
Dec 05 08:01:03 np0005546420.localdomain sudo[43661]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:03 np0005546420.localdomain python3[43663]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:03 np0005546420.localdomain sudo[43661]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:03 np0005546420.localdomain sudo[43677]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltnsxgbejpqhsrxvfxsazpeehpeinazu ; /usr/bin/python3
Dec 05 08:01:03 np0005546420.localdomain sudo[43677]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:03 np0005546420.localdomain python3[43679]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:03 np0005546420.localdomain sudo[43677]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:04 np0005546420.localdomain sudo[43693]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wplhvgkiodzfizrpwtdvlffhqspkisxd ; /usr/bin/python3
Dec 05 08:01:04 np0005546420.localdomain sudo[43693]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:04 np0005546420.localdomain python3[43695]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:04 np0005546420.localdomain sudo[43693]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:04 np0005546420.localdomain sudo[43709]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iwvxnzswsvdelgjowxyzksxavvykdmea ; /usr/bin/python3
Dec 05 08:01:04 np0005546420.localdomain sudo[43709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:04 np0005546420.localdomain python3[43711]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 05 08:01:05 np0005546420.localdomain sudo[43709]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:05 np0005546420.localdomain sudo[43729]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfbhpausmsztvkjdfmuvmqwmgpaaekof ; /usr/bin/python3
Dec 05 08:01:05 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Dec 05 08:01:05 np0005546420.localdomain sudo[43729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:06 np0005546420.localdomain python3[43731]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 05 08:01:06 np0005546420.localdomain kernel: SELinux:  Converting 2704 SID table entries...
Dec 05 08:01:06 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 08:01:06 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 08:01:06 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 08:01:06 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 08:01:06 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 08:01:06 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 08:01:06 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 08:01:06 np0005546420.localdomain sudo[43729]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:07 np0005546420.localdomain sudo[43751]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hokikuhvcfeyeuosbiezogqbtwclljpp ; /usr/bin/python3
Dec 05 08:01:07 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 05 08:01:07 np0005546420.localdomain sudo[43751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:07 np0005546420.localdomain python3[43753]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 05 08:01:08 np0005546420.localdomain kernel: SELinux:  Converting 2704 SID table entries...
Dec 05 08:01:08 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 08:01:08 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 08:01:08 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 08:01:08 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 08:01:08 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 08:01:08 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 08:01:08 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 08:01:08 np0005546420.localdomain sudo[43751]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:08 np0005546420.localdomain sudo[43772]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezhuruxmnjjzguppihofurpkvilswfen ; /usr/bin/python3
Dec 05 08:01:08 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 05 08:01:08 np0005546420.localdomain sudo[43772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:08 np0005546420.localdomain python3[43774]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 05 08:01:09 np0005546420.localdomain kernel: SELinux:  Converting 2704 SID table entries...
Dec 05 08:01:09 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 08:01:09 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 08:01:09 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 08:01:09 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 08:01:09 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 08:01:09 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 08:01:09 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 08:01:09 np0005546420.localdomain sudo[43772]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:09 np0005546420.localdomain sudo[43793]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ujgmdebmtycvsaxkvwoaxenmrplsshyc ; /usr/bin/python3
Dec 05 08:01:09 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 05 08:01:09 np0005546420.localdomain sudo[43793]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:09 np0005546420.localdomain python3[43795]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:09 np0005546420.localdomain sudo[43793]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:10 np0005546420.localdomain sudo[43809]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qleqvpkpxrpfhibzswzfqeckpatfjurp ; /usr/bin/python3
Dec 05 08:01:10 np0005546420.localdomain sudo[43809]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:10 np0005546420.localdomain python3[43811]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:10 np0005546420.localdomain sudo[43809]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:10 np0005546420.localdomain sudo[43825]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jskwimafndzjasgcligfaxrxrmcgvubc ; /usr/bin/python3
Dec 05 08:01:10 np0005546420.localdomain sudo[43825]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:10 np0005546420.localdomain python3[43827]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:10 np0005546420.localdomain sudo[43825]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:10 np0005546420.localdomain sudo[43841]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwnpykpklqidqiaoxihbaqdcfvixseca ; /usr/bin/python3
Dec 05 08:01:10 np0005546420.localdomain sudo[43841]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:10 np0005546420.localdomain python3[43843]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:01:10 np0005546420.localdomain sudo[43841]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:10 np0005546420.localdomain sudo[43857]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-olbxynkkykzpejvdwfvkzkyxmixolwut ; /usr/bin/python3
Dec 05 08:01:11 np0005546420.localdomain sudo[43857]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:11 np0005546420.localdomain python3[43859]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:01:11 np0005546420.localdomain sudo[43857]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:11 np0005546420.localdomain sudo[43874]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jqvlqrnfcmubbalcpqzbyrnpnwskrjjx ; /usr/bin/python3
Dec 05 08:01:11 np0005546420.localdomain sudo[43874]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:11 np0005546420.localdomain python3[43876]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 08:01:15 np0005546420.localdomain sudo[43874]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:15 np0005546420.localdomain sudo[43891]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wzwbixxysiupvuqvxstwsjirnchvwjlx ; /usr/bin/python3
Dec 05 08:01:15 np0005546420.localdomain sudo[43891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:15 np0005546420.localdomain python3[43893]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:15 np0005546420.localdomain sudo[43891]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:15 np0005546420.localdomain sudo[43939]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jifiihezyjhxqhhwxdijiiypqvqbfckp ; /usr/bin/python3
Dec 05 08:01:15 np0005546420.localdomain sudo[43939]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:16 np0005546420.localdomain python3[43941]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:01:16 np0005546420.localdomain sudo[43939]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:16 np0005546420.localdomain sudo[43982]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kfuwheaapoytqwrujsxwsbcyqyructnh ; /usr/bin/python3
Dec 05 08:01:16 np0005546420.localdomain sudo[43982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:16 np0005546420.localdomain python3[43984]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921675.7428908-76340-69719241411121/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:16 np0005546420.localdomain sudo[43982]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:16 np0005546420.localdomain sudo[44012]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hlnlavtirjbzcmiaewbojmjpuygbiqfh ; /usr/bin/python3
Dec 05 08:01:16 np0005546420.localdomain sudo[44012]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:16 np0005546420.localdomain python3[44014]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 08:01:16 np0005546420.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 05 08:01:16 np0005546420.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 05 08:01:16 np0005546420.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 05 08:01:17 np0005546420.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 05 08:01:17 np0005546420.localdomain kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 05 08:01:17 np0005546420.localdomain kernel: Bridge firewalling registered
Dec 05 08:01:17 np0005546420.localdomain systemd-modules-load[44017]: Inserted module 'br_netfilter'
Dec 05 08:01:17 np0005546420.localdomain systemd-modules-load[44017]: Module 'msr' is built in
Dec 05 08:01:17 np0005546420.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 05 08:01:17 np0005546420.localdomain sudo[44012]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:17 np0005546420.localdomain sudo[44066]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hwvytbjdsgppcvrteedanxutxzhvgaef ; /usr/bin/python3
Dec 05 08:01:17 np0005546420.localdomain sudo[44066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:17 np0005546420.localdomain python3[44068]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:01:17 np0005546420.localdomain sudo[44066]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:17 np0005546420.localdomain sudo[44109]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twoniletqfbudtsycelastctpwmjcdsd ; /usr/bin/python3
Dec 05 08:01:17 np0005546420.localdomain sudo[44109]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:17 np0005546420.localdomain python3[44111]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921677.2211185-76430-83016407880190/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:17 np0005546420.localdomain sudo[44109]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:18 np0005546420.localdomain sudo[44139]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jspobifvppqzqmhoypbiacpwkmkybzzj ; /usr/bin/python3
Dec 05 08:01:18 np0005546420.localdomain sudo[44139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:18 np0005546420.localdomain python3[44141]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:18 np0005546420.localdomain sudo[44139]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:18 np0005546420.localdomain sudo[44156]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rnmfyujvythadhhgjsewzspzejahpcyx ; /usr/bin/python3
Dec 05 08:01:18 np0005546420.localdomain sudo[44156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:18 np0005546420.localdomain python3[44158]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:18 np0005546420.localdomain sudo[44156]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:18 np0005546420.localdomain sudo[44174]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whnkeghxxdrymovnheuxsdyksagvvzxp ; /usr/bin/python3
Dec 05 08:01:18 np0005546420.localdomain sudo[44174]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:18 np0005546420.localdomain python3[44176]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:18 np0005546420.localdomain sudo[44174]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:19 np0005546420.localdomain sudo[44192]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhjadjhhuvjejcfmnyatvlshvkicozuz ; /usr/bin/python3
Dec 05 08:01:19 np0005546420.localdomain sudo[44192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:19 np0005546420.localdomain python3[44194]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:19 np0005546420.localdomain sudo[44192]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:19 np0005546420.localdomain sudo[44209]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-beyxdpgtwmvifodusffpphhbyxmsjacw ; /usr/bin/python3
Dec 05 08:01:19 np0005546420.localdomain sudo[44209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:19 np0005546420.localdomain python3[44211]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:19 np0005546420.localdomain sudo[44209]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:19 np0005546420.localdomain sudo[44226]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dtqfopqclbdnfjgibfjmisugdpfmzpmb ; /usr/bin/python3
Dec 05 08:01:19 np0005546420.localdomain sudo[44226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:19 np0005546420.localdomain python3[44228]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:19 np0005546420.localdomain sudo[44226]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:20 np0005546420.localdomain sudo[44243]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scjngrvybterlejnqexxtqbbgogcabqm ; /usr/bin/python3
Dec 05 08:01:20 np0005546420.localdomain sudo[44243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:20 np0005546420.localdomain python3[44245]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:20 np0005546420.localdomain sudo[44243]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:20 np0005546420.localdomain sudo[44261]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjcpheebefexgzhgwvxlpadmzuzfrkxv ; /usr/bin/python3
Dec 05 08:01:20 np0005546420.localdomain sudo[44261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:20 np0005546420.localdomain python3[44263]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:20 np0005546420.localdomain sudo[44261]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:20 np0005546420.localdomain sudo[44279]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gnollfohcgskqfyudswxjyyuojjwnhwp ; /usr/bin/python3
Dec 05 08:01:20 np0005546420.localdomain sudo[44279]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:20 np0005546420.localdomain python3[44281]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:20 np0005546420.localdomain sudo[44279]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:20 np0005546420.localdomain sudo[44297]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxhyezuaedbnoktnxcthantznxndcegb ; /usr/bin/python3
Dec 05 08:01:20 np0005546420.localdomain sudo[44297]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:21 np0005546420.localdomain python3[44299]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:22 np0005546420.localdomain sudo[44297]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:22 np0005546420.localdomain sudo[44315]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbriqcuvcgxmmoaoeglrclzhrykpdyvs ; /usr/bin/python3
Dec 05 08:01:22 np0005546420.localdomain sudo[44315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:22 np0005546420.localdomain python3[44317]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:22 np0005546420.localdomain sudo[44315]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:22 np0005546420.localdomain sudo[44333]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-difulcmbibufbgkosnyiotjjcpudjfof ; /usr/bin/python3
Dec 05 08:01:22 np0005546420.localdomain sudo[44333]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:22 np0005546420.localdomain python3[44335]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:22 np0005546420.localdomain sudo[44333]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:22 np0005546420.localdomain sudo[44351]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktkflpvwkvwwgsnwiyccfnulgevbjowi ; /usr/bin/python3
Dec 05 08:01:22 np0005546420.localdomain sudo[44351]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:23 np0005546420.localdomain python3[44353]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:23 np0005546420.localdomain sudo[44351]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:23 np0005546420.localdomain sudo[44369]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhpdmtbsmtfcwfwyqhyfqbnabylnyrng ; /usr/bin/python3
Dec 05 08:01:23 np0005546420.localdomain sudo[44369]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:23 np0005546420.localdomain python3[44371]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:23 np0005546420.localdomain sudo[44369]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:23 np0005546420.localdomain sudo[44386]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vlfixpqjrocbxphkuuazbkbfluxcxpcy ; /usr/bin/python3
Dec 05 08:01:23 np0005546420.localdomain sudo[44386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:23 np0005546420.localdomain python3[44388]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:23 np0005546420.localdomain sudo[44386]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:23 np0005546420.localdomain sudo[44403]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzpcwzpkjczqzsuddktkvkljngbrnbom ; /usr/bin/python3
Dec 05 08:01:23 np0005546420.localdomain sudo[44403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:24 np0005546420.localdomain python3[44405]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:24 np0005546420.localdomain sudo[44403]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:24 np0005546420.localdomain sudo[44420]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-muxepkwmatiedxbswmsurlrsgepkpnrw ; /usr/bin/python3
Dec 05 08:01:24 np0005546420.localdomain sudo[44420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:24 np0005546420.localdomain python3[44422]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:24 np0005546420.localdomain sudo[44420]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:24 np0005546420.localdomain sudo[44437]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lkcpdlwnbjnmidvmwyadudqrzpqdjzyu ; /usr/bin/python3
Dec 05 08:01:24 np0005546420.localdomain sudo[44437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:24 np0005546420.localdomain python3[44439]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 05 08:01:24 np0005546420.localdomain sudo[44437]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:24 np0005546420.localdomain sudo[44455]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-spjsxegqzwxvpadpcempvyyoypwgktwv ; /usr/bin/python3
Dec 05 08:01:24 np0005546420.localdomain sudo[44455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:25 np0005546420.localdomain python3[44457]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 08:01:25 np0005546420.localdomain systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 05 08:01:25 np0005546420.localdomain systemd[1]: Stopped Apply Kernel Variables.
Dec 05 08:01:25 np0005546420.localdomain systemd[1]: Stopping Apply Kernel Variables...
Dec 05 08:01:25 np0005546420.localdomain systemd[1]: Starting Apply Kernel Variables...
Dec 05 08:01:25 np0005546420.localdomain systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 05 08:01:25 np0005546420.localdomain systemd[1]: Finished Apply Kernel Variables.
Dec 05 08:01:25 np0005546420.localdomain sudo[44455]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:25 np0005546420.localdomain sudo[44475]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fsoeesqiamihcxnqotdqcijyxmoverwf ; /usr/bin/python3
Dec 05 08:01:25 np0005546420.localdomain sudo[44475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:25 np0005546420.localdomain python3[44477]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:25 np0005546420.localdomain sudo[44475]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:25 np0005546420.localdomain sudo[44491]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eddxbjcitpvzvbypwsurkpsqcavlmwvi ; /usr/bin/python3
Dec 05 08:01:25 np0005546420.localdomain sudo[44491]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:25 np0005546420.localdomain python3[44493]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:25 np0005546420.localdomain sudo[44491]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:26 np0005546420.localdomain sudo[44507]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tspllodwpqaqenjmmhnvjuzlqvbrxrww ; /usr/bin/python3
Dec 05 08:01:26 np0005546420.localdomain sudo[44507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:26 np0005546420.localdomain python3[44509]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:26 np0005546420.localdomain sudo[44507]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:26 np0005546420.localdomain sudo[44523]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hglsdkrfohzqsqpdyboefinmrsuegwgl ; /usr/bin/python3
Dec 05 08:01:26 np0005546420.localdomain sudo[44523]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:26 np0005546420.localdomain python3[44525]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:01:26 np0005546420.localdomain sudo[44523]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:26 np0005546420.localdomain sudo[44539]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asxagdxzkyybrxybpgbeexvlibvrmvvi ; /usr/bin/python3
Dec 05 08:01:26 np0005546420.localdomain sudo[44539]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:26 np0005546420.localdomain python3[44541]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:26 np0005546420.localdomain sudo[44539]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:26 np0005546420.localdomain sudo[44555]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qptgpomwgervuzdpdbylwhqzttfzztki ; /usr/bin/python3
Dec 05 08:01:26 np0005546420.localdomain sudo[44555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:27 np0005546420.localdomain python3[44557]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:27 np0005546420.localdomain sudo[44555]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:27 np0005546420.localdomain sudo[44571]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjrvybfrjloppfhswhlrfuzunujmrdsb ; /usr/bin/python3
Dec 05 08:01:27 np0005546420.localdomain sudo[44571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:27 np0005546420.localdomain python3[44573]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:27 np0005546420.localdomain sudo[44571]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:27 np0005546420.localdomain sudo[44587]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rgjgadggmqcmaupupqgevqxqwsjrimrn ; /usr/bin/python3
Dec 05 08:01:27 np0005546420.localdomain sudo[44587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:27 np0005546420.localdomain python3[44589]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:27 np0005546420.localdomain sudo[44587]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:27 np0005546420.localdomain sudo[44603]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tlsfutqxqwthnuzjnnsxebubttdvitly ; /usr/bin/python3
Dec 05 08:01:27 np0005546420.localdomain sudo[44603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:27 np0005546420.localdomain python3[44605]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:27 np0005546420.localdomain sudo[44603]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:28 np0005546420.localdomain sudo[44651]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzrsxrdjlpjttajfguwtnzpnuiwuauna ; /usr/bin/python3
Dec 05 08:01:28 np0005546420.localdomain sudo[44651]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:28 np0005546420.localdomain python3[44653]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:01:28 np0005546420.localdomain sudo[44651]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:28 np0005546420.localdomain sudo[44694]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zgvfzmkuvdxszrqdtdrqwtcvxufjhsvj ; /usr/bin/python3
Dec 05 08:01:28 np0005546420.localdomain sudo[44694]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:28 np0005546420.localdomain python3[44696]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921688.1190398-76794-231497276096129/source _original_basename=tmpbcnz9tuk follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:28 np0005546420.localdomain sudo[44694]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:29 np0005546420.localdomain sudo[44724]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmsagujgxijptmupwtwgmeltovoqoihq ; /usr/bin/python3
Dec 05 08:01:29 np0005546420.localdomain sudo[44724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:29 np0005546420.localdomain python3[44726]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:01:29 np0005546420.localdomain sudo[44724]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:30 np0005546420.localdomain sudo[44741]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qbstdzefidcbovcewklcyuieqefmmtqg ; /usr/bin/python3
Dec 05 08:01:30 np0005546420.localdomain sudo[44741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:30 np0005546420.localdomain python3[44743]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:30 np0005546420.localdomain sudo[44741]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:30 np0005546420.localdomain sudo[44789]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-epxzrkuugczheyawhvymbnpmwlscwuia ; /usr/bin/python3
Dec 05 08:01:30 np0005546420.localdomain sudo[44789]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:31 np0005546420.localdomain python3[44791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:01:31 np0005546420.localdomain sudo[44789]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:31 np0005546420.localdomain sudo[44832]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abcfjfxyetlyoqeklwzcevreqvgdnfpr ; /usr/bin/python3
Dec 05 08:01:31 np0005546420.localdomain sudo[44832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:31 np0005546420.localdomain python3[44834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921690.8188443-77144-131469207116088/source _original_basename=tmp3ntmx68m follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:31 np0005546420.localdomain sudo[44832]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:31 np0005546420.localdomain sudo[44862]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mhpafvtkgkzylzfwcyxiyjorxrnidevs ; /usr/bin/python3
Dec 05 08:01:31 np0005546420.localdomain sudo[44862]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:32 np0005546420.localdomain python3[44864]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:32 np0005546420.localdomain sudo[44862]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:32 np0005546420.localdomain sudo[44878]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uohbbhzdiaejbcxsgtbrogwqfeujulcr ; /usr/bin/python3
Dec 05 08:01:32 np0005546420.localdomain sudo[44878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:32 np0005546420.localdomain python3[44880]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:32 np0005546420.localdomain sudo[44878]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:32 np0005546420.localdomain sudo[44894]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-evvsxnbhixqghsvhblakdahlrtbplaxg ; /usr/bin/python3
Dec 05 08:01:32 np0005546420.localdomain sudo[44894]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:32 np0005546420.localdomain python3[44896]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:32 np0005546420.localdomain sudo[44894]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:32 np0005546420.localdomain sudo[44910]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pkrhsmofqgvemlxqjgdvfyiqknsdywaq ; /usr/bin/python3
Dec 05 08:01:32 np0005546420.localdomain sudo[44910]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:33 np0005546420.localdomain python3[44912]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:33 np0005546420.localdomain sudo[44910]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:33 np0005546420.localdomain sudo[44926]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cwrddhvxugskeqkcdkolrnzxbyedkktm ; /usr/bin/python3
Dec 05 08:01:33 np0005546420.localdomain sudo[44926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:33 np0005546420.localdomain python3[44928]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:33 np0005546420.localdomain sudo[44926]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:33 np0005546420.localdomain sudo[44942]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ndoxckemgadlvdelospawoamtxtdnolt ; /usr/bin/python3
Dec 05 08:01:33 np0005546420.localdomain sudo[44942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:33 np0005546420.localdomain python3[44944]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:33 np0005546420.localdomain sudo[44942]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:33 np0005546420.localdomain sudo[44958]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avsrpbwdeqxtrqkssvebjruyqjlhzsum ; /usr/bin/python3
Dec 05 08:01:33 np0005546420.localdomain sudo[44958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:34 np0005546420.localdomain python3[44960]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:34 np0005546420.localdomain sudo[44958]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:34 np0005546420.localdomain sudo[44974]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mkwrpnfymetlpkcegtxhyhusgxsblsse ; /usr/bin/python3
Dec 05 08:01:34 np0005546420.localdomain sudo[44974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:34 np0005546420.localdomain python3[44976]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:34 np0005546420.localdomain sudo[44974]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:34 np0005546420.localdomain sudo[44990]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xjpssyfvboqfjqweqpxzgkipmhxwllnm ; /usr/bin/python3
Dec 05 08:01:34 np0005546420.localdomain sudo[44990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:34 np0005546420.localdomain python3[44992]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:34 np0005546420.localdomain sudo[44990]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:34 np0005546420.localdomain sudo[45006]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxoyghbbuyfmgfnvdectvhwbpuxfisyc ; /usr/bin/python3
Dec 05 08:01:34 np0005546420.localdomain sudo[45006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:35 np0005546420.localdomain python3[45008]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Dec 05 08:01:35 np0005546420.localdomain groupadd[45009]: group added to /etc/group: name=qemu, GID=107
Dec 05 08:01:35 np0005546420.localdomain groupadd[45009]: group added to /etc/gshadow: name=qemu
Dec 05 08:01:35 np0005546420.localdomain groupadd[45009]: new group: name=qemu, GID=107
Dec 05 08:01:35 np0005546420.localdomain sudo[45006]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:35 np0005546420.localdomain sudo[45028]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjhqxtygpmghimpubrydathkoyvaevqy ; /usr/bin/python3
Dec 05 08:01:35 np0005546420.localdomain sudo[45028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:35 np0005546420.localdomain python3[45030]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546420.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 05 08:01:35 np0005546420.localdomain useradd[45032]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=none
Dec 05 08:01:35 np0005546420.localdomain sudo[45028]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:35 np0005546420.localdomain sudo[45052]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utzlipreoxpspeixwouutoiqljfqlqmx ; /usr/bin/python3
Dec 05 08:01:35 np0005546420.localdomain sudo[45052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:36 np0005546420.localdomain python3[45054]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Dec 05 08:01:36 np0005546420.localdomain sudo[45052]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:36 np0005546420.localdomain sudo[45068]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lezigraefthmesvlavevyzolrvdcypmm ; /usr/bin/python3
Dec 05 08:01:36 np0005546420.localdomain sudo[45068]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:36 np0005546420.localdomain python3[45070]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:01:36 np0005546420.localdomain sudo[45068]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:36 np0005546420.localdomain sudo[45117]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtrpajktxnrhkxgdoqfyqnblennjsnpe ; /usr/bin/python3
Dec 05 08:01:36 np0005546420.localdomain sudo[45117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:36 np0005546420.localdomain python3[45119]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:01:36 np0005546420.localdomain sudo[45117]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:37 np0005546420.localdomain sudo[45160]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqmnsjghdldyyvodarlubqmoupjqomon ; /usr/bin/python3
Dec 05 08:01:37 np0005546420.localdomain sudo[45160]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:37 np0005546420.localdomain python3[45162]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921696.6642594-77415-132916336409271/source _original_basename=tmp9u1yy7_3 follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:37 np0005546420.localdomain sudo[45160]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:37 np0005546420.localdomain sudo[45190]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-koqvkxgntwpkbmruqqrrqwdbeeagxfig ; /usr/bin/python3
Dec 05 08:01:37 np0005546420.localdomain sudo[45190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:37 np0005546420.localdomain python3[45192]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 05 08:01:38 np0005546420.localdomain sudo[45190]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:38 np0005546420.localdomain sudo[45212]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hovheplfeclozjcrhenlrzvugbsydxew ; /usr/bin/python3
Dec 05 08:01:38 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 05 08:01:38 np0005546420.localdomain sudo[45212]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:38 np0005546420.localdomain python3[45214]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:38 np0005546420.localdomain sudo[45212]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:38 np0005546420.localdomain sudo[45228]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hntukzdtpqeayotaxcmqmcntzcxvcvvg ; /usr/bin/python3
Dec 05 08:01:38 np0005546420.localdomain sudo[45228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:39 np0005546420.localdomain python3[45230]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:39 np0005546420.localdomain sudo[45228]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:39 np0005546420.localdomain sudo[45244]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqealsnqbzfiuubgtyowrkglcgwvuyyp ; /usr/bin/python3
Dec 05 08:01:39 np0005546420.localdomain sudo[45244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:39 np0005546420.localdomain python3[45246]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Dec 05 08:01:40 np0005546420.localdomain sudo[45244]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:40 np0005546420.localdomain sudo[45264]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cchotqezaubaikludkwjhhquvkhmunfb ; /usr/bin/python3
Dec 05 08:01:40 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 05 08:01:40 np0005546420.localdomain sudo[45264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:41 np0005546420.localdomain python3[45266]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 08:01:43 np0005546420.localdomain sudo[45268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:01:43 np0005546420.localdomain sudo[45268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:01:43 np0005546420.localdomain sudo[45268]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:43 np0005546420.localdomain sudo[45283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 08:01:43 np0005546420.localdomain sudo[45283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:01:43 np0005546420.localdomain sudo[45283]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:43 np0005546420.localdomain sudo[45264]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:44 np0005546420.localdomain sudo[45319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:01:44 np0005546420.localdomain sudo[45319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:01:44 np0005546420.localdomain sudo[45319]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:44 np0005546420.localdomain sudo[45334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:01:44 np0005546420.localdomain sudo[45334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:01:44 np0005546420.localdomain sudo[45360]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omlvamsvfihvatikdhebtujxfudtmtoj ; /usr/bin/python3
Dec 05 08:01:44 np0005546420.localdomain sudo[45360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:44 np0005546420.localdomain python3[45364]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 08:01:44 np0005546420.localdomain sudo[45360]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:44 np0005546420.localdomain sudo[45334]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:44 np0005546420.localdomain sudo[45454]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oslwmbkkhbcmxenolixqudgadgjgxajw ; /usr/bin/python3
Dec 05 08:01:44 np0005546420.localdomain sudo[45454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:45 np0005546420.localdomain python3[45456]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:45 np0005546420.localdomain sudo[45454]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:45 np0005546420.localdomain sudo[45481]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tusfnzqhldjmisbqbxzecpnapfzdkhdk ; /usr/bin/python3
Dec 05 08:01:45 np0005546420.localdomain sudo[45481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:45 np0005546420.localdomain sudo[45460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:01:45 np0005546420.localdomain sudo[45460]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:01:45 np0005546420.localdomain sudo[45460]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:45 np0005546420.localdomain python3[45486]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:01:45 np0005546420.localdomain sudo[45481]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:45 np0005546420.localdomain sudo[45545]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvpvyjwysbcfqfunafdlytxsagabnjxt ; /usr/bin/python3
Dec 05 08:01:45 np0005546420.localdomain sudo[45545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:46 np0005546420.localdomain python3[45547]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:01:46 np0005546420.localdomain sudo[45545]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:46 np0005546420.localdomain sudo[45588]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fvkbszkpersyscuirsdpgmdlgkpxeexr ; /usr/bin/python3
Dec 05 08:01:46 np0005546420.localdomain sudo[45588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:46 np0005546420.localdomain python3[45590]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921705.6379807-77784-171057701706605/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=e2ce2c0614c2f3d358faae4bfbda4d3a1c437811 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:46 np0005546420.localdomain sudo[45588]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:46 np0005546420.localdomain sudo[45650]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yikkqmhdbnhvchzinrhcickoqfimkfax ; /usr/bin/python3
Dec 05 08:01:46 np0005546420.localdomain sudo[45650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:46 np0005546420.localdomain python3[45652]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:01:47 np0005546420.localdomain sudo[45650]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:47 np0005546420.localdomain sudo[45695]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jggvncsykdcbaizutbhnmxdjtokimilq ; /usr/bin/python3
Dec 05 08:01:47 np0005546420.localdomain sudo[45695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:47 np0005546420.localdomain python3[45697]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921706.6560574-77836-111596085500462/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:47 np0005546420.localdomain sudo[45695]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:47 np0005546420.localdomain sudo[45725]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvpcanmlwfckcuxljhogfojriwlnpjut ; /usr/bin/python3
Dec 05 08:01:47 np0005546420.localdomain sudo[45725]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:47 np0005546420.localdomain python3[45727]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:47 np0005546420.localdomain sudo[45725]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:47 np0005546420.localdomain sudo[45741]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fyhvanbpahrqdiuafnhqanbvlcxemgmt ; /usr/bin/python3
Dec 05 08:01:47 np0005546420.localdomain sudo[45741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:48 np0005546420.localdomain python3[45743]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:48 np0005546420.localdomain sudo[45741]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:48 np0005546420.localdomain sudo[45757]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtasimspqzaxlpiovfidasppfmgjiazc ; /usr/bin/python3
Dec 05 08:01:48 np0005546420.localdomain sudo[45757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:48 np0005546420.localdomain python3[45759]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:48 np0005546420.localdomain sudo[45757]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:48 np0005546420.localdomain sudo[45773]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ghrdsxqjxodejawmaipembvmtbnfmenu ; /usr/bin/python3
Dec 05 08:01:48 np0005546420.localdomain sudo[45773]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:48 np0005546420.localdomain python3[45775]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:48 np0005546420.localdomain sudo[45773]: pam_unix(sudo:session): session closed for user root
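Taken together, the four ansible-ini_file tasks above (08:01:47 to 08:01:48) pin Podman's runtime behaviour in /etc/containers/containers.conf. Reconstructed from the logged section/option/value triples alone, the resulting file would contain roughly:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"

pids_limit raises the per-container PID ceiling, events_logger routes container events into journald (matching the journal-centric logging seen throughout this log), crun is chosen as the OCI runtime, and netavark as the network backend that consumes the podman.json definition written at 08:01:46.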
Dec 05 08:01:49 np0005546420.localdomain sudo[45821]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gftvwibowuzuhvmqmflezktfybsjblkt ; /usr/bin/python3
Dec 05 08:01:49 np0005546420.localdomain sudo[45821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:49 np0005546420.localdomain python3[45823]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:01:49 np0005546420.localdomain sudo[45821]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:49 np0005546420.localdomain sudo[45864]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swrpjornqndaevrzyssfxfwjopvqcbkl ; /usr/bin/python3
Dec 05 08:01:49 np0005546420.localdomain sudo[45864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:49 np0005546420.localdomain python3[45866]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921709.0253181-77950-27019897397100/source _original_basename=tmp97dpeqp5 follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:49 np0005546420.localdomain sudo[45864]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:49 np0005546420.localdomain sudo[45894]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ufnsxorkwdrfvcurupsavpdzogukzrgx ; /usr/bin/python3
Dec 05 08:01:49 np0005546420.localdomain sudo[45894]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:50 np0005546420.localdomain python3[45896]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:50 np0005546420.localdomain sudo[45894]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:50 np0005546420.localdomain sudo[45910]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xndkjcwyawslpjezxducujfloalcybei ; /usr/bin/python3
Dec 05 08:01:50 np0005546420.localdomain sudo[45910]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:50 np0005546420.localdomain python3[45912]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:01:50 np0005546420.localdomain sudo[45910]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:50 np0005546420.localdomain sudo[45926]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmtimmjtwbgpbgzbmljfjfrvqllmrvbs ; /usr/bin/python3
Dec 05 08:01:50 np0005546420.localdomain sudo[45926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:51 np0005546420.localdomain python3[45928]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 08:01:53 np0005546420.localdomain sudo[45926]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:54 np0005546420.localdomain sudo[45975]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zsepysiboxfzwfdggekqgcrqmworsvjs ; /usr/bin/python3
Dec 05 08:01:54 np0005546420.localdomain sudo[45975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:54 np0005546420.localdomain python3[45977]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:01:54 np0005546420.localdomain sudo[45975]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:54 np0005546420.localdomain sudo[46020]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jrcagbtaupxabtpquvpyimnggllgiuao ; /usr/bin/python3
Dec 05 08:01:54 np0005546420.localdomain sudo[46020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:55 np0005546420.localdomain python3[46022]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921714.3805716-78155-48595121114818/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:01:55 np0005546420.localdomain sudo[46020]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:55 np0005546420.localdomain sudo[46051]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sdxykjnpbqbcoobdarycuncmjvyavowh ; /usr/bin/python3
Dec 05 08:01:55 np0005546420.localdomain sudo[46051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:55 np0005546420.localdomain python3[46053]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 05 08:01:55 np0005546420.localdomain sshd[1130]: Received signal 15; terminating.
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: sshd.service: Consumed 3.257s CPU time, read 2.1M from disk, written 44.0K to disk.
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 05 08:01:55 np0005546420.localdomain sshd[46057]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:01:55 np0005546420.localdomain sshd[46057]: Server listening on 0.0.0.0 port 22.
Dec 05 08:01:55 np0005546420.localdomain sshd[46057]: Server listening on :: port 22.
Dec 05 08:01:55 np0005546420.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 05 08:01:55 np0005546420.localdomain sudo[46051]: pam_unix(sudo:session): session closed for user root
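Two details of the sshd handling above are worth noting. First, the copy task passes validate=/usr/sbin/sshd -T -f %s, so the rendered sshd_config is test-parsed before it replaces the live file; the restart at 08:01:55 therefore only ever runs against a config that sshd itself accepted. The same check can be run by hand:

    # test-parse a candidate config; a non-zero exit (bad directive)
    # would have aborted the copy before /etc/ssh/sshd_config changed
    /usr/sbin/sshd -T -f /path/to/candidate_config >/dev/null && echo "config OK"

Second, the "ssh-rsa algorithm is disabled" notice from the restarted daemon is informational: it reflects the RHEL 9 system-wide crypto policy dropping SHA-1-based ssh-rsa signatures, not a configuration error.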
Dec 05 08:01:55 np0005546420.localdomain sudo[46071]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydodtwyswihtymaffnixskjlguxdxzmd ; /usr/bin/python3
Dec 05 08:01:55 np0005546420.localdomain sudo[46071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:56 np0005546420.localdomain python3[46073]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:01:56 np0005546420.localdomain sudo[46071]: pam_unix(sudo:session): session closed for user root
Dec 05 08:01:57 np0005546420.localdomain sudo[46089]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-czuoasylqeuvbswsrgklqpboqzsihtzf ; /usr/bin/python3
Dec 05 08:01:57 np0005546420.localdomain sudo[46089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:57 np0005546420.localdomain python3[46091]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:01:57 np0005546420.localdomain sudo[46089]: pam_unix(sudo:session): session closed for user root
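The command run at 08:01:56 and repeated at 08:01:57, systemctl is-active ntpd.service || systemctl is-enabled ntpd.service, exits 0 if a legacy ntpd is either running or enabled; the play uses it to detect a conflicting time daemon before installing chrony. The equivalent standalone check:

    # exit status 0 => ntpd is running or at least enabled;
    # a deployment would normally stop and disable it before
    # handing timekeeping over to chronyd
    if systemctl is-active ntpd.service || systemctl is-enabled ntpd.service; then
        echo "legacy ntpd present"
    fi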
Dec 05 08:01:57 np0005546420.localdomain sudo[46107]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-idbusxgmthiqqennxltjvvxzwtwlztzi ; /usr/bin/python3
Dec 05 08:01:57 np0005546420.localdomain sudo[46107]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:01:57 np0005546420.localdomain python3[46109]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 08:02:00 np0005546420.localdomain sudo[46107]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:01 np0005546420.localdomain sudo[46156]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmkotyfhzjutsinkmlpvuqbixdfrtarj ; /usr/bin/python3
Dec 05 08:02:01 np0005546420.localdomain sudo[46156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:01 np0005546420.localdomain python3[46158]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:02:01 np0005546420.localdomain sudo[46156]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:01 np0005546420.localdomain sudo[46174]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ooouujnvbpnimxugkkimqmdzgpfaliek ; /usr/bin/python3
Dec 05 08:02:01 np0005546420.localdomain sudo[46174]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:01 np0005546420.localdomain python3[46176]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:02:01 np0005546420.localdomain sudo[46174]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:02 np0005546420.localdomain sudo[46204]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxcidqizkvqrcsdrvvygdxffehbpfsrr ; /usr/bin/python3
Dec 05 08:02:02 np0005546420.localdomain sudo[46204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:02 np0005546420.localdomain python3[46206]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:02:02 np0005546420.localdomain sudo[46204]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:02 np0005546420.localdomain sudo[46254]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twbtbihknpkedszkefalrmzvonklplnl ; /usr/bin/python3
Dec 05 08:02:02 np0005546420.localdomain sudo[46254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:03 np0005546420.localdomain python3[46256]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:02:03 np0005546420.localdomain sudo[46254]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:03 np0005546420.localdomain sudo[46272]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ddqhprhlcmzqzlecnbircilphqpgjgdm ; /usr/bin/python3
Dec 05 08:02:03 np0005546420.localdomain sudo[46272]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:03 np0005546420.localdomain python3[46274]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:02:03 np0005546420.localdomain sudo[46272]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:03 np0005546420.localdomain sudo[46302]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-azzidggsoofbxpzxynblmwponpvexbzs ; /usr/bin/python3
Dec 05 08:02:03 np0005546420.localdomain sudo[46302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:03 np0005546420.localdomain python3[46304]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:02:03 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:02:04 np0005546420.localdomain systemd-rc-local-generator[46325]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:02:04 np0005546420.localdomain systemd-sysv-generator[46329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:02:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:02:04 np0005546420.localdomain systemd[1]: Starting chronyd online sources service...
Dec 05 08:02:04 np0005546420.localdomain chronyc[46344]: 200 OK
Dec 05 08:02:04 np0005546420.localdomain systemd[1]: chrony-online.service: Deactivated successfully.
Dec 05 08:02:04 np0005546420.localdomain systemd[1]: Finished chronyd online sources service.
Dec 05 08:02:04 np0005546420.localdomain sudo[46302]: pam_unix(sudo:session): session closed for user root
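chrony-online.service starts, logs a single "200 OK" from chronyc, and deactivates immediately; that pattern matches a oneshot unit whose job is to tell chronyd to mark its NTP sources online once the network is up. The unit file itself was copied at 08:02:03 and its content is not logged; a sketch of what such a unit plausibly looks like:

    [Unit]
    Description=chronyd online sources service
    After=chronyd.service network-online.target
    Requires=chronyd.service

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/chronyc online

    [Install]
    WantedBy=multi-user.target

Everything beyond the Description string (which the journal confirms as "chronyd online sources service") and the oneshot behaviour is an assumption.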
Dec 05 08:02:04 np0005546420.localdomain sudo[46358]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xkajequykeimqfgdvwcsscdyxdxrdsxk ; /usr/bin/python3
Dec 05 08:02:04 np0005546420.localdomain sudo[46358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:04 np0005546420.localdomain python3[46360]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:02:04 np0005546420.localdomain chronyd[26140]: System clock was stepped by 0.000056 seconds
Dec 05 08:02:04 np0005546420.localdomain sudo[46358]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:04 np0005546420.localdomain sudo[46375]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vpghxyuxytatueonajzpacmbiuocoxoj ; /usr/bin/python3
Dec 05 08:02:04 np0005546420.localdomain sudo[46375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:05 np0005546420.localdomain python3[46377]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:02:05 np0005546420.localdomain sudo[46375]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:05 np0005546420.localdomain sudo[46392]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbfrdjseysajzobafspjovaldepqmnol ; /usr/bin/python3
Dec 05 08:02:05 np0005546420.localdomain sudo[46392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:05 np0005546420.localdomain python3[46394]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:02:05 np0005546420.localdomain chronyd[26140]: System clock was stepped by 0.000000 seconds
Dec 05 08:02:05 np0005546420.localdomain sudo[46392]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:05 np0005546420.localdomain sudo[46409]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdskjagmlrbvggcrzdqciuwbthpjayho ; /usr/bin/python3
Dec 05 08:02:05 np0005546420.localdomain sudo[46409]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:05 np0005546420.localdomain python3[46411]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:02:05 np0005546420.localdomain sudo[46409]: pam_unix(sudo:session): session closed for user root
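The makestep/waitsync pair is executed twice (08:02:04 and 08:02:05). chronyc makestep forces an immediate clock step rather than a gradual slew; the first pass steps the clock by 0.000056 s and the second by 0.000000 s, confirming it was already synchronized. chronyc waitsync 30 then blocks until chronyd reports synchronization, giving up after 30 checks (by default one every 10 s). Run by hand, the sequence is simply:

    # step the clock now instead of slewing, then wait (up to 30 tries,
    # 10 s apart by default) for chronyd to report it is in sync
    chronyc makestep
    chronyc waitsync 30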
Dec 05 08:02:06 np0005546420.localdomain sudo[46426]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nasvqyalpaxamdfwimctjewcurcxxttl ; /usr/bin/python3
Dec 05 08:02:06 np0005546420.localdomain sudo[46426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
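The ceph-osd block that follows is a routine RocksDB statistics dump from the OSD's embedded key-value store; the "Uptime(secs): 600.1 total, 600.0 interval" lines match RocksDB's default stats_dump_period_sec of 600. Each dump covers the DB-wide write/WAL counters first, then per-column-family compaction stats (default plus the m-*, p-*, O-* shards of BlueStore's sharded column families); all-zero Sum/Int rows simply mean no compactions ran in this window. Assuming the cadence ever needed changing, one hypothetical way on a recent Ceph release would be to append to the OSD's RocksDB option string:

    # hypothetical: stretch the stats dump period to 1 hour
    # (bluestore_rocksdb_options_annex appends to the built-in option string)
    ceph config set osd bluestore_rocksdb_options_annex "stats_dump_period_sec=3600"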
Dec 05 08:02:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:02:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Cumulative writes: 3258 writes, 16K keys, 3258 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                                          Cumulative WAL: 3258 writes, 145 syncs, 22.47 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 3258 writes, 16K keys, 3258 commit groups, 1.0 writes per commit group, ingest: 14.68 MB, 0.02 MB/s
                                                          Interval WAL: 3258 writes, 145 syncs, 22.47 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 05 08:02:06 np0005546420.localdomain python3[46428]: ansible-timezone Invoked with name=UTC hwclock=None
Dec 05 08:02:06 np0005546420.localdomain systemd[1]: Starting Time & Date Service...
Dec 05 08:02:06 np0005546420.localdomain systemd[1]: Started Time & Date Service.
Dec 05 08:02:06 np0005546420.localdomain sudo[46426]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:07 np0005546420.localdomain sudo[46446]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-etocxnncxncrftncczupvdurutokaycq ; /usr/bin/python3
Dec 05 08:02:07 np0005546420.localdomain sudo[46446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:07 np0005546420.localdomain python3[46448]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:02:07 np0005546420.localdomain sudo[46446]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:07 np0005546420.localdomain sudo[46463]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-riydpxsqggqmvuajctiecpkikdurvwlx ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Dec 05 08:02:07 np0005546420.localdomain sudo[46463]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:07 np0005546420.localdomain python3[46465]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:02:07 np0005546420.localdomain sudo[46463]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:08 np0005546420.localdomain sudo[46480]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dowamenkmxvdgabrnsuabxoxjmmoayjt ; /usr/bin/python3
Dec 05 08:02:08 np0005546420.localdomain sudo[46480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:08 np0005546420.localdomain python3[46482]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Dec 05 08:02:08 np0005546420.localdomain sudo[46480]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:08 np0005546420.localdomain sudo[46496]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjjommkousqcikwkodwmnngpwumptahm ; /usr/bin/python3
Dec 05 08:02:08 np0005546420.localdomain sudo[46496]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:08 np0005546420.localdomain python3[46498]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:02:08 np0005546420.localdomain sudo[46496]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:09 np0005546420.localdomain sudo[46512]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-stedyzwusfsygemjegyegcnirlliaixq ; /usr/bin/python3
Dec 05 08:02:09 np0005546420.localdomain sudo[46512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:09 np0005546420.localdomain python3[46514]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:02:09 np0005546420.localdomain sudo[46512]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:09 np0005546420.localdomain sudo[46528]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gcmaefapmlyjakzhayhdqhtkptibdryx ; /usr/bin/python3
Dec 05 08:02:09 np0005546420.localdomain sudo[46528]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:09 np0005546420.localdomain python3[46530]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:02:09 np0005546420.localdomain sudo[46528]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:10 np0005546420.localdomain sudo[46576]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-obrdoprunrrblxiyzwknlxhxtgdopleo ; /usr/bin/python3
Dec 05 08:02:10 np0005546420.localdomain sudo[46576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:10 np0005546420.localdomain python3[46578]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:02:10 np0005546420.localdomain sudo[46576]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:10 np0005546420.localdomain sudo[46619]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdbngiriqzlpbibimpwebsooyuxonrdo ; /usr/bin/python3
Dec 05 08:02:10 np0005546420.localdomain sudo[46619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:10 np0005546420.localdomain python3[46621]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921729.8764942-79278-48926858483035/source _original_basename=tmprzls3f4b follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:02:10 np0005546420.localdomain sudo[46619]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:02:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Cumulative writes: 3259 writes, 16K keys, 3259 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                                          Cumulative WAL: 3259 writes, 145 syncs, 22.48 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 3259 writes, 16K keys, 3259 commit groups, 1.0 writes per commit group, ingest: 14.67 MB, 0.02 MB/s
                                                          Interval WAL: 3259 writes, 145 syncs, 22.48 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
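
The dump above is RocksDB's periodic statistics output (the BinnedLRUCache block cache suggests a Ceph daemon's embedded RocksDB), one pair of "Compaction Stats" tables per column family ([O-0]..[O-2], [L], [P]): the first table is keyed by level (L0/Sum/Int), the second by compaction priority. A minimal sketch of pulling the per-column-family Sum rows out of such a dump, assuming the text is available as a string (the function and field names are illustrative):

    import re

    def sum_rows(dump_text):
        """Map column family -> selected fields of its 'Sum' row."""
        rows, cf = {}, None
        for line in dump_text.splitlines():
            m = re.search(r"\*\* Compaction Stats \[(.+?)\] \*\*", line)
            if m:
                cf = m.group(1)          # e.g. "O-2", "L", "P"
                continue
            if cf and line.lstrip().startswith("Sum"):
                f = line.split()
                # f[1] is Files ("1/0"); f[2:4] is the size ("1.26 KB")
                rows[cf] = {"files": f[1], "size": " ".join(f[2:4])}
        return rows
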
Dec 05 08:02:11 np0005546420.localdomain sudo[46681]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-firhfbcsdbeqpmhywxudwfqlemwgysck ; /usr/bin/python3
Dec 05 08:02:11 np0005546420.localdomain sudo[46681]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:11 np0005546420.localdomain python3[46683]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:02:11 np0005546420.localdomain sudo[46681]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:11 np0005546420.localdomain sudo[46724]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tyzxbuxggjsaskdkktinrcesuwamrcgz ; /usr/bin/python3
Dec 05 08:02:11 np0005546420.localdomain sudo[46724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:11 np0005546420.localdomain python3[46726]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921730.8615613-79333-112082083181904/source _original_basename=tmp0kfww1np follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:02:11 np0005546420.localdomain sudo[46724]: pam_unix(sudo:session): session closed for user root
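
The stat/copy pair above is Ansible's idempotent file push: the target is first stat'ed with checksum_algorithm=sha1, and the copy only rewrites it when that digest differs from the source checksum logged by the copy task (6b6cd9f0...). A minimal sketch of the comparison, with an illustrative helper name:

    import hashlib

    def needs_copy(path, expected_sha1):
        """True when the file is absent or its SHA-1 differs."""
        try:
            with open(path, "rb") as f:
                return hashlib.sha1(f.read()).hexdigest() != expected_sha1
        except FileNotFoundError:
            return True
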
Dec 05 08:02:12 np0005546420.localdomain sudo[46754]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqxethcuqjjaffzpkijhcvmjfcjvwdkp ; /usr/bin/python3
Dec 05 08:02:12 np0005546420.localdomain sudo[46754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:12 np0005546420.localdomain python3[46756]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 08:02:12 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:02:12 np0005546420.localdomain systemd-sysv-generator[46787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:02:12 np0005546420.localdomain systemd-rc-local-generator[46783]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:02:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:02:12 np0005546420.localdomain sudo[46754]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:12 np0005546420.localdomain sudo[46807]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpgcvqkjzkexwjmkmftyifqcztreminp ; /usr/bin/python3
Dec 05 08:02:12 np0005546420.localdomain sudo[46807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:12 np0005546420.localdomain python3[46809]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:02:12 np0005546420.localdomain sudo[46807]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:13 np0005546420.localdomain sudo[46823]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ipcyltvkbvdqoyxmgvzknblcbybmwwmo ; /usr/bin/python3
Dec 05 08:02:13 np0005546420.localdomain sudo[46823]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:13 np0005546420.localdomain python3[46825]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:02:13 np0005546420.localdomain sudo[46823]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:13 np0005546420.localdomain sudo[46840]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvkirokeiskosscrnoobpuhegomaveet ; /usr/bin/python3
Dec 05 08:02:13 np0005546420.localdomain sudo[46840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:13 np0005546420.localdomain python3[46842]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:02:13 np0005546420.localdomain systemd[1]: run-netns-ns_temp.mount: Deactivated successfully.
Dec 05 08:02:13 np0005546420.localdomain sudo[46840]: pam_unix(sudo:session): session closed for user root
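
The add/delete pair above is a network-namespace smoke test: the play creates a throwaway netns and removes it again just to prove that "ip netns" works on this host (systemd then reaps the run-netns-ns_temp.mount unit). The same probe as a sketch, assuming root and iproute2:

    import subprocess

    def netns_probe(name="ns_temp"):
        subprocess.run(["ip", "netns", "add", name], check=True)
        subprocess.run(["ip", "netns", "delete", name], check=True)
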
Dec 05 08:02:14 np0005546420.localdomain sudo[46857]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wrvdhsyzeihzjsiwtthbmddimrprpjpz ; /usr/bin/python3
Dec 05 08:02:14 np0005546420.localdomain sudo[46857]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:14 np0005546420.localdomain python3[46859]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:02:14 np0005546420.localdomain sudo[46857]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:14 np0005546420.localdomain sudo[46873]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pyhhsynoklsajxehdbenuwoksksjhwbl ; /usr/bin/python3
Dec 05 08:02:14 np0005546420.localdomain sudo[46873]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:14 np0005546420.localdomain python3[46875]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:02:14 np0005546420.localdomain sudo[46873]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:14 np0005546420.localdomain sudo[46921]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcwngcdflxcfhbfnmpmxkqostvyhllhp ; /usr/bin/python3
Dec 05 08:02:14 np0005546420.localdomain sudo[46921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:15 np0005546420.localdomain python3[46923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:02:15 np0005546420.localdomain sudo[46921]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:15 np0005546420.localdomain sudo[46964]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnstdgeiafadbudsmfkytxjxiogewrjp ; /usr/bin/python3
Dec 05 08:02:15 np0005546420.localdomain sudo[46964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:15 np0005546420.localdomain python3[46966]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921734.848842-79625-271751111932531/source _original_basename=tmpv1sqavet follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:02:15 np0005546420.localdomain sudo[46964]: pam_unix(sudo:session): session closed for user root
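
Note the mode=493 in the copy above: Ansible received the mode as a bare integer, and 493 decimal is 0o755 (rwxr-xr-x), so the haproxy-kill script lands executable; quoting the mode ('0755') in the play avoids relying on that decimal/octal coincidence. Quick check:

    assert 493 == 0o755          # 7*64 + 5*8 + 5
    print(oct(493))              # -> 0o755
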
Dec 05 08:02:36 np0005546420.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 08:02:40 np0005546420.localdomain sudo[46997]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjvenzsdiaidqdgegvxzsjwbchrixakr ; /usr/bin/python3
Dec 05 08:02:40 np0005546420.localdomain sudo[46997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:41 np0005546420.localdomain python3[46999]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:02:41 np0005546420.localdomain sudo[46997]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:41 np0005546420.localdomain sudo[47013]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aoxfmwfhodbhpruzkwfxyvqnyabcetjw ; /usr/bin/python3
Dec 05 08:02:41 np0005546420.localdomain sudo[47013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:41 np0005546420.localdomain python3[47015]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None
Dec 05 08:02:41 np0005546420.localdomain sudo[47013]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:41 np0005546420.localdomain sudo[47029]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-edyrugkiawxhuhsqseduubwgfcsjeasw ; /usr/bin/python3
Dec 05 08:02:41 np0005546420.localdomain sudo[47029]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:41 np0005546420.localdomain python3[47031]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:02:41 np0005546420.localdomain sudo[47029]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:41 np0005546420.localdomain sudo[47045]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jcpigblmjaihxwbcgrbhjxwkoqqpjcff ; /usr/bin/python3
Dec 05 08:02:41 np0005546420.localdomain sudo[47045]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:42 np0005546420.localdomain python3[47047]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:02:42 np0005546420.localdomain sudo[47045]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:42 np0005546420.localdomain sudo[47061]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjcyehxcbxpqbraqvyrgiqlysolwbtuh ; /usr/bin/python3
Dec 05 08:02:42 np0005546420.localdomain sudo[47061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:42 np0005546420.localdomain python3[47063]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:02:42 np0005546420.localdomain sudo[47061]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:42 np0005546420.localdomain sudo[47077]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irryafuzqwsszfukaalanyiytigwnvlh ; /usr/bin/python3
Dec 05 08:02:42 np0005546420.localdomain sudo[47077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:42 np0005546420.localdomain python3[47079]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 05 08:02:43 np0005546420.localdomain kernel: SELinux:  Converting 2707 SID table entries...
Dec 05 08:02:43 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 08:02:43 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 08:02:43 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 08:02:43 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 08:02:43 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 08:02:43 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 08:02:43 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 08:02:43 np0005546420.localdomain sudo[47077]: pam_unix(sudo:session): session closed for user root
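
The sefcontext task above registers a persistent SELinux file-context mapping for /var/lib/container-config-scripts(/.*)?, which is why the kernel immediately logs a policy reload ("Converting 2707 SID table entries..."). Roughly the same operation on the CLI, as a sketch (requires root and policycoreutils-python-utils; the restorecon step relabels existing files, which the play instead achieves through the later file task's setype):

    import subprocess

    target = r"/var/lib/container-config-scripts(/.*)?"
    subprocess.run(["semanage", "fcontext", "-a", "-t",
                    "container_file_t", target], check=True)
    subprocess.run(["restorecon", "-R", "/var/lib/container-config-scripts"],
                   check=True)
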
Dec 05 08:02:43 np0005546420.localdomain sudo[47102]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eintkeceaqpzfnftqbmbtbrscuozlgla ; /usr/bin/python3
Dec 05 08:02:44 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 05 08:02:44 np0005546420.localdomain sudo[47102]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:44 np0005546420.localdomain python3[47104]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:02:44 np0005546420.localdomain sudo[47102]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:44 np0005546420.localdomain sudo[47118]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ifroveqxjfknpamaviewrglkimenlopy ; /usr/bin/python3
Dec 05 08:02:44 np0005546420.localdomain sudo[47118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:44 np0005546420.localdomain sudo[47118]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:44 np0005546420.localdomain sudo[47166]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqnvcotbkwdleedzqbmfrvzqqncamxkj ; /usr/bin/python3
Dec 05 08:02:44 np0005546420.localdomain sudo[47166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:44 np0005546420.localdomain sudo[47166]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:45 np0005546420.localdomain sudo[47209]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkoygapwwqblharyzfgluotgxwdkvsvi ; /usr/bin/python3
Dec 05 08:02:45 np0005546420.localdomain sudo[47209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:45 np0005546420.localdomain sudo[47209]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:45 np0005546420.localdomain sudo[47212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:02:45 np0005546420.localdomain sudo[47212]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:02:45 np0005546420.localdomain sudo[47212]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:45 np0005546420.localdomain sudo[47241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:02:45 np0005546420.localdomain sudo[47241]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:02:45 np0005546420.localdomain sudo[47269]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvtgfyhesangesvajxduqpboyzomfptc ; /usr/bin/python3
Dec 05 08:02:45 np0005546420.localdomain sudo[47269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:45 np0005546420.localdomain python3[47271]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, 'nova_virtnodedevd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtproxyd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtqemud': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, 'nova_virtsecretd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtstoraged': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, 'rsyslog': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}}, 'step_4': {'ceilometer_agent_compute': {'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'ceilometer_agent_ipmi': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'configure_cms_options': {'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, 'logrotate_crond': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 'nova_libvirt_init_secret': {'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, 'nova_migration_target': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 'ovn_controller': {'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 'ovn_metadata_agent': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 'setup_ovs_manager': {'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}}, 'step_5': {'nova_compute': {'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 'nova_wait_for_compute_service': {'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}}}
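The dict ending here is the tail of one ansible-container_startup_config journal entry: per-step (step_4, step_5) container definitions that tripleo-ansible renders to disk under /var/lib/tripleo-config/container-startup-config/ before podman starts them. A minimal inspection sketch, assuming the step_*/<name>.json layout implied by the stat call at 08:02:52 below and that python3 is on the host:

    for f in /var/lib/tripleo-config/container-startup-config/step_*/*.json; do
      # print each rendered container config and the image it is pinned to
      python3 -c 'import json,sys; d=json.load(open(sys.argv[1])); print(sys.argv[1], "->", d.get("image"))' "$f"
    done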
Dec 05 08:02:45 np0005546420.localdomain sudo[47269]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:46 np0005546420.localdomain sudo[47304]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nvaemivrohpktpplaefplocoyzsdevdv ; /usr/bin/python3
Dec 05 08:02:46 np0005546420.localdomain sudo[47304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:46 np0005546420.localdomain sudo[47241]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:46 np0005546420.localdomain rsyslogd[756]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
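This rsyslogd warning means the 31243-byte ansible-container_startup_config entry exceeds rsyslog's 8096-byte message buffer, so the copy in /var/log/messages is truncated; the journal itself keeps the full record, as the dump above shows. A hedged check, assuming the stock RHEL 9 rsyslog layout:

    # see whether any config raises the limit; the default is 8k
    grep -Rn "MaxMessageSize" /etc/rsyslog.conf /etc/rsyslog.d/ || echo "default 8k in effect"
    # raising it is a one-line legacy directive that must precede input modules, e.g.:
    #   $MaxMessageSize 64k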
Dec 05 08:02:46 np0005546420.localdomain python3[47306]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:02:46 np0005546420.localdomain sudo[47304]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:46 np0005546420.localdomain sudo[47332]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckjiwhykpzpvlvkjhgtmxjojbzzuokkb ; /usr/bin/python3
Dec 05 08:02:46 np0005546420.localdomain sudo[47332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:46 np0005546420.localdomain python3[47334]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:02:46 np0005546420.localdomain sudo[47332]: pam_unix(sudo:session): session closed for user root
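The mode=493 in the ansible-file task above is not an error: Ansible logs the mode as a decimal integer, and 493 is simply octal 0755, most likely because the playbook passed the mode unquoted and YAML parsed it as a number. Quick sanity check:

    python3 -c 'print(oct(493))'   # -> 0o755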
Dec 05 08:02:46 np0005546420.localdomain sudo[47348]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbvholqujynuclwkjmxmmryzezeoiwqk ; /usr/bin/python3
Dec 05 08:02:46 np0005546420.localdomain sudo[47348]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:46 np0005546420.localdomain python3[47350]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh 
/container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}}
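Everything from the ansible-tripleo_container_configs line down to here is a single message: the kolla config.json file for each container, each carrying the command to exec, config_files copy rules (source, dest, merge, preserve_properties) that kolla_set_configs applies at container start, and optional permissions fix-ups. A hedged way to confirm what landed on disk, using the paths named in the message itself:

    for f in /var/lib/kolla/config_files/*.json; do
      # each file names the process kolla will exec inside its container
      python3 -c 'import json,sys; print(sys.argv[1], "->", json.load(open(sys.argv[1])).get("command"))' "$f"
    done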
Dec 05 08:02:46 np0005546420.localdomain sudo[47348]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:48 np0005546420.localdomain sudo[47351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:02:48 np0005546420.localdomain sudo[47351]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:02:48 np0005546420.localdomain sudo[47351]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:51 np0005546420.localdomain sudo[47411]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fuqjugjcohicjhrchpekzfitmmzpmacl ; /usr/bin/python3
Dec 05 08:02:51 np0005546420.localdomain sudo[47411]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:51 np0005546420.localdomain python3[47413]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:02:51 np0005546420.localdomain sudo[47411]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:51 np0005546420.localdomain sudo[47454]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-psojvjgvtaknareyxjznwayrisdaakif ; /usr/bin/python3
Dec 05 08:02:51 np0005546420.localdomain sudo[47454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:52 np0005546420.localdomain python3[47456]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921771.4322863-81089-214969647408396/source _original_basename=tmp472e2bry follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:02:52 np0005546420.localdomain sudo[47454]: pam_unix(sudo:session): session closed for user root
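The copy above refreshes /etc/puppet/hieradata/config_step.json, the hiera input that tells each container-puppet run which deployment step it is executing. To see what the host currently believes (file path taken from the task above):

    cat /etc/puppet/hieradata/config_step.json
    # a small JSON map that includes the current "step" number;
    # the container_puppet_config call at 08:02:56 below runs with step=6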
Dec 05 08:02:52 np0005546420.localdomain systemd[36358]: Created slice User Background Tasks Slice.
Dec 05 08:02:52 np0005546420.localdomain systemd[36358]: Starting Cleanup of User's Temporary Files and Directories...
Dec 05 08:02:52 np0005546420.localdomain systemd[36358]: Finished Cleanup of User's Temporary Files and Directories.
Dec 05 08:02:52 np0005546420.localdomain sudo[47484]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nmgbokiuljbuzxvgopqrhswdimdtttig ; /usr/bin/python3
Dec 05 08:02:52 np0005546420.localdomain sudo[47484]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:52 np0005546420.localdomain python3[47487]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:02:52 np0005546420.localdomain sudo[47484]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:53 np0005546420.localdomain sudo[47535]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rqyvukyldrrmesaqcxmgjeepwcljahka ; /usr/bin/python3
Dec 05 08:02:53 np0005546420.localdomain sudo[47535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:53 np0005546420.localdomain sudo[47535]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:53 np0005546420.localdomain sudo[47578]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nomrlumzctkubjuwvglzyybabbljshni ; /usr/bin/python3
Dec 05 08:02:53 np0005546420.localdomain sudo[47578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:54 np0005546420.localdomain sudo[47578]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:54 np0005546420.localdomain sudo[47608]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mvnftradihkugrnzynpawujazzfzsgua ; /usr/bin/python3
Dec 05 08:02:54 np0005546420.localdomain sudo[47608]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:54 np0005546420.localdomain python3[47610]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:02:54 np0005546420.localdomain sudo[47608]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:55 np0005546420.localdomain sudo[47656]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mppxtymskualihywxxygtlrefsvpicpo ; /usr/bin/python3
Dec 05 08:02:55 np0005546420.localdomain sudo[47656]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:55 np0005546420.localdomain sudo[47656]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:55 np0005546420.localdomain sudo[47699]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbbhgwvfmystrrganhbvjclnkrtlgnyb ; /usr/bin/python3
Dec 05 08:02:55 np0005546420.localdomain sudo[47699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:55 np0005546420.localdomain sudo[47699]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:56 np0005546420.localdomain sudo[47729]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vkmvzybnrvcfjpxwtkkubhbcawnhqwel ; /usr/bin/python3
Dec 05 08:02:56 np0005546420.localdomain sudo[47729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:56 np0005546420.localdomain python3[47731]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 05 08:02:56 np0005546420.localdomain sudo[47729]: pam_unix(sudo:session): session closed for user root
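container_puppet_config with update_config_hash_only=True does not re-run puppet; it recomputes the checksum over each /var/lib/config-data/puppet-generated/<service> tree so containers whose TRIPLEO_CONFIG_HASH environment no longer matches are restarted in a later step. Roughly equivalent by hand (a sketch only; the module walks the trees in Python):

    # order-stable checksum of one service's generated config tree
    find /var/lib/config-data/puppet-generated/nova_libvirt -type f -print0 \
      | sort -z | xargs -0 md5sum | md5sum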
Dec 05 08:02:58 np0005546420.localdomain sudo[47745]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixjzqkvjrijcgmxmbmufklazinnujvdz ; /usr/bin/python3
Dec 05 08:02:58 np0005546420.localdomain sudo[47745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:58 np0005546420.localdomain python3[47747]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:02:58 np0005546420.localdomain sudo[47745]: pam_unix(sudo:session): session closed for user root
Dec 05 08:02:59 np0005546420.localdomain sudo[47762]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzaunbqkvpqfanowerrikigmrsgankmp ; /usr/bin/python3
Dec 05 08:02:59 np0005546420.localdomain sudo[47762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:02:59 np0005546420.localdomain python3[47764]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
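Installing systemd-container (machinectl/systemd-nspawn support) is the likely trigger for the burst that follows: dbus-broker reloads, a systemd re-exec at 08:03:03, an SELinux policy reload at 08:03:11, and a man-db cache update. To confirm the package and see any scriptlets that fire the reloads:

    rpm -q systemd-container
    rpm -q --scripts systemd-container | head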
Dec 05 08:03:03 np0005546420.localdomain dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Dec 05 08:03:03 np0005546420.localdomain dbus-broker-launch[18460]: Noticed file-system modification, trigger reload.
Dec 05 08:03:03 np0005546420.localdomain dbus-broker-launch[18460]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 05 08:03:03 np0005546420.localdomain dbus-broker-launch[18460]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 05 08:03:03 np0005546420.localdomain dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Dec 05 08:03:03 np0005546420.localdomain systemd[1]: Reexecuting.
Dec 05 08:03:03 np0005546420.localdomain systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 05 08:03:03 np0005546420.localdomain systemd[1]: Detected virtualization kvm.
Dec 05 08:03:03 np0005546420.localdomain systemd[1]: Detected architecture x86-64.
Dec 05 08:03:03 np0005546420.localdomain systemd-rc-local-generator[47817]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:03:03 np0005546420.localdomain systemd-sysv-generator[47821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:03:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
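The three warnings in this reload recur throughout the log and are harmless here: rc.local is skipped because it is not executable, the legacy 'network' init script gets an auto-generated compatibility unit, and insights-client-boot.service still uses the deprecated MemoryLimit=. Silencing the last one would be a standard drop-in override (hypothetical values; nothing in this log creates such a file):

    mkdir -p /etc/systemd/system/insights-client-boot.service.d
    printf '[Service]\nMemoryLimit=\nMemoryMax=512M\n' \
      > /etc/systemd/system/insights-client-boot.service.d/memorymax.conf   # 512M is a placeholder
    systemctl daemon-reload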
Dec 05 08:03:11 np0005546420.localdomain kernel: SELinux:  Converting 2707 SID table entries...
Dec 05 08:03:12 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 08:03:12 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 08:03:12 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 08:03:12 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 08:03:12 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 08:03:12 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 08:03:12 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
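The "Converting 2707 SID table entries" block is the kernel loading a freshly built SELinux policy, most plausibly because the package transaction above shipped or re-triggered a policy module; the dbus-broker line below confirms the load succeeded (op=load_policy ... res=1). Two read-only checks of the resulting state:

    sestatus
    semodule -l | grep -i container || true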
Dec 05 08:03:12 np0005546420.localdomain dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Dec 05 08:03:12 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 05 08:03:12 np0005546420.localdomain dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Dec 05 08:03:13 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 08:03:13 np0005546420.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 05 08:03:13 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:03:13 np0005546420.localdomain systemd-rc-local-generator[47917]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:03:13 np0005546420.localdomain systemd-sysv-generator[47921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:03:13 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 08:03:14 np0005546420.localdomain systemd-journald[619]: Journal stopped
Dec 05 08:03:14 np0005546420.localdomain systemd-journald[619]: Received SIGTERM from PID 1 (systemd).
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Stopping Journal Service...
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Stopped Journal Service.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: systemd-journald.service: Consumed 1.854s CPU time.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Starting Journal Service...
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: systemd-udevd.service: Consumed 3.203s CPU time.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 05 08:03:14 np0005546420.localdomain systemd-journald[48245]: Journal started
Dec 05 08:03:14 np0005546420.localdomain systemd-journald[48245]: Runtime Journal (/run/log/journal/d70e7573f9252a22999953aab4dc4dc5) is 12.2M, max 314.7M, 302.5M free.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Started Journal Service.
Dec 05 08:03:14 np0005546420.localdomain systemd-journald[48245]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Dec 05 08:03:14 np0005546420.localdomain systemd-journald[48245]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
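journald came back after the re-exec and immediately rotated /run's system.journal because its field hash table was 75% full; this is routine housekeeping, not data loss. Two read-only checks:

    journalctl --disk-usage
    journalctl --header --file /run/log/journal/*/system.journal | head -n 20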
Dec 05 08:03:14 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 08:03:14 np0005546420.localdomain systemd-udevd[48255]: Using default interface naming scheme 'rhel-9.0'.
Dec 05 08:03:14 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:03:14 np0005546420.localdomain systemd-rc-local-generator[48793]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:03:14 np0005546420.localdomain systemd-sysv-generator[48796]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:03:14 np0005546420.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 08:03:15 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 08:03:15 np0005546420.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 05 08:03:15 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.355s CPU time.
Dec 05 08:03:15 np0005546420.localdomain systemd[1]: run-r63b85cc17d584cc3993aa0d9e0a068fa.service: Deactivated successfully.
Dec 05 08:03:15 np0005546420.localdomain systemd[1]: run-re57b98507993411d82e94ad0628bc351.service: Deactivated successfully.
Dec 05 08:03:16 np0005546420.localdomain sudo[47762]: pam_unix(sudo:session): session closed for user root
Dec 05 08:03:16 np0005546420.localdomain sudo[49254]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-emngwxlsncciwjqeizrfwrrdlfvvezll ; /usr/bin/python3
Dec 05 08:03:16 np0005546420.localdomain sudo[49254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:03:16 np0005546420.localdomain python3[49256]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Dec 05 08:03:16 np0005546420.localdomain sudo[49254]: pam_unix(sudo:session): session closed for user root
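The sysctl task above enables vm.unprivileged_userfaultfd so QEMU, running unprivileged, can use userfaultfd for post-copy live migration; TripleO persists it in its own sysctl.d fragment. Verifying both the persisted and the live value:

    cat /etc/sysctl.d/99-tripleo-postcopy.conf
    sysctl vm.unprivileged_userfaultfd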
Dec 05 08:03:16 np0005546420.localdomain sudo[49273]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nsmrldjxtwntejfapassrxhuzsvrqjvd ; /usr/bin/python3
Dec 05 08:03:16 np0005546420.localdomain sudo[49273]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:03:17 np0005546420.localdomain python3[49275]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:03:17 np0005546420.localdomain sudo[49273]: pam_unix(sudo:session): session closed for user root
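The `systemctl is-active ksm.service || systemctl is-enabled ksm.service` probe succeeds if KSM is either running or enabled, which the compute role uses to decide whether kernel same-page merging needs switching off (TripleO disables it by default on compute nodes). The exit codes are the whole interface:

    systemctl is-active ksm.service;  echo "is-active rc=$?"
    systemctl is-enabled ksm.service; echo "is-enabled rc=$?"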
Dec 05 08:03:17 np0005546420.localdomain sudo[49291]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikuzncilfnszhixcbmojlaienmzqlfhv ; /usr/bin/python3
Dec 05 08:03:17 np0005546420.localdomain sudo[49291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:03:17 np0005546420.localdomain python3[49293]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:03:17 np0005546420.localdomain python3[49293]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Dec 05 08:03:18 np0005546420.localdomain python3[49293]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Dec 05 08:03:25 np0005546420.localdomain podman[49305]: 2025-12-05 08:03:18.055089811 +0000 UTC m=+0.036056574 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 05 08:03:25 np0005546420.localdomain python3[49293]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect bac901955dcf7a32a493c6ef724c092009bbc18467858aa8c55e916b8c2b2b8f --format json
Dec 05 08:03:25 np0005546420.localdomain sudo[49291]: pam_unix(sudo:session): session closed for user root
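Every image refresh from here on repeats the same three calls the podman_image module debug-logs: `image ls` to check the cache, a forced `pull` (the module's validate_certs=False becomes --tls-verify=false), and an `inspect` of the returned digest. The same sequence by hand, for the image just pulled:

    img=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
    podman image ls "$img" --format json
    podman pull -q --tls-verify=false "$img"
    podman image inspect "$img" --format '{{.Id}}'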
Dec 05 08:03:25 np0005546420.localdomain sudo[49404]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjdffuhihsnhtntvbcunijqujszbtavi ; /usr/bin/python3
Dec 05 08:03:25 np0005546420.localdomain sudo[49404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:03:25 np0005546420.localdomain python3[49406]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:03:25 np0005546420.localdomain python3[49406]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Dec 05 08:03:25 np0005546420.localdomain python3[49406]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Dec 05 08:03:32 np0005546420.localdomain podman[49420]: 2025-12-05 08:03:25.744910211 +0000 UTC m=+0.040068595 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 05 08:03:32 np0005546420.localdomain python3[49406]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 44feaf8d87c1d40487578230316b622680576d805efdb45dfeea6aad464b41f1 --format json
Dec 05 08:03:32 np0005546420.localdomain sudo[49404]: pam_unix(sudo:session): session closed for user root
Dec 05 08:03:32 np0005546420.localdomain sudo[49520]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jivhvbvthqlnmamzujeisxxzgcdslwue ; /usr/bin/python3
Dec 05 08:03:32 np0005546420.localdomain sudo[49520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:03:33 np0005546420.localdomain python3[49522]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:03:33 np0005546420.localdomain python3[49522]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Dec 05 08:03:33 np0005546420.localdomain python3[49522]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Dec 05 08:03:48 np0005546420.localdomain sudo[50242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:03:48 np0005546420.localdomain sudo[50242]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:03:48 np0005546420.localdomain sudo[50242]: pam_unix(sudo:session): session closed for user root
Dec 05 08:03:48 np0005546420.localdomain sudo[50257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 08:03:48 np0005546420.localdomain sudo[50257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
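Interleaved with the image pulls, the Ceph orchestrator is inventorying this host: cephadm's `ls` lists every ceph daemon deployed here as JSON (the crash-collector exec at 08:03:53 below is part of the same sweep), and `gather-facts` follows at 08:03:53. The same call run manually, with the fsid path and script hash copied verbatim from the log:

    sudo /bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 \
      --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls \
      | python3 -m json.tool | head -n 20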
Dec 05 08:03:52 np0005546420.localdomain podman[49535]: 2025-12-05 08:03:33.204765159 +0000 UTC m=+0.036518896 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:03:52 np0005546420.localdomain python3[49522]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3a088c12511c977065fdc5f1594cba7b1a79f163578a6ffd0ac4a475b8e67938 --format json
Dec 05 08:03:52 np0005546420.localdomain sudo[49520]: pam_unix(sudo:session): session closed for user root
Dec 05 08:03:53 np0005546420.localdomain sudo[50366]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-anvtwcdqaubgjhnpldkiamamxwejkwxg ; /usr/bin/python3
Dec 05 08:03:53 np0005546420.localdomain sudo[50366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:03:53 np0005546420.localdomain podman[50370]: 2025-12-05 08:03:53.185455096 +0000 UTC m=+0.082690928 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=)
Dec 05 08:03:53 np0005546420.localdomain python3[50377]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:03:53 np0005546420.localdomain python3[50377]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Dec 05 08:03:53 np0005546420.localdomain python3[50377]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Dec 05 08:03:53 np0005546420.localdomain podman[50370]: 2025-12-05 08:03:53.320761336 +0000 UTC m=+0.217997148 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 08:03:53 np0005546420.localdomain sudo[50257]: pam_unix(sudo:session): session closed for user root
Dec 05 08:03:53 np0005546420.localdomain sudo[50458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:03:53 np0005546420.localdomain sudo[50458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:03:53 np0005546420.localdomain sudo[50458]: pam_unix(sudo:session): session closed for user root
Dec 05 08:03:53 np0005546420.localdomain sudo[50473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:03:53 np0005546420.localdomain sudo[50473]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:03:54 np0005546420.localdomain sudo[50473]: pam_unix(sudo:session): session closed for user root
Dec 05 08:03:54 np0005546420.localdomain sudo[50544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:03:54 np0005546420.localdomain sudo[50544]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:03:54 np0005546420.localdomain sudo[50544]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:01 np0005546420.localdomain anacron[6196]: Job `cron.monthly' started
Dec 05 08:04:01 np0005546420.localdomain anacron[6196]: Job `cron.monthly' terminated
Dec 05 08:04:01 np0005546420.localdomain anacron[6196]: Normal exit (3 jobs run)
Dec 05 08:04:06 np0005546420.localdomain podman[50406]: 2025-12-05 08:03:53.366066253 +0000 UTC m=+0.050162722 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 05 08:04:06 np0005546420.localdomain python3[50377]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 514d439186251360cf734cbc6d4a44c834664891872edf3798a653dfaacf10c0 --format json
Dec 05 08:04:06 np0005546420.localdomain sudo[50366]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:06 np0005546420.localdomain sudo[50599]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bmvbnqbixqsjwtiwcxhgaobxghaqsycl ; /usr/bin/python3
Dec 05 08:04:06 np0005546420.localdomain sudo[50599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:06 np0005546420.localdomain python3[50601]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:04:06 np0005546420.localdomain python3[50601]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Dec 05 08:04:06 np0005546420.localdomain python3[50601]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Dec 05 08:04:17 np0005546420.localdomain podman[50614]: 2025-12-05 08:04:06.619421971 +0000 UTC m=+0.051380228 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 05 08:04:17 np0005546420.localdomain python3[50601]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a9dd7a2ac6f35cb086249f87f74e2f8e74e7e2ad5141ce2228263be6faedce26 --format json
Dec 05 08:04:17 np0005546420.localdomain sudo[50599]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:18 np0005546420.localdomain sudo[50956]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxwvrleossbkcvfyctvuswcjqkqupawc ; /usr/bin/python3
Dec 05 08:04:18 np0005546420.localdomain sudo[50956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:18 np0005546420.localdomain python3[50958]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:04:18 np0005546420.localdomain python3[50958]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Dec 05 08:04:18 np0005546420.localdomain python3[50958]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Dec 05 08:04:22 np0005546420.localdomain podman[50970]: 2025-12-05 08:04:18.355240536 +0000 UTC m=+0.049321017 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 05 08:04:22 np0005546420.localdomain python3[50958]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 24976907b2c2553304119aba5731a800204d664feed24ca9eb7f2b4c7d81016b --format json
Dec 05 08:04:23 np0005546420.localdomain sudo[50956]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:23 np0005546420.localdomain sudo[51044]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-veoegpoxyrminzwjlcdroelbhnhjtuhs ; /usr/bin/python3
Dec 05 08:04:23 np0005546420.localdomain sudo[51044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:23 np0005546420.localdomain python3[51046]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:04:23 np0005546420.localdomain python3[51046]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Dec 05 08:04:23 np0005546420.localdomain python3[51046]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Dec 05 08:04:25 np0005546420.localdomain podman[51060]: 2025-12-05 08:04:23.476559634 +0000 UTC m=+0.044912425 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 05 08:04:25 np0005546420.localdomain python3[51046]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 57163a7b21fdbb804a27897cb6e6052a5e5c7a339c45d663e80b52375a760dcf --format json
Dec 05 08:04:25 np0005546420.localdomain sudo[51044]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:25 np0005546420.localdomain sudo[51135]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dozvldftcijuevrovoutjxhqkxkrlhzd ; /usr/bin/python3
Dec 05 08:04:25 np0005546420.localdomain sudo[51135]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:26 np0005546420.localdomain python3[51137]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:04:26 np0005546420.localdomain python3[51137]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Dec 05 08:04:26 np0005546420.localdomain python3[51137]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Dec 05 08:04:28 np0005546420.localdomain podman[51150]: 2025-12-05 08:04:26.282416169 +0000 UTC m=+0.047054469 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 05 08:04:28 np0005546420.localdomain python3[51137]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 076d82a27d63c8328729ed27ceb4291585ae18d017befe6fe353df7aa11715ae --format json
Dec 05 08:04:28 np0005546420.localdomain sudo[51135]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:28 np0005546420.localdomain sudo[51224]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mclolkljykqlfceeytdmgyplrpwkhymb ; /usr/bin/python3
Dec 05 08:04:28 np0005546420.localdomain sudo[51224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:28 np0005546420.localdomain python3[51226]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:04:28 np0005546420.localdomain python3[51226]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Dec 05 08:04:28 np0005546420.localdomain python3[51226]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Dec 05 08:04:31 np0005546420.localdomain podman[51238]: 2025-12-05 08:04:28.670338635 +0000 UTC m=+0.045129432 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 05 08:04:31 np0005546420.localdomain python3[51226]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d0dbcb95546840a8d088df044347a7877ad5ea45a2ddba0578e9bb5de4ab0da5 --format json
Dec 05 08:04:31 np0005546420.localdomain sudo[51224]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:31 np0005546420.localdomain sudo[51314]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jnrdvcglxylleyugigtvhnbotbxbgmkk ; /usr/bin/python3
Dec 05 08:04:31 np0005546420.localdomain sudo[51314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:31 np0005546420.localdomain python3[51316]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:04:31 np0005546420.localdomain python3[51316]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Dec 05 08:04:31 np0005546420.localdomain python3[51316]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Dec 05 08:04:35 np0005546420.localdomain podman[51328]: 2025-12-05 08:04:31.835531537 +0000 UTC m=+0.041323419 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 05 08:04:35 np0005546420.localdomain python3[51316]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e6e981540e553415b2d6eda490d7683db07164af2e7a0af8245623900338a4d6 --format json
Dec 05 08:04:35 np0005546420.localdomain sudo[51314]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:35 np0005546420.localdomain sudo[51416]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhkyvhjageszwwcnciimeyxtzhfvuvtg ; /usr/bin/python3
Dec 05 08:04:35 np0005546420.localdomain sudo[51416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:35 np0005546420.localdomain python3[51418]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 05 08:04:35 np0005546420.localdomain python3[51418]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Dec 05 08:04:35 np0005546420.localdomain python3[51418]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Dec 05 08:04:37 np0005546420.localdomain podman[51430]: 2025-12-05 08:04:35.658508061 +0000 UTC m=+0.048840597 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 05 08:04:37 np0005546420.localdomain python3[51418]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 87ee88cbf01fb42e0b22747072843bcca6130a90eda4de6e74b3ccd847bb4040 --format json
Dec 05 08:04:37 np0005546420.localdomain sudo[51416]: pam_unix(sudo:session): session closed for user root
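The four image pulls above (rsyslog, ceilometer-compute, ovn-controller, cron) all follow the same containers.podman.podman_image sequence that the PODMAN-IMAGE-DEBUG lines spell out: list the image locally, pull it (validate_certs=False becomes --tls-verify=false), then inspect the returned ID. A minimal sketch of that check-pull-inspect loop with the podman CLI; the image list is taken from the log, the helper name is ours:

    import json
    import subprocess

    # Images pulled in the log above; validate_certs=False maps to --tls-verify=false.
    IMAGES = [
        "registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1",
        "registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1",
        "registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1",
        "registry.redhat.io/rhosp-rhel9/openstack-cron:17.1",
    ]

    def ensure_image(image: str) -> dict:
        # 1. podman image ls <image> --format json  (is it already local?)
        subprocess.run(["podman", "image", "ls", image, "--format", "json"],
                       check=True, capture_output=True)
        # 2. podman pull <image> -q --tls-verify=false  (force=True pulls regardless)
        pulled = subprocess.run(["podman", "pull", image, "-q", "--tls-verify=false"],
                                check=True, capture_output=True, text=True)
        image_id = pulled.stdout.strip()
        # 3. podman inspect <id> --format json  (what the module returns as facts)
        inspected = subprocess.run(["podman", "inspect", image_id, "--format", "json"],
                                   check=True, capture_output=True, text=True)
        return json.loads(inspected.stdout)[0]

    for img in IMAGES:
        ensure_image(img)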
Dec 05 08:04:38 np0005546420.localdomain sudo[51505]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njwnetpwfhlbsdpixmwfqxpmhfcqktuy ; /usr/bin/python3
Dec 05 08:04:38 np0005546420.localdomain sudo[51505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:38 np0005546420.localdomain python3[51507]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:04:38 np0005546420.localdomain sudo[51505]: pam_unix(sudo:session): session closed for user root
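The ansible-stat call above only checks whether the step_1 startup-config directory exists; reading it as a gate for the per-step tasks that follow is our interpretation. The equivalent check is a one-liner:

    from pathlib import Path

    # Path from the ansible-stat invocation above; the "gate" reading is an assumption.
    STEP_MARKER = Path("/var/lib/tripleo-config/container-startup-config/step_1")
    step1_present = STEP_MARKER.exists()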
Dec 05 08:04:38 np0005546420.localdomain sudo[51555]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ooenluoxsxugevhlxhaewttabpqboyxf ; /usr/bin/python3
Dec 05 08:04:38 np0005546420.localdomain sudo[51555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:38 np0005546420.localdomain sudo[51555]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:39 np0005546420.localdomain sudo[51573]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbzrcszqjbklioqojgdukctcqvcfspot ; /usr/bin/python3
Dec 05 08:04:39 np0005546420.localdomain sudo[51573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:39 np0005546420.localdomain sudo[51573]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:39 np0005546420.localdomain sudo[51677]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewfddqwvoapmtwlpigyhxpxngrfwkovs ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921879.4358041-84074-174855904508791/async_wrapper.py 653775459126 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921879.4358041-84074-174855904508791/AnsiballZ_command.py _
Dec 05 08:04:39 np0005546420.localdomain sudo[51677]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 05 08:04:39 np0005546420.localdomain ansible-async_wrapper.py[51679]: Invoked with 653775459126 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921879.4358041-84074-174855904508791/AnsiballZ_command.py _
Dec 05 08:04:39 np0005546420.localdomain ansible-async_wrapper.py[51682]: Starting module and watcher
Dec 05 08:04:39 np0005546420.localdomain ansible-async_wrapper.py[51682]: Start watching 51683 (3600)
Dec 05 08:04:39 np0005546420.localdomain ansible-async_wrapper.py[51683]: Start module (51683)
Dec 05 08:04:39 np0005546420.localdomain ansible-async_wrapper.py[51679]: Return async_wrapper task started.
Dec 05 08:04:39 np0005546420.localdomain sudo[51677]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:40 np0005546420.localdomain sudo[51698]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkyexrrtbbrxqrvgjuoawsqczepwyanp ; /usr/bin/python3
Dec 05 08:04:40 np0005546420.localdomain sudo[51698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:40 np0005546420.localdomain python3[51700]: ansible-ansible.legacy.async_status Invoked with jid=653775459126.51679 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:04:40 np0005546420.localdomain sudo[51698]: pam_unix(sudo:session): session closed for user root
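The launch at 08:04:39 and the async_status call above are Ansible's fire-and-poll pattern: async_wrapper.py forks the real module (AnsiballZ_command.py), writes its result under ANSIBLE_ASYNC_DIR keyed by job id (653775459126.51679 here, with a 3600 s budget), and returns immediately; the controller then polls async_status until the job reports finished. A rough sketch of the polling side, assuming only the job id and async dir from the log:

    import json
    import time
    from pathlib import Path

    ASYNC_DIR = Path("/tmp/.ansible_async")   # _async_dir from the log
    JID = "653775459126.51679"                # jid from the async_status call

    def poll(jid: str, timeout: int = 3600, interval: int = 10) -> dict:
        # Mimic ansible.legacy.async_status: read the job file until 'finished'.
        deadline = time.time() + timeout
        while time.time() < deadline:
            result = json.loads((ASYNC_DIR / jid).read_text())
            if result.get("finished"):
                return result
            time.sleep(interval)   # the log shows ~10 s between polls
        raise TimeoutError(f"async job {jid} did not finish in {timeout}s")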
Dec 05 08:04:43 np0005546420.localdomain puppet-user[51703]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:04:43 np0005546420.localdomain puppet-user[51703]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:04:43 np0005546420.localdomain puppet-user[51703]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:04:43 np0005546420.localdomain puppet-user[51703]:    (file & line not available)
Dec 05 08:04:43 np0005546420.localdomain puppet-user[51703]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:04:43 np0005546420.localdomain puppet-user[51703]:    (file & line not available)
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.13 seconds
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Notice: Applied catalog in 0.24 seconds
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Application:
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:    Initial environment: production
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:    Converged environment: production
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:          Run mode: user
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Changes:
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:             Total: 3
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Events:
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:           Success: 3
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:             Total: 3
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Resources:
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:           Changed: 3
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:       Out of sync: 3
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:             Total: 10
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Time:
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:          Schedule: 0.00
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:              File: 0.00
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:              Exec: 0.01
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:            Augeas: 0.14
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:    Config retrieval: 0.16
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:    Transaction evaluation: 0.17
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:    Catalog application: 0.24
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:          Last run: 1764921884
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:        Filebucket: 0.00
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:             Total: 0.24
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]: Version:
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:            Config: 1764921883
Dec 05 08:04:44 np0005546420.localdomain puppet-user[51703]:            Puppet: 7.10.0
Dec 05 08:04:44 np0005546420.localdomain ansible-async_wrapper.py[51683]: Module complete (51683)
Dec 05 08:04:44 np0005546420.localdomain ansible-async_wrapper.py[51682]: Done in kid B.
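The async job being polled was a host puppet apply: after the hiera/lookup deprecation warnings, the catalog compiles in 0.13 s and three resources change (the exec creating /etc/my.cnf.d, the /etc/my.cnf.d/tripleo.cnf file, and the augeas edit) out of 10 total, with zero failures. A small sketch that folds the journal's summary lines back into numbers, using the text printed above verbatim:

    import re

    # puppet-user summary lines from the journal above, verbatim (abridged).
    SUMMARY = """
    Changes:
                Total: 3
    Events:
              Success: 3
                Total: 3
    Resources:
              Changed: 3
          Out of sync: 3
                Total: 10
    """

    def parse_summary(text: str) -> dict:
        # Fold 'Section: / Key: value' pairs into {section: {key: number}}.
        data, section = {}, None
        for line in text.splitlines():
            if not line.strip():
                continue
            key, _, value = line.partition(":")
            if not value.strip():            # bare 'Events:' style section header
                section = key.strip()
                data[section] = {}
            else:
                data[section][key.strip()] = float(value)
        return data

    stats = parse_summary(SUMMARY)
    assert stats["Events"].get("Failure", 0) == 0   # no failures reported above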
Dec 05 08:04:50 np0005546420.localdomain sudo[51828]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-idpobvxlvztfmhrfgdkzotjkeaullrpr ; /usr/bin/python3
Dec 05 08:04:50 np0005546420.localdomain sudo[51828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:50 np0005546420.localdomain python3[51830]: ansible-ansible.legacy.async_status Invoked with jid=653775459126.51679 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:04:50 np0005546420.localdomain sudo[51828]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:51 np0005546420.localdomain sudo[51844]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iasykdzdxusiopyuygcdzbtqgzxbomwh ; /usr/bin/python3
Dec 05 08:04:51 np0005546420.localdomain sudo[51844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:51 np0005546420.localdomain python3[51846]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:04:51 np0005546420.localdomain sudo[51844]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:51 np0005546420.localdomain sudo[51860]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lstehqpjqhgpbspwtcrufqlcssqgkfyc ; /usr/bin/python3
Dec 05 08:04:51 np0005546420.localdomain sudo[51860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:51 np0005546420.localdomain python3[51862]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:04:51 np0005546420.localdomain sudo[51860]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:51 np0005546420.localdomain sudo[51908]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uizsqbptlpxehririoukuosnpgekczap ; /usr/bin/python3
Dec 05 08:04:51 np0005546420.localdomain sudo[51908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:52 np0005546420.localdomain python3[51910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:04:52 np0005546420.localdomain sudo[51908]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:52 np0005546420.localdomain sudo[51951]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezfgcosbevthxalvrpwltjfilwlebbxk ; /usr/bin/python3
Dec 05 08:04:52 np0005546420.localdomain sudo[51951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:52 np0005546420.localdomain python3[51953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921891.7457402-84349-83024240300923/source _original_basename=tmpn7cwanux follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:04:52 np0005546420.localdomain sudo[51951]: pam_unix(sudo:session): session closed for user root
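The three calls between 08:04:51 and 08:04:52 are Ansible's standard copy handshake: stat the destination, stat it again via ansible.legacy.stat with a sha1 checksum, then ship the file only if the checksums differ; the copy above records the new content's sha1 (53908622...) and applies setype=svirt_sandbox_file_t so containers can read it. A minimal sketch of the checksum gate, assuming only the destination path from the log:

    import hashlib
    import shutil
    from pathlib import Path

    DEST = Path("/var/lib/container-puppet/puppetlabs/facter.conf")

    def sha1_of(path: Path) -> str | None:
        if not path.exists():
            return None
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def copy_if_changed(src: Path, dest: Path) -> bool:
        # Copy src over dest only when the sha1 checksums differ (idempotent).
        if sha1_of(src) == sha1_of(dest):
            return False            # unchanged; Ansible reports ok, not changed
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)     # the SELinux setype is applied separately
        return True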
Dec 05 08:04:52 np0005546420.localdomain sudo[51981]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mtyiadpshcvqraagunxrtdqfuvyvufro ; /usr/bin/python3
Dec 05 08:04:52 np0005546420.localdomain sudo[51981]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:52 np0005546420.localdomain python3[51983]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:04:52 np0005546420.localdomain sudo[51981]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:52 np0005546420.localdomain sudo[51997]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntflgzegtrruwtzfmnfaczazyfjpdknx ; /usr/bin/python3
Dec 05 08:04:52 np0005546420.localdomain sudo[51997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:53 np0005546420.localdomain sudo[51997]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:53 np0005546420.localdomain sudo[52084]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dpsuvoifqkdxmqguwxfheicnkfbsqhve ; /usr/bin/python3
Dec 05 08:04:53 np0005546420.localdomain sudo[52084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:53 np0005546420.localdomain python3[52086]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 05 08:04:53 np0005546420.localdomain sudo[52084]: pam_unix(sudo:session): session closed for user root
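ansible.posix.synchronize is a thin wrapper over rsync; with archive=True and compress=True the invocation above amounts to an archive-mode push of /opt/puppetlabs/ into the bind-mount source directory (the trailing slash on src means "contents of"). The equivalent command, with flags derived from the module parameters and nothing else assumed:

    import subprocess

    # archive=True -> -a, compress=True -> -z; src/dest taken from the log line.
    subprocess.run(
        ["rsync", "-az", "/opt/puppetlabs/", "/var/lib/container-puppet/puppetlabs/"],
        check=True,
    )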
Dec 05 08:04:54 np0005546420.localdomain sudo[52103]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlfeermciofksbxezwkhfovwgqhuhhdt ; /usr/bin/python3
Dec 05 08:04:54 np0005546420.localdomain sudo[52103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:54 np0005546420.localdomain python3[52105]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 08:04:54 np0005546420.localdomain sudo[52103]: pam_unix(sudo:session): session closed for user root
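The mode=448 in the file task above is not a typo: Ansible received the permission bits as a decimal integer, and 448 decimal is 0o700 octal (owner rwx only), which is why the usual advice is to quote octal modes as strings ('0700'). Easy to confirm:

    assert oct(448) == "0o700"       # mode=448 above == chmod 0700
    assert int("0700", 8) == 448     # the quoted-string form keeps it octal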
Dec 05 08:04:54 np0005546420.localdomain sudo[52119]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irairzxkbrxyhfajmbwckajtgkwkpxpk ; /usr/bin/python3
Dec 05 08:04:54 np0005546420.localdomain sudo[52119]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:54 np0005546420.localdomain python3[52121]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005546420 step=1 update_config_hash_only=False
Dec 05 08:04:54 np0005546420.localdomain sudo[52119]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:54 np0005546420.localdomain sudo[52122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:04:54 np0005546420.localdomain sudo[52148]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnfsouzfxlrftygemnqptmvlnwmrhrcd ; /usr/bin/python3
Dec 05 08:04:54 np0005546420.localdomain sudo[52122]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:04:54 np0005546420.localdomain sudo[52148]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:54 np0005546420.localdomain sudo[52122]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:55 np0005546420.localdomain sudo[52153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:04:55 np0005546420.localdomain sudo[52153]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:04:55 np0005546420.localdomain python3[52152]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:04:55 np0005546420.localdomain sudo[52148]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:55 np0005546420.localdomain sudo[52188]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bxiocgrvyzpejikwqcnzshgmhrxcngbk ; /usr/bin/python3
Dec 05 08:04:55 np0005546420.localdomain sudo[52188]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:55 np0005546420.localdomain python3[52195]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 05 08:04:55 np0005546420.localdomain sudo[52188]: pam_unix(sudo:session): session closed for user root
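container_puppet_config (08:04:54) writes per-container JSON under /var/lib/tripleo-config/container-puppet-config/step_1, and container_config_data (08:04:55) reads everything matching container-puppet-*.json back as one structure. A minimal sketch of that glob-and-merge read; the path and pattern come from the log, the merge semantics are an assumption:

    import json
    from pathlib import Path

    CONFIG_PATH = Path("/var/lib/tripleo-config/container-puppet-config/step_1")

    def load_container_configs(pattern: str = "container-puppet-*.json") -> dict:
        # Merge every matching JSON file into one {container_name: config} dict.
        merged: dict = {}
        for config_file in sorted(CONFIG_PATH.glob(pattern)):
            merged.update(json.loads(config_file.read_text()))
        return merged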
Dec 05 08:04:55 np0005546420.localdomain sudo[52153]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:56 np0005546420.localdomain sudo[52228]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-varhncmnxhpspgwwzujfizbpdhmwudgh ; /usr/bin/python3
Dec 05 08:04:56 np0005546420.localdomain sudo[52228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:56 np0005546420.localdomain python3[52230]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 08:04:56 np0005546420.localdomain sudo[52228]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:56 np0005546420.localdomain sudo[52268]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvqdklezfmfpfvppajoxvcnfgsywqgrn ; /usr/bin/python3
Dec 05 08:04:56 np0005546420.localdomain sudo[52268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:04:57 np0005546420.localdomain python3[52270]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
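After podman_container_info (08:04:56, name=None, i.e. inventory everything), tripleo_container_manage is invoked with concurrency=6, which is why the image pulls for nova-libvirt, cron, collectd, iscsid and qdrouterd below all start within about 20 ms of 08:04:57.27. A hedged sketch of that bounded fan-out with a thread pool; the job names are the containers created below, and the worker is a stand-in, not the module's real runner:

    from concurrent.futures import ThreadPoolExecutor

    # container-puppet jobs visible below; concurrency=6 from the module call above
    JOBS = ["nova_libvirt", "crond", "metrics_qdr", "iscsid", "collectd"]

    def run_container_puppet(name: str) -> str:
        # Stand-in for the real work: the module renders each job's config_data
        # into a 'podman run --net host ... container-puppet.sh' invocation.
        return f"container-puppet-{name}"

    with ThreadPoolExecutor(max_workers=6) as pool:   # concurrency=6
        results = list(pool.map(run_container_puppet, JOBS))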
Dec 05 08:04:57 np0005546420.localdomain podman[52442]: 2025-12-05 08:04:57.275151202 +0000 UTC m=+0.032205008 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:04:57 np0005546420.localdomain podman[52452]: 2025-12-05 08:04:57.278257899 +0000 UTC m=+0.027704947 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 05 08:04:58 np0005546420.localdomain podman[52452]: 2025-12-05 08:04:58.065915007 +0000 UTC m=+0.815362075 container create c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, container_name=container-puppet-crond, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.12, tcib_managed=true, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:04:58 np0005546420.localdomain podman[52462]: 2025-12-05 08:04:57.289542061 +0000 UTC m=+0.033668123 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 05 08:04:58 np0005546420.localdomain podman[52473]: 2025-12-05 08:04:57.291007647 +0000 UTC m=+0.028305635 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 05 08:04:58 np0005546420.localdomain podman[52442]: 2025-12-05 08:04:58.101485048 +0000 UTC m=+0.858538874 container create 5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, container_name=container-puppet-nova_libvirt, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git)
Dec 05 08:04:58 np0005546420.localdomain systemd[1]: Started libpod-conmon-c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13.scope.
Dec 05 08:04:58 np0005546420.localdomain systemd[1]: Started libpod-conmon-5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376.scope.
Dec 05 08:04:58 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:04:58 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:04:58 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e6b391ec59dcd99cc00b446ec20b036d136b8f7911be581529c928ff9bef29/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 05 08:04:58 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9c00df1f2da9b4f8cc0d82d682fbe65babf7715eec1d298da553452a4b2d783/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 05 08:04:58 np0005546420.localdomain podman[52479]: 2025-12-05 08:04:58.164153256 +0000 UTC m=+0.898622126 container create 7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 05 08:04:58 np0005546420.localdomain podman[52452]: 2025-12-05 08:04:58.178208355 +0000 UTC m=+0.927655423 container init c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=container-puppet-crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git)
Dec 05 08:04:58 np0005546420.localdomain podman[52473]: 2025-12-05 08:04:58.181380674 +0000 UTC m=+0.918678682 container create 0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-iscsid, container_name=container-puppet-iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 05 08:04:58 np0005546420.localdomain podman[52479]: 2025-12-05 08:04:57.292908586 +0000 UTC m=+0.027377456 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 05 08:04:58 np0005546420.localdomain podman[52452]: 2025-12-05 08:04:58.198897312 +0000 UTC m=+0.948344380 container start c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, container_name=container-puppet-crond, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:04:58 np0005546420.localdomain podman[52452]: 2025-12-05 08:04:58.199525861 +0000 UTC m=+0.948972969 container attach c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=container-puppet-crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
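Each container-puppet job emits the same podman event sequence, visible in full above for container-puppet-crond: image pull, container create, a systemd libpod-conmon-<id>.scope plus a libcrun unit, then init, start, and attach (attach because config_data sets detach=False, so the runner blocks until the container exits). A small sketch for pulling that lifecycle out of podman's JSON event stream; the --since window is illustrative:

    import json
    import subprocess

    # Read recent podman events as JSON lines and group them per container name.
    proc = subprocess.run(
        ["podman", "events", "--since", "1m", "--stream=false", "--format", "json"],
        check=True, capture_output=True, text=True,
    )

    lifecycle: dict[str, list[str]] = {}
    for line in proc.stdout.splitlines():
        event = json.loads(line)
        if event.get("Type") == "container":
            lifecycle.setdefault(event.get("Name", "?"), []).append(event.get("Status", ""))

    # Expect create -> init -> start -> attach for each container-puppet-* job.
    print(lifecycle.get("container-puppet-crond"))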
Dec 05 08:04:58 np0005546420.localdomain systemd[1]: Started libpod-conmon-7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4.scope.
Dec 05 08:04:58 np0005546420.localdomain systemd[1]: Started libpod-conmon-0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71.scope.
Dec 05 08:04:58 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:04:58 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:04:58 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452fa9ef0503bc3aa3c08de7cd537beefc7561b4484c5941b91d2e19b04d76e4/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 05 08:04:58 np0005546420.localdomain podman[52462]: 2025-12-05 08:04:58.222691215 +0000 UTC m=+0.966817287 container create 915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-collectd, io.openshift.expose-services=, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=container-puppet-collectd)
Dec 05 08:04:58 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2e330997defc689ea7178f3ec3c4e18b224f1742cc6af7ec556ac2e9588fc5/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff)
Dec 05 08:04:58 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2e330997defc689ea7178f3ec3c4e18b224f1742cc6af7ec556ac2e9588fc5/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 05 08:04:58 np0005546420.localdomain systemd[1]: Started libpod-conmon-915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069.scope.
Dec 05 08:04:58 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:04:58 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9afdb42a401bcc34daaa41d4513f2b2692e74a65323c260e0716aac1381c2db1/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
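The recurring kernel line about xfs timestamps is informational, not an error: each overlay remount for a container bind-mount notes that the filesystem stores inode timestamps as 32-bit signed seconds (typically an xfs created without the bigtime feature), valid until 0x7fffffff epoch seconds. The cutoff is easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff from the kernel message is the classic 32-bit time_t limit.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())   # 2038-01-19T03:14:07+00:00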
Dec 05 08:04:58 np0005546420.localdomain sudo[52569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:04:58 np0005546420.localdomain sudo[52569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:04:58 np0005546420.localdomain sudo[52569]: pam_unix(sudo:session): session closed for user root
Dec 05 08:04:59 np0005546420.localdomain podman[52473]: 2025-12-05 08:04:59.510458309 +0000 UTC m=+2.247756317 container init 0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-iscsid, architecture=x86_64, version=17.1.12, config_id=tripleo_puppet_step1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:04:59 np0005546420.localdomain podman[52473]: 2025-12-05 08:04:59.526215441 +0000 UTC m=+2.263513469 container start 0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, container_name=container-puppet-iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_puppet_step1)
Dec 05 08:04:59 np0005546420.localdomain podman[52473]: 2025-12-05 08:04:59.526674715 +0000 UTC m=+2.263972783 container attach 0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid)
Dec 05 08:04:59 np0005546420.localdomain podman[52442]: 2025-12-05 08:04:59.532493557 +0000 UTC m=+2.289547363 container init 5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, container_name=container-puppet-nova_libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 05 08:04:59 np0005546420.localdomain podman[52462]: 2025-12-05 08:04:59.541505518 +0000 UTC m=+2.285631600 container init 915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, container_name=container-puppet-collectd, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, release=1761123044, config_id=tripleo_puppet_step1, io.openshift.expose-services=)
Dec 05 08:04:59 np0005546420.localdomain podman[52479]: 2025-12-05 08:04:59.54444976 +0000 UTC m=+2.278918630 container init 7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, container_name=container-puppet-metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public)
Dec 05 08:04:59 np0005546420.localdomain podman[52462]: 2025-12-05 08:04:59.552677047 +0000 UTC m=+2.296803119 container start 915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, container_name=container-puppet-collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:04:59 np0005546420.localdomain podman[52462]: 2025-12-05 08:04:59.552928705 +0000 UTC m=+2.297054827 container attach 915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=container-puppet-collectd, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, config_id=tripleo_puppet_step1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 05 08:04:59 np0005546420.localdomain podman[52479]: 2025-12-05 08:04:59.554625149 +0000 UTC m=+2.289094039 container start 7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:04:59 np0005546420.localdomain podman[52479]: 2025-12-05 08:04:59.554894387 +0000 UTC m=+2.289363257 container attach 7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_puppet_step1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, release=1761123044, container_name=container-puppet-metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 05 08:04:59 np0005546420.localdomain podman[52442]: 2025-12-05 08:04:59.593126351 +0000 UTC m=+2.350180187 container start 5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, container_name=container-puppet-nova_libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, build-date=2025-11-19T00:35:22Z, release=1761123044, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, managed_by=tripleo_ansible)
Dec 05 08:04:59 np0005546420.localdomain podman[52442]: 2025-12-05 08:04:59.594517395 +0000 UTC m=+2.351571271 container attach 5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, container_name=container-puppet-nova_libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container)
Dec 05 08:05:00 np0005546420.localdomain podman[52342]: 2025-12-05 08:04:57.170906125 +0000 UTC m=+0.038042390 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 05 08:05:00 np0005546420.localdomain podman[52701]: 2025-12-05 08:05:00.834566627 +0000 UTC m=+0.082696165 container create 9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, build-date=2025-11-19T00:11:59Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, tcib_managed=true, release=1761123044, io.openshift.expose-services=, container_name=container-puppet-ceilometer, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Dec 05 08:05:00 np0005546420.localdomain systemd[1]: Started libpod-conmon-9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905.scope.
Dec 05 08:05:00 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:05:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a64d6c17ae21dd1cdba3026e372bef8c469d0e8cedde2cc51b076cd6a294ae/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 05 08:05:00 np0005546420.localdomain podman[52701]: 2025-12-05 08:05:00.796226449 +0000 UTC m=+0.044356017 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 05 08:05:00 np0005546420.localdomain podman[52701]: 2025-12-05 08:05:00.897782282 +0000 UTC m=+0.145911820 container init 9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-ceilometer, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_puppet_step1, release=1761123044, com.redhat.component=openstack-ceilometer-central-container, description=Red Hat OpenStack Platform 17.1 ceilometer-central, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, build-date=2025-11-19T00:11:59Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-central, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git)
Dec 05 08:05:00 np0005546420.localdomain podman[52701]: 2025-12-05 08:05:00.909720435 +0000 UTC m=+0.157849973 container start 9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-ceilometer-central-container, description=Red Hat OpenStack Platform 17.1 ceilometer-central, build-date=2025-11-19T00:11:59Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, container_name=container-puppet-ceilometer)
Dec 05 08:05:00 np0005546420.localdomain podman[52701]: 2025-12-05 08:05:00.909995793 +0000 UTC m=+0.158125341 container attach 9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vendor=Red Hat, Inc., container_name=container-puppet-ceilometer, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, com.redhat.component=openstack-ceilometer-central-container, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, vcs-type=git, name=rhosp17/openstack-ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:59Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central)
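Each container-puppet-* lifecycle above (create, init, start, attach) follows the same one-shot pattern: podman runs the service image with container-puppet.sh as the entrypoint and STEP, PUPPET_TAGS, NAME and STEP_CONFIG in the environment. A minimal sketch re-creating the iscsid run from the config_data labels logged above (bash syntax; only a subset of the logged volume mounts is shown, the full list is in the event record):
    podman run --rm --net=host --user=0 \
        --security-opt label=disable \
        --entrypoint /var/lib/container-puppet/container-puppet.sh \
        -e STEP=6 -e NET_HOST=true -e DEBUG=true \
        -e NAME=iscsid \
        -e PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config \
        -e STEP_CONFIG=$'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n' \
        -v /etc/puppet:/tmp/puppet-etc:ro \
        -v /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro \
        -v /var/lib/config-data:/var/lib/config-data:rw \
        -v /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro \
        registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1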
Dec 05 08:05:01 np0005546420.localdomain ovs-vsctl[52762]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
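The ovs-vsctl error above means only that ovsdb-server's control socket does not exist yet at this point in the deployment; the database connection is retried once Open vSwitch is up. A quick diagnostic sketch (unit name as packaged with Open vSwitch on RHEL 9 is an assumption here):
    test -S /var/run/openvswitch/db.sock && echo "ovsdb socket present"
    systemctl is-active openvswitch.service
    ovs-vsctl --timeout=5 show   # give up after 5s instead of waiting on the socket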
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:    (file & line not available)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:    (file & line not available)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:    (file & line not available)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:    (file & line not available)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:    (file & line not available)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:    (file & line not available)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Notice: Accepting previously invalid value for target type 'Integer'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]:    (file & line not available)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.09 seconds
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]:    (file & line not available)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.09 seconds
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]:    (file & line not available)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]:    (file & line not available)
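The repeated puppet-user warnings above come from two Puppet 7 deprecations: a version 3 /etc/puppet/hiera.yaml, and calls to the hiera() function. A minimal sketch of the version 5 layout the first warning asks for, with placeholder hierarchy names, datadir and paths (not this node's actual data):
    cat <<'EOF' > /tmp/hiera.yaml.v5-example
    ---
    version: 5
    defaults:
      datadir: /etc/puppet/hieradata   # placeholder datadir
      data_hash: yaml_data
    hierarchy:
      - name: "Per-node overrides"
        path: "%{trusted.certname}.yaml"
      - name: "Common defaults"
        path: "common.yaml"
    EOF
    # The second warning is addressed the same way everywhere:
    #   hiera('some::key')  ->  lookup('some::key')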
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.12 seconds
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created
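The two iscsid notices above record tripleo::profile::base::iscsid regenerating the host IQN and stamping a marker file so the reset runs only once. A rough shell sketch of the equivalent effect (the profile's actual exec may differ; iscsi-iname ships with iscsi-initiator-utils):
    echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
    touch /etc/iscsi/.initiator_reset   # marker checked on subsequent runs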
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}2d090f9a2258ab19cbfadebfdbdc3353c3800b23781d45fb68c1b9a810daaf5a'
Dec 05 08:05:01 np0005546420.localdomain crontab[53040]: (root) LIST (root)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Notice: Applied catalog in 0.03 seconds
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Application:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:    Initial environment: production
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:    Converged environment: production
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:          Run mode: user
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Changes:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:             Total: 7
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Events:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:           Success: 7
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:             Total: 7
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Resources:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:           Skipped: 13
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:           Changed: 5
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:       Out of sync: 5
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:             Total: 20
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Time:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:              File: 0.01
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:    Transaction evaluation: 0.02
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:    Catalog application: 0.03
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:    Config retrieval: 0.15
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:          Last run: 1764921901
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:             Total: 0.03
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]: Version:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:            Config: 1764921901
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52643]:            Puppet: 7.10.0
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully
Dec 05 08:05:01 np0005546420.localdomain crontab[53042]: (root) REPLACE (root)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Notice: Applied catalog in 0.05 seconds
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Application:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:    Initial environment: production
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:    Converged environment: production
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:          Run mode: user
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Changes:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:             Total: 2
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Events:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:           Success: 2
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:             Total: 2
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Resources:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:           Changed: 2
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:       Out of sync: 2
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:           Skipped: 7
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:             Total: 9
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Time:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:              File: 0.01
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:              Cron: 0.01
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:    Transaction evaluation: 0.05
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:    Catalog application: 0.05
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:    Config retrieval: 0.12
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:          Last run: 1764921901
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:             Total: 0.05
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]: Version:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:            Config: 1764921901
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52594]:            Puppet: 7.10.0
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: in a future release. Use nova::cinder::os_region_name instead
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: in a future release. Use nova::cinder::catalog_info instead
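The pair of warnings above points at parameters that moved under the nova::cinder class. A hedged sketch of declaring the replacements, exercised in no-op mode; the region and catalog values are placeholders, not this deployment's settings:
    puppet apply --noop -e '
      class { "nova::cinder":
        os_region_name => "regionOne",                       # placeholder
        catalog_info   => "volumev3:cinderv3:internalURL",   # placeholder
      }'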
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.39 seconds
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41)
Dec 05 08:05:01 np0005546420.localdomain systemd[1]: libpod-c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13.scope: Deactivated successfully.
Dec 05 08:05:01 np0005546420.localdomain systemd[1]: libpod-c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13.scope: Consumed 2.233s CPU time.
Dec 05 08:05:01 np0005546420.localdomain systemd[1]: libpod-7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4.scope: Deactivated successfully.
Dec 05 08:05:01 np0005546420.localdomain systemd[1]: libpod-7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4.scope: Consumed 2.179s CPU time.
Dec 05 08:05:01 np0005546420.localdomain podman[52479]: 2025-12-05 08:05:01.855539915 +0000 UTC m=+4.590008795 container died 7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=container-puppet-metrics_qdr, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5)
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5)
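These three unknown-variable warnings are image_cache.pp falling back to legacy nova::compute::libvirt::* values that are no longer declared. A hedged sketch of setting them in their current home, nova::compute::image_cache, matching the parameter names read at the lines cited above (values are placeholders):
    puppet apply --noop -e '
      class { "nova::compute::image_cache":
        remove_unused_base_images                  => true,   # placeholder
        remove_unused_original_minimum_age_seconds => 86400,  # placeholder
        remove_unused_resized_minimum_age_seconds  => 3600,   # placeholder
      }'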
Dec 05 08:05:01 np0005546420.localdomain podman[52452]: 2025-12-05 08:05:01.905841907 +0000 UTC m=+4.655289005 container died c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, config_id=tripleo_puppet_step1, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640'
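[annotation] The four collectd.conf changes above tighten ownership and mode to root:root 0640. A minimal spot-check sketch (assuming GNU coreutils stat; the path is the typical TripleO puppet-generated layout, an assumption, since the log only names the resource, not the host path):

    # expected after this run: root:root 640
    stat -c '%U:%G %a' /var/lib/config-data/puppet-generated/collectd/etc/collectd.conf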
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Notice: Applied catalog in 0.46 seconds
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Application:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:    Initial environment: production
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:    Converged environment: production
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:          Run mode: user
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Changes:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:             Total: 4
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Events:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:           Success: 4
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:             Total: 4
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Resources:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:           Changed: 4
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:       Out of sync: 4
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:           Skipped: 8
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:             Total: 13
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Time:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:              File: 0.00
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:              Exec: 0.06
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:    Config retrieval: 0.12
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:            Augeas: 0.39
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:    Transaction evaluation: 0.45
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:    Catalog application: 0.46
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:          Last run: 1764921901
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:             Total: 0.46
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]: Version:
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:            Config: 1764921901
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52621]:            Puppet: 7.10.0
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set.
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52644]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance
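[annotation] Both warnings name their replacements. A minimal sketch of the new-style nova settings (the CPU ranges are illustrative values, not taken from this deployment; on a TripleO node these would normally be set through heat parameters such as NovaComputeCpuDedicatedSet / NovaComputeCpuSharedSet rather than edited by hand):

    # hypothetical 8-vCPU split: cores 2-7 for pinned guests, 0-1 shared
    crudini --set /etc/nova/nova.conf compute cpu_dedicated_set 2-7
    crudini --set /etc/nova/nova.conf compute cpu_shared_set 0-1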
Dec 05 08:05:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13-userdata-shm.mount: Deactivated successfully.
Dec 05 08:05:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e9c00df1f2da9b4f8cc0d82d682fbe65babf7715eec1d298da553452a4b2d783-merged.mount: Deactivated successfully.
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750'
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed
Dec 05 08:05:01 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750'
Dec 05 08:05:01 np0005546420.localdomain podman[53112]: 2025-12-05 08:05:01.975625617 +0000 UTC m=+0.110960938 container cleanup c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, container_name=container-puppet-crond, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Dec 05 08:05:01 np0005546420.localdomain systemd[1]: libpod-conmon-c9718212acf12ed4874f876247855e2aa022f158ac3705d8becb956d6a17df13.scope: Deactivated successfully.
Dec 05 08:05:01 np0005546420.localdomain python3[52270]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546420 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
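[annotation] The PODMAN-CONTAINER-DEBUG entry above is the literal podman run command that the ansible module replays for each config-generation container. A minimal sketch for inspecting these containers after the fact (assuming podman is still on the host; the stdout path comes straight from the --log-opt in the command above):

    # list this step's puppet containers and their exit status
    podman ps -a --filter label=config_id=tripleo_puppet_step1 --format '{{.Names}} {{.Status}}'
    # durable copy of the container's stdout, per --log-opt above
    less /var/log/containers/stdouts/container-puppet-crond.log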
Dec 05 08:05:02 np0005546420.localdomain podman[53113]: 2025-12-05 08:05:02.029996595 +0000 UTC m=+0.163062695 container cleanup 7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: libpod-conmon-7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4.scope: Deactivated successfully.
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}0d4e701b7b2398bbf396579a0713d46d3c496c79edc52f2e260456f359c9a46c'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c'
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-452fa9ef0503bc3aa3c08de7cd537beefc7561b4484c5941b91d2e19b04d76e4-merged.mount: Deactivated successfully.
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7aad1b8d0cb878e8c18da4e91b53cfce694a9649c7e9de79f5a1e190b75797e4-userdata-shm.mount: Deactivated successfully.
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed
Dec 05 08:05:02 np0005546420.localdomain python3[52270]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546420 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::metrics::qdr
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52644]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used.
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62'
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed
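[annotation] With the .load fragments in place, the assembled collectd configuration can be syntax-checked before the service container starts. A sketch, assuming the collectd binary is available and the puppet-generated path below (typical TripleO layout, an assumption):

    # -t parses the config and exits non-zero on errors
    collectd -t -C /var/lib/config-data/puppet-generated/collectd/etc/collectd.conf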
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Notice: Applied catalog in 0.29 seconds
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Application:
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:    Initial environment: production
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:    Converged environment: production
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:          Run mode: user
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Changes:
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:             Total: 43
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Events:
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:           Success: 43
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:             Total: 43
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Resources:
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:           Skipped: 14
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:           Changed: 38
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:       Out of sync: 38
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:             Total: 82
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Time:
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:       Concat file: 0.00
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:    Concat fragment: 0.00
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:              File: 0.13
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:    Transaction evaluation: 0.28
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:    Catalog application: 0.29
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:    Config retrieval: 0.46
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:          Last run: 1764921902
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:             Total: 0.29
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]: Version:
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:            Config: 1764921901
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52649]:            Puppet: 7.10.0
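[annotation] Last run and Config in these summaries are Unix epochs; converting them confirms they line up with the journal timestamps on the same lines (GNU date, as shipped on RHEL 9):

    date -u -d @1764921902 +'%F %T'   # -> 2025-12-05 08:05:02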
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: libpod-0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71.scope: Deactivated successfully.
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: libpod-0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71.scope: Consumed 2.558s CPU time.
Dec 05 08:05:02 np0005546420.localdomain podman[52473]: 2025-12-05 08:05:02.233432691 +0000 UTC m=+4.970730689 container died 0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: tmp-crun.xBOeYD.mount: Deactivated successfully.
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71-userdata-shm.mount: Deactivated successfully.
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3c2e330997defc689ea7178f3ec3c4e18b224f1742cc6af7ec556ac2e9588fc5-merged.mount: Deactivated successfully.
Dec 05 08:05:02 np0005546420.localdomain podman[53243]: 2025-12-05 08:05:02.338307927 +0000 UTC m=+0.093783290 container cleanup 0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1)
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: libpod-conmon-0cf876f370da3280422a670bd648e1f9b4ff6ec56b7853ec45e15134e22e3a71.scope: Deactivated successfully.
Dec 05 08:05:02 np0005546420.localdomain python3[52270]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546420 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::iscsid
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 05 08:05:02 np0005546420.localdomain podman[53258]: 2025-12-05 08:05:02.389385814 +0000 UTC m=+0.124707968 container create f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, config_id=tripleo_puppet_step1, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, container_name=container-puppet-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true)
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: Started libpod-conmon-f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6.scope.
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:05:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e062bde15c561a186b7e30080880293f1be1996e7656eda685d28e2ac8dfedb/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 05 08:05:02 np0005546420.localdomain podman[53258]: 2025-12-05 08:05:02.345280786 +0000 UTC m=+0.080602940 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 05 08:05:02 np0005546420.localdomain podman[53258]: 2025-12-05 08:05:02.459878566 +0000 UTC m=+0.195200720 container init f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, name=rhosp17/openstack-rsyslog, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=container-puppet-rsyslog, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container)
Dec 05 08:05:02 np0005546420.localdomain podman[53258]: 2025-12-05 08:05:02.46706827 +0000 UTC m=+0.202390414 container start f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4)
Dec 05 08:05:02 np0005546420.localdomain podman[53258]: 2025-12-05 08:05:02.467291247 +0000 UTC m=+0.202613391 container attach f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:49:49Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: libpod-915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069.scope: Deactivated successfully.
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: libpod-915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069.scope: Consumed 2.690s CPU time.
Dec 05 08:05:02 np0005546420.localdomain podman[52462]: 2025-12-05 08:05:02.549348422 +0000 UTC m=+5.293474494 container died 915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:51:28Z, distribution-scope=public, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, container_name=container-puppet-collectd, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd)
Dec 05 08:05:02 np0005546420.localdomain podman[53353]: 2025-12-05 08:05:02.597257378 +0000 UTC m=+0.108442909 container create 33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, container_name=container-puppet-ovn_controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: Started libpod-conmon-33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d.scope.
Dec 05 08:05:02 np0005546420.localdomain podman[53353]: 2025-12-05 08:05:02.545764889 +0000 UTC m=+0.056950420 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:05:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed892db9d5a7f7e4ce8fde13396eaa6545b70008988b501a346c5f00ef20fcbd/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff)
Dec 05 08:05:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed892db9d5a7f7e4ce8fde13396eaa6545b70008988b501a346c5f00ef20fcbd/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 05 08:05:02 np0005546420.localdomain podman[53353]: 2025-12-05 08:05:02.65525885 +0000 UTC m=+0.166444381 container init 33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-ovn_controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:05:02 np0005546420.localdomain podman[53353]: 2025-12-05 08:05:02.662420024 +0000 UTC m=+0.173605545 container start 33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=container-puppet-ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_puppet_step1, tcib_managed=true, io.openshift.expose-services=, release=1761123044, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 05 08:05:02 np0005546420.localdomain podman[53353]: 2025-12-05 08:05:02.663079734 +0000 UTC m=+0.174265295 container attach 33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, container_name=container-puppet-ovn_controller, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, com.redhat.component=openstack-ovn-controller-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Dec 05 08:05:02 np0005546420.localdomain podman[53399]: 2025-12-05 08:05:02.669931409 +0000 UTC m=+0.114401875 container cleanup 915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z)
Dec 05 08:05:02 np0005546420.localdomain systemd[1]: libpod-conmon-915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069.scope: Deactivated successfully.
Dec 05 08:05:02 np0005546420.localdomain python3[52270]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546420 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 05 08:05:02 np0005546420.localdomain puppet-user[52644]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 1.36 seconds
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]:    (file & line not available)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]:    (file & line not available)
Dec 05 08:05:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-9afdb42a401bcc34daaa41d4513f2b2692e74a65323c260e0716aac1381c2db1-merged.mount: Deactivated successfully.
Dec 05 08:05:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069-userdata-shm.mount: Deactivated successfully.
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}0afade6653aba11fc80cb8b8315af6d8dc0b0370c920f3d319c7bc1ad3fe1536'
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe'
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Warning: Empty environment setting 'TLS_PASSWORD'
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]:    (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}9d3f91d01b5791cf3407841cca0c92f3da22221b7d210843d9ee124ba6e4fb6f'
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26)
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.45 seconds
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Dec 05 08:05:03 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]: Notice: Applied catalog in 0.50 seconds
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]: Application:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:    Initial environment: production
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:    Converged environment: production
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:          Run mode: user
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]: Changes:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:             Total: 31
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]: Events:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:           Success: 31
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:             Total: 31
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]: Resources:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:           Skipped: 22
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:           Changed: 31
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:       Out of sync: 31
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:             Total: 151
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]: Time:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:           Package: 0.02
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:    Ceilometer config: 0.40
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:    Transaction evaluation: 0.49
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:    Catalog application: 0.50
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:    Config retrieval: 0.56
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:          Last run: 1764921904
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:         Resources: 0.00
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:             Total: 0.51
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]: Version:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:            Config: 1764921903
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52738]:            Puppet: 7.10.0
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:    (file & line not available)
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:    (file & line not available)
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.22 seconds
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain systemd[1]: libpod-9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905.scope: Deactivated successfully.
Dec 05 08:05:04 np0005546420.localdomain systemd[1]: libpod-9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905.scope: Consumed 3.201s CPU time.
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}3b7ee24aeedd28864950d1d847220a2cd2de2cafc8f59f234b7cef3ffd0f6857'
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Notice: Applied catalog in 0.11 seconds
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Application:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:    Initial environment: production
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:    Converged environment: production
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:          Run mode: user
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Changes:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:             Total: 3
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Events:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:           Success: 3
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:             Total: 3
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Resources:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:           Skipped: 11
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:           Changed: 3
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:       Out of sync: 3
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:             Total: 25
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Time:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:       Concat file: 0.00
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:    Concat fragment: 0.00
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:              File: 0.02
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:    Transaction evaluation: 0.10
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:    Catalog application: 0.11
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:    Config retrieval: 0.26
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:          Last run: 1764921904
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:             Total: 0.11
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]: Version:
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:            Config: 1764921904
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53383]:            Puppet: 7.10.0
Dec 05 08:05:04 np0005546420.localdomain podman[53670]: 2025-12-05 08:05:04.571403725 +0000 UTC m=+0.048336771 container died 9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-ceilometer, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, build-date=2025-11-19T00:11:59Z, name=rhosp17/openstack-ceilometer-central, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-central-container, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_puppet_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain systemd[1]: tmp-crun.G737kc.mount: Deactivated successfully.
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905-userdata-shm.mount: Deactivated successfully.
Dec 05 08:05:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-34a64d6c17ae21dd1cdba3026e372bef8c469d0e8cedde2cc51b076cd6a294ae-merged.mount: Deactivated successfully.
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53451]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53451]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53451]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53451]:    (file & line not available)
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain podman[53670]: 2025-12-05 08:05:04.662019827 +0000 UTC m=+0.138952823 container cleanup 9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vcs-type=git, name=rhosp17/openstack-ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-ceilometer, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:59Z, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central)
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain systemd[1]: libpod-conmon-9884c977711a53b5342d3a248fb7987eb7aaaec0d2a35873eb1d9d2f14e41905.scope: Deactivated successfully.
Dec 05 08:05:04 np0005546420.localdomain python3[52270]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546420 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::ceilometer::agent::polling
                                                         include tripleo::profile::base::ceilometer::agent::polling
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53451]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53451]:    (file & line not available)
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}c3826591c6773695ff03c6589172449facdf16883c7a86fceee478fd480e36dd'
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain systemd[1]: libpod-f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6.scope: Deactivated successfully.
Dec 05 08:05:04 np0005546420.localdomain systemd[1]: libpod-f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6.scope: Consumed 2.268s CPU time.
Dec 05 08:05:04 np0005546420.localdomain podman[53258]: 2025-12-05 08:05:04.851093114 +0000 UTC m=+2.586415308 container died f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:49Z, config_id=tripleo_puppet_step1, architecture=x86_64)
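[editor note] The "container died" event above dumps every OCI image label as key=value inside one pair of parentheses. Naive splitting on commas fails here, because values such as config_data themselves contain commas, quotes, and braces. A minimal sketch that recovers only simple scalar labels, whose values cannot contain a comma or closing parenthesis; the event string below is a truncated stand-in for the logged line:

    import re

    # Minimal sketch: pull selected scalar labels out of a podman event line
    # like the 'container died' record above. config_data-style values embed
    # commas, so only keys with comma-free values are safe to match this way.
    event = ("container died f4d6... (image=registry.redhat.io/rhosp-rhel9/"
             "openstack-rsyslog:17.1, name=container-puppet-rsyslog, "
             "version=17.1.12, config_id=tripleo_puppet_step1)")

    def scalar_label(line, key):
        m = re.search(rf'(?:\(|, ){re.escape(key)}=([^,)]*)', line)
        return m.group(1) if m else None

    for key in ('image', 'name', 'version', 'config_id'):
        print(key, '=', scalar_label(event, key))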
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53451]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.28 seconds
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain podman[53788]: 2025-12-05 08:05:04.956659612 +0000 UTC m=+0.099497590 container cleanup f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, build-date=2025-11-18T22:49:49Z, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, tcib_managed=true)
Dec 05 08:05:04 np0005546420.localdomain systemd[1]: libpod-conmon-f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6.scope: Deactivated successfully.
Dec 05 08:05:04 np0005546420.localdomain python3[52270]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546420 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
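[editor note] The rsyslog invocation above logs the container's stdout with --log-driver k8s-file to the path given in --log-opt. A minimal reader for that file, assuming the usual k8s-file layout of one 'timestamp stream P|F message' record per line (the P/F flag marks partial versus full lines); the format assumption is mine, not stated in the log:

    # Minimal sketch: read a podman k8s-file log such as
    # /var/log/containers/stdouts/container-puppet-rsyslog.log, assuming the
    # 'timestamp stream P|F message' layout of that log driver.
    def read_k8s_file(path):
        with open(path) as fh:
            for line in fh:
                parts = line.rstrip('\n').split(' ', 3)
                if len(parts) == 4:
                    ts, stream, flag, msg = parts
                    yield ts, stream, flag == 'P', msg

    # Usage (path taken from the --log-opt in the command above):
    # for ts, stream, partial, msg in read_k8s_file(
    #         '/var/log/containers/stdouts/container-puppet-rsyslog.log'):
    #     print(ts, stream, msg)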
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain ovs-vsctl[53805]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642
Dec 05 08:05:04 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Dec 05 08:05:04 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53814]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53823]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.107
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53832]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005546420.localdomain
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005546420.novalocal' to 'np0005546420.localdomain'
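[editor note] This is the one Vs_config resource in the run that logs "value changed" rather than "created": external_ids:hostname already existed with the Nova-assigned 'np0005546420.novalocal' and was flipped to the FQDN. Puppet can report that only because it reads the current value first; the same read uses the standard ovs-vsctl get subcommand, sketched here:

    import subprocess

    # Minimal sketch: read one Open_vSwitch external_id the same way the
    # ovs-vsctl calls logged above set them. 'ovs-vsctl get' prints the
    # value quoted, hence the strip of surrounding double quotes.
    def get_external_id(key):
        out = subprocess.run(
            ['ovs-vsctl', 'get', 'Open_vSwitch', '.', f'external_ids:{key}'],
            capture_output=True, text=True, check=True).stdout
        return out.strip().strip('"')

    # e.g. get_external_id('hostname') -> 'np0005546420.localdomain'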
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53836]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53838]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53840]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53842]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53844]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53846]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53848]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:c9:c8:f9
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53858]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53864]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Dec 05 08:05:05 np0005546420.localdomain ovs-vsctl[53866]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
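[editor note] The fourteen ovs-vsctl calls above each set a single external_id on the Open_vSwitch table. ovs-vsctl accepts several key=value assignments in one 'set' invocation, so the whole ovn-controller configuration from this run could be written in one command. The values below are copied verbatim from the log lines above:

    # Minimal sketch: the external_ids written one-by-one above, batched into
    # a single ovs-vsctl invocation. When run from a shell, values containing
    # commas or colons need the double quotes kept intact.
    external_ids = {
        'ovn-remote': 'tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642',
        'ovn-encap-type': 'geneve',
        'ovn-encap-ip': '172.19.0.107',
        'hostname': 'np0005546420.localdomain',
        'ovn-bridge': 'br-int',
        'ovn-remote-probe-interval': '60000',
        'ovn-openflow-probe-interval': '60',
        'ovn-monitor-all': 'true',
        'ovn-ofctrl-wait-before-clear': '8000',
        'ovn-encap-tos': '0',
        'ovn-chassis-mac-mappings': 'datacentre:fa:16:3e:c9:c8:f9',
        'ovn-bridge-mappings': 'datacentre:br-ex',
        'ovn-match-northd-version': 'false',
        'garp-max-timeout-sec': '0',
    }
    cmd = ['ovs-vsctl', 'set', 'Open_vSwitch', '.'] + [
        f'external_ids:{k}="{v}"' for k, v in external_ids.items()]
    print(' '.join(cmd))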
Dec 05 08:05:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-6e062bde15c561a186b7e30080880293f1be1996e7656eda685d28e2ac8dfedb-merged.mount: Deactivated successfully.
Dec 05 08:05:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4d6847f60631445df7ca4136c1526d51daa461d297b8ab5cae70a40e2b5e3c6-userdata-shm.mount: Deactivated successfully.
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Notice: Applied catalog in 0.71 seconds
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Application:
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:    Initial environment: production
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:    Converged environment: production
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:          Run mode: user
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Changes:
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:             Total: 14
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Events:
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:           Success: 14
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:             Total: 14
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Resources:
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:           Skipped: 12
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:           Changed: 14
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:       Out of sync: 14
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:             Total: 29
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Time:
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:              Exec: 0.03
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:    Config retrieval: 0.33
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:         Vs config: 0.57
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:    Transaction evaluation: 0.67
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:    Catalog application: 0.71
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:          Last run: 1764921905
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:             Total: 0.71
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]: Version:
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:            Config: 1764921904
Dec 05 08:05:05 np0005546420.localdomain puppet-user[53451]:            Puppet: 7.10.0
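[editor note] The block above, from "Application:" through "Puppet: 7.10.0", is the run summary puppet-user[53451] printed for the ovn_controller catalog: flush-left section headers followed by indented key/value entries. A minimal sketch that folds such a summary into a nested dict, assuming the journald prefix has already been stripped from each message:

    # Minimal sketch: parse a puppet run summary (the 'Changes:'/'Events:'/
    # 'Resources:'/'Time:'/'Version:' block above) into a nested dict.
    # Section headers are flush left and end with ':'; entries are indented.
    def parse_summary(lines):
        report, section = {}, None
        for raw in lines:
            line = raw.rstrip()
            if line.endswith(':') and line == line.lstrip():
                section = line[:-1]
                report[section] = {}
            elif section and ':' in line:
                key, val = line.rsplit(':', 1)
                report[section][key.strip()] = val.strip()
        return report

    summary = parse_summary([
        'Changes:', '   Total: 14',
        'Events:', '   Success: 14', '   Total: 14',
        'Version:', '   Config: 1764921904', '   Puppet: 7.10.0',
    ])
    assert summary['Events']['Success'] == '14'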
Dec 05 08:05:05 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully
Dec 05 08:05:06 np0005546420.localdomain systemd[1]: libpod-33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d.scope: Deactivated successfully.
Dec 05 08:05:06 np0005546420.localdomain systemd[1]: libpod-33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d.scope: Consumed 3.181s CPU time.
Dec 05 08:05:06 np0005546420.localdomain podman[53353]: 2025-12-05 08:05:06.085255352 +0000 UTC m=+3.596440943 container died 33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, container_name=container-puppet-ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:05:06 np0005546420.localdomain podman[53415]: 2025-12-05 08:05:02.820821793 +0000 UTC m=+0.191170674 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 05 08:05:06 np0005546420.localdomain systemd[1]: tmp-crun.PbFJt3.mount: Deactivated successfully.
Dec 05 08:05:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d-userdata-shm.mount: Deactivated successfully.
Dec 05 08:05:06 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully
Dec 05 08:05:06 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
Dec 05 08:05:06 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
Dec 05 08:05:06 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created
Dec 05 08:05:06 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created
Dec 05 08:05:06 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Dec 05 08:05:06 np0005546420.localdomain podman[53905]: 2025-12-05 08:05:06.558064495 +0000 UTC m=+0.459856649 container cleanup 33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, container_name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 05 08:05:06 np0005546420.localdomain python3[52270]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546420 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::neutron::agents::ovn
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
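[editor note] Compared with the rsyslog command, the ovn_controller invocation above differs mainly in its PUPPET_TAGS and in extra mounts: /lib/modules read-only, /run/openvswitch with shared,z, and /etc/sysconfig/modules with no options at all. A minimal sketch splitting podman --volume specs into (source, destination, options), covering both the two-field and three-field forms seen above:

    # Minimal sketch: split podman --volume specs (as logged above) into
    # (source, destination, options). The options field may be absent, as in
    # the /etc/sysconfig/modules mount.
    def parse_volume(spec):
        parts = spec.split(':')
        if len(parts) == 2:
            src, dst = parts
            opts = []
        else:
            src, dst = parts[0], parts[1]
            opts = parts[2].split(',')
        return src, dst, opts

    print(parse_volume('/run/openvswitch:/run/openvswitch:shared,z'))
    print(parse_volume('/etc/sysconfig/modules:/etc/sysconfig/modules'))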
Dec 05 08:05:06 np0005546420.localdomain systemd[1]: libpod-conmon-33058e1bdb523cf141871aca508bc2d33ee8f7c5a0242a3abc3fcfe0f7aa792d.scope: Deactivated successfully.
Dec 05 08:05:06 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Dec 05 08:05:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ed892db9d5a7f7e4ce8fde13396eaa6545b70008988b501a346c5f00ef20fcbd-merged.mount: Deactivated successfully.
Dec 05 08:05:06 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 05 08:05:06 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Dec 05 08:05:06 np0005546420.localdomain podman[53953]: 2025-12-05 08:05:06.752617472 +0000 UTC m=+0.091887182 container create 02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, build-date=2025-11-19T00:23:27Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_puppet_step1, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, name=rhosp17/openstack-neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-server)
Dec 05 08:05:06 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created
Dec 05 08:05:06 np0005546420.localdomain podman[53953]: 2025-12-05 08:05:06.698722969 +0000 UTC m=+0.037992659 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 05 08:05:06 np0005546420.localdomain systemd[1]: Started libpod-conmon-02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471.scope.
Dec 05 08:05:06 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:05:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39d2f58466688fe53652a364ae73822e36bcfb567eba3c646d9e26add473af11/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
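[editor note] The kernel warning above is about XFS inodes whose timestamps are 32-bit signed seconds: 0x7fffffff is the largest such value, and as a UTC date it is the familiar year-2038 limit. A one-liner confirming the arithmetic:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t; as a UTC datetime it is
    # the 2038 limit the kernel message refers to.
    limit = datetime.fromtimestamp(0x7fffffff, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00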
Dec 05 08:05:06 np0005546420.localdomain podman[53953]: 2025-12-05 08:05:06.835871704 +0000 UTC m=+0.175141404 container init 02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-server, container_name=container-puppet-neutron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., build-date=2025-11-19T00:23:27Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64)
Dec 05 08:05:06 np0005546420.localdomain podman[53953]: 2025-12-05 08:05:06.845093572 +0000 UTC m=+0.184363272 container start 02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=container-puppet-neutron, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:23:27Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-server)
Dec 05 08:05:06 np0005546420.localdomain podman[53953]: 2025-12-05 08:05:06.845422692 +0000 UTC m=+0.184692392 container attach 02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-server-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:23:27Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-neutron-server, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
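[editor note] Each Nova_config[service_user/...] notice above corresponds to one key in the [service_user] section of nova.conf. The log records only the keys, so in the sketch below every value is a placeholder of my own, not the deployment's real setting:

    import configparser
    import sys

    # Minimal sketch of what the Nova_config[service_user/*] resources above
    # produce in nova.conf. Keys come from the log; all values are
    # placeholders, since the log does not record them.
    cfg = configparser.ConfigParser()
    cfg['service_user'] = {
        'auth_type': 'password',                      # placeholder
        'region_name': 'regionOne',                   # placeholder
        'auth_url': 'http://keystone.example:5000',   # placeholder
        'username': 'nova',                           # placeholder
        'password': '***',                            # placeholder
        'user_domain_name': 'Default',                # placeholder
        'project_name': 'service',                    # placeholder
        'project_domain_name': 'Default',             # placeholder
        'send_service_user_token': 'true',            # placeholder
    }
    cfg.write(sys.stdout)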
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3a12438802493a75725c4f7704f2af6db1ef72af396369e5de28f6f4d6a7ed98'
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Notice: Applied catalog in 4.43 seconds
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Application:
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Initial environment: production
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Converged environment: production
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:          Run mode: user
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Changes:
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:             Total: 183
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Events:
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:           Success: 183
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:             Total: 183
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Resources:
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:           Changed: 183
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:       Out of sync: 183
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:           Skipped: 57
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:             Total: 487
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Time:
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Concat fragment: 0.00
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:            Anchor: 0.00
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:         File line: 0.00
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Virtlogd config: 0.00
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Virtstoraged config: 0.01
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:              Exec: 0.01
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Virtqemud config: 0.01
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:           Package: 0.02
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Virtsecretd config: 0.02
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Virtproxyd config: 0.03
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:              File: 0.03
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Virtnodedevd config: 0.04
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:            Augeas: 0.96
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Config retrieval: 1.60
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:          Last run: 1764921907
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:       Nova config: 3.08
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Transaction evaluation: 4.41
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:    Catalog application: 4.43
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:         Resources: 0.00
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:       Concat file: 0.00
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:             Total: 4.43
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]: Version:
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:            Config: 1764921901
Dec 05 08:05:07 np0005546420.localdomain puppet-user[52644]:            Puppet: 7.10.0
Dec 05 08:05:08 np0005546420.localdomain systemd[1]: libpod-5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376.scope: Deactivated successfully.
Dec 05 08:05:08 np0005546420.localdomain systemd[1]: libpod-5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376.scope: Consumed 8.523s CPU time.
Dec 05 08:05:08 np0005546420.localdomain podman[52442]: 2025-12-05 08:05:08.337831249 +0000 UTC m=+11.094885045 container died 5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_puppet_step1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, batch=17.1_20251118.1)
Dec 05 08:05:08 np0005546420.localdomain systemd[1]: tmp-crun.Gd9TsW.mount: Deactivated successfully.
Dec 05 08:05:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376-userdata-shm.mount: Deactivated successfully.
Dec 05 08:05:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-28e6b391ec59dcd99cc00b446ec20b036d136b8f7911be581529c928ff9bef29-merged.mount: Deactivated successfully.
Dec 05 08:05:08 np0005546420.localdomain podman[54035]: 2025-12-05 08:05:08.525556354 +0000 UTC m=+0.174055679 container cleanup 5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, container_name=container-puppet-nova_libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_puppet_step1, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64)
Dec 05 08:05:08 np0005546420.localdomain systemd[1]: libpod-conmon-5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376.scope: Deactivated successfully.
Dec 05 08:05:08 np0005546420.localdomain python3[52270]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546420 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages
                                                         # TODO(emilien): figure how to deal with libvirt profile.
                                                         # We'll probably treat it like we do with Neutron plugins.
                                                         # Until then, just include it in the default nova-compute role.
                                                         include tripleo::profile::base::nova::compute::libvirt
                                                         
                                                         include tripleo::profile::base::nova::libvirt
                                                         
                                                         include tripleo::profile::base::nova::compute::libvirt_guests
                                                         
                                                         include tripleo::profile::base::sshd
                                                         include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:05:08 np0005546420.localdomain puppet-user[53995]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass
Dec 05 08:05:08 np0005546420.localdomain puppet-user[53995]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:05:08 np0005546420.localdomain puppet-user[53995]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:05:08 np0005546420.localdomain puppet-user[53995]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:05:08 np0005546420.localdomain puppet-user[53995]:    (file & line not available)
Dec 05 08:05:08 np0005546420.localdomain puppet-user[53995]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:05:08 np0005546420.localdomain puppet-user[53995]:    (file & line not available)
Dec 05 08:05:08 np0005546420.localdomain puppet-user[53995]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37)
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.61 seconds
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Notice: Applied catalog in 0.45 seconds
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Application:
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:    Initial environment: production
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:    Converged environment: production
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:          Run mode: user
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Changes:
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:             Total: 33
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Events:
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:           Success: 33
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:             Total: 33
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Resources:
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:           Skipped: 21
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:           Changed: 33
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:       Out of sync: 33
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:             Total: 155
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Time:
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:         Resources: 0.00
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:    Ovn metadata agent config: 0.02
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:    Neutron config: 0.35
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:    Transaction evaluation: 0.45
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:    Catalog application: 0.45
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:    Config retrieval: 0.68
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:          Last run: 1764921909
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:             Total: 0.46
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]: Version:
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:            Config: 1764921908
Dec 05 08:05:09 np0005546420.localdomain puppet-user[53995]:            Puppet: 7.10.0
Dec 05 08:05:10 np0005546420.localdomain systemd[1]: libpod-02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471.scope: Deactivated successfully.
Dec 05 08:05:10 np0005546420.localdomain systemd[1]: libpod-02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471.scope: Consumed 3.572s CPU time.
Dec 05 08:05:10 np0005546420.localdomain podman[53953]: 2025-12-05 08:05:10.439936164 +0000 UTC m=+3.779205874 container died 02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, container_name=container-puppet-neutron, com.redhat.component=openstack-neutron-server-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-server, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, build-date=2025-11-19T00:23:27Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true)
Dec 05 08:05:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471-userdata-shm.mount: Deactivated successfully.
Dec 05 08:05:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-39d2f58466688fe53652a364ae73822e36bcfb567eba3c646d9e26add473af11-merged.mount: Deactivated successfully.
Dec 05 08:05:10 np0005546420.localdomain podman[54176]: 2025-12-05 08:05:10.633297805 +0000 UTC m=+0.182632307 container cleanup 02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, com.redhat.component=openstack-neutron-server-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:23:27Z, container_name=container-puppet-neutron, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Dec 05 08:05:10 np0005546420.localdomain systemd[1]: libpod-conmon-02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471.scope: Deactivated successfully.
Dec 05 08:05:10 np0005546420.localdomain python3[52270]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546420 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::neutron::ovn_metadata
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546420', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 05 08:05:10 np0005546420.localdomain sudo[52268]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:11 np0005546420.localdomain sudo[54228]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cbjlmrivfnpjibmwjioxkaxptvlywrkf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:11 np0005546420.localdomain sudo[54228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:11 np0005546420.localdomain python3[54230]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:11 np0005546420.localdomain sudo[54228]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:11 np0005546420.localdomain sudo[54244]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfccwmnnmvoktadflcgcaayuvcuscjte ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:11 np0005546420.localdomain sudo[54244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:12 np0005546420.localdomain sudo[54244]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:12 np0005546420.localdomain sudo[54260]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-copecucgzdqrvlqtcewqztocmdkrsyzv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:12 np0005546420.localdomain sudo[54260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:12 np0005546420.localdomain python3[54262]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:05:12 np0005546420.localdomain sudo[54260]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:12 np0005546420.localdomain sudo[54310]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhkolkdgvjubxfelyrgaiovxhvxrmqyi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:12 np0005546420.localdomain sudo[54310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:13 np0005546420.localdomain python3[54312]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:05:13 np0005546420.localdomain sudo[54310]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:13 np0005546420.localdomain sudo[54353]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubcigyoeewodzypfhnxwfmtddkoioqad ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:13 np0005546420.localdomain sudo[54353]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:13 np0005546420.localdomain python3[54355]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921912.6975665-85029-111894484034775/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:13 np0005546420.localdomain sudo[54353]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:14 np0005546420.localdomain sudo[54415]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hoaoyesacscctyjvwtkeaxrbfdlhnrph ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:14 np0005546420.localdomain sudo[54415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:14 np0005546420.localdomain python3[54417]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:05:14 np0005546420.localdomain sudo[54415]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:15 np0005546420.localdomain sudo[54458]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-igkaytqxxwkfkulsdelyyjnczggwkmhj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:15 np0005546420.localdomain sudo[54458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:15 np0005546420.localdomain python3[54460]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921914.5908844-85029-107650898275982/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:15 np0005546420.localdomain sudo[54458]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:15 np0005546420.localdomain sudo[54520]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxzbbohimdthehbwbciatqlmbyzrkzge ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:15 np0005546420.localdomain sudo[54520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:15 np0005546420.localdomain python3[54522]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:05:15 np0005546420.localdomain sudo[54520]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:15 np0005546420.localdomain sudo[54563]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-viwgeuqvmorudhdzuirlbayquituucvf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:15 np0005546420.localdomain sudo[54563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:16 np0005546420.localdomain python3[54565]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921915.495866-85081-251564148491943/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:16 np0005546420.localdomain sudo[54563]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:16 np0005546420.localdomain sudo[54625]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzvhccovpayxnxkldtakwnesiehvckea ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:16 np0005546420.localdomain sudo[54625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:16 np0005546420.localdomain python3[54627]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:05:16 np0005546420.localdomain sudo[54625]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:16 np0005546420.localdomain sudo[54668]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-piklwyfgvzowpyievqzqlardkegyjnos ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:16 np0005546420.localdomain sudo[54668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:17 np0005546420.localdomain python3[54670]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921916.3180315-85097-162273131880608/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:17 np0005546420.localdomain sudo[54668]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:17 np0005546420.localdomain sudo[54698]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xebrvicghtokrqbbajcwlmwkhnrxkejw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:17 np0005546420.localdomain sudo[54698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:17 np0005546420.localdomain python3[54700]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:05:17 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:05:17 np0005546420.localdomain systemd-rc-local-generator[54724]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:05:17 np0005546420.localdomain systemd-sysv-generator[54728]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:05:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:05:17 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:05:17 np0005546420.localdomain systemd-sysv-generator[54764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:05:17 np0005546420.localdomain systemd-rc-local-generator[54757]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:05:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:05:18 np0005546420.localdomain systemd[1]: Starting TripleO Container Shutdown...
Dec 05 08:05:18 np0005546420.localdomain systemd[1]: Finished TripleO Container Shutdown.
Dec 05 08:05:18 np0005546420.localdomain sudo[54698]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:18 np0005546420.localdomain sudo[54822]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvuobbhmirsnqdvjadjirmioaitrxezy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:18 np0005546420.localdomain sudo[54822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:18 np0005546420.localdomain python3[54824]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:05:18 np0005546420.localdomain sudo[54822]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:18 np0005546420.localdomain sudo[54865]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mzsqeijullmruorodfyvucxoyzzugxxa ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:18 np0005546420.localdomain sudo[54865]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:18 np0005546420.localdomain python3[54867]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921918.2216175-85216-198492352193227/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:18 np0005546420.localdomain sudo[54865]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:19 np0005546420.localdomain sudo[54927]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ustirsegtaljmwrjjooffvtlycovchqj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:19 np0005546420.localdomain sudo[54927]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:19 np0005546420.localdomain python3[54929]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:05:19 np0005546420.localdomain sudo[54927]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:19 np0005546420.localdomain sudo[54970]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmlxjgubaoqcbukqxloqxevzapjkcwwb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:19 np0005546420.localdomain sudo[54970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:19 np0005546420.localdomain python3[54972]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921919.0989993-85243-274875086333540/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:19 np0005546420.localdomain sudo[54970]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:19 np0005546420.localdomain sudo[55000]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvfplouissznqhwrzmoefukdzizsscif ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:20 np0005546420.localdomain sudo[55000]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:20 np0005546420.localdomain python3[55002]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:05:20 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:05:20 np0005546420.localdomain systemd-rc-local-generator[55025]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:05:20 np0005546420.localdomain systemd-sysv-generator[55030]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:05:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:05:20 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:05:20 np0005546420.localdomain systemd-sysv-generator[55071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:05:20 np0005546420.localdomain systemd-rc-local-generator[55068]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:05:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:05:20 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 08:05:20 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 08:05:20 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 08:05:20 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 08:05:20 np0005546420.localdomain sudo[55000]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:21 np0005546420.localdomain sudo[55093]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jkklyyeyskmjmrijkviwmjxujjqtxsmr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:21 np0005546420.localdomain sudo[55093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: 5ff3cb86de79e978498bafac8cf0172c
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: 4767aaabc3de112d8791c290aa2b669d
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: f466dfc41ade6bb0052985f932e2b61e
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: ac0f5be6f71e6f8c16cd05155c4b5429
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: ac0f5be6f71e6f8c16cd05155c4b5429
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: ac0f5be6f71e6f8c16cd05155c4b5429
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: ac0f5be6f71e6f8c16cd05155c4b5429
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: ac0f5be6f71e6f8c16cd05155c4b5429
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: ac0f5be6f71e6f8c16cd05155c4b5429
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: a03f6602210fb500978d9137df7e914f
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 2a14d146ce921397a1b78b68c853c045
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 2a14d146ce921397a1b78b68c853c045
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: ac0f5be6f71e6f8c16cd05155c4b5429
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: ac0f5be6f71e6f8c16cd05155c4b5429
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: d6812e1160bfb2e956bcab4e760845cf
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429
Dec 05 08:05:21 np0005546420.localdomain python3[55095]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: ac0f5be6f71e6f8c16cd05155c4b5429
Dec 05 08:05:21 np0005546420.localdomain sudo[55093]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:21 np0005546420.localdomain sudo[55109]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ncetbdiniyvqktinmdovbxgervqwmtex ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:21 np0005546420.localdomain sudo[55109]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:22 np0005546420.localdomain sudo[55109]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:22 np0005546420.localdomain sudo[55152]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uedzjiczghhonyfylpuutedmaznibnla ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:22 np0005546420.localdomain sudo[55152]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:22 np0005546420.localdomain python3[55154]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 05 08:05:23 np0005546420.localdomain podman[55191]: 2025-12-05 08:05:23.16056297 +0000 UTC m=+0.063750723 container create ef32d9afb9cd95f5d4451ed7c4e4519d452fe3ba8eef8877f53ee8370a4fa6bc (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64)
Dec 05 08:05:23 np0005546420.localdomain systemd[1]: Started libpod-conmon-ef32d9afb9cd95f5d4451ed7c4e4519d452fe3ba8eef8877f53ee8370a4fa6bc.scope.
Dec 05 08:05:23 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:05:23 np0005546420.localdomain podman[55191]: 2025-12-05 08:05:23.128495228 +0000 UTC m=+0.031683031 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 05 08:05:23 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a287e5baf3cfe95e635859734034da81401b58443d725291782300b9af04e40/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Dec 05 08:05:23 np0005546420.localdomain podman[55191]: 2025-12-05 08:05:23.240713784 +0000 UTC m=+0.143901557 container init ef32d9afb9cd95f5d4451ed7c4e4519d452fe3ba8eef8877f53ee8370a4fa6bc (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr_init_logs, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1)
Dec 05 08:05:23 np0005546420.localdomain podman[55191]: 2025-12-05 08:05:23.248934151 +0000 UTC m=+0.152121944 container start ef32d9afb9cd95f5d4451ed7c4e4519d452fe3ba8eef8877f53ee8370a4fa6bc (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 05 08:05:23 np0005546420.localdomain podman[55191]: 2025-12-05 08:05:23.249281832 +0000 UTC m=+0.152469615 container attach ef32d9afb9cd95f5d4451ed7c4e4519d452fe3ba8eef8877f53ee8370a4fa6bc (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=metrics_qdr_init_logs, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd)
Dec 05 08:05:23 np0005546420.localdomain systemd[1]: libpod-ef32d9afb9cd95f5d4451ed7c4e4519d452fe3ba8eef8877f53ee8370a4fa6bc.scope: Deactivated successfully.
Dec 05 08:05:23 np0005546420.localdomain podman[55191]: 2025-12-05 08:05:23.257274902 +0000 UTC m=+0.160462705 container died ef32d9afb9cd95f5d4451ed7c4e4519d452fe3ba8eef8877f53ee8370a4fa6bc (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, container_name=metrics_qdr_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 05 08:05:23 np0005546420.localdomain podman[55211]: 2025-12-05 08:05:23.341013708 +0000 UTC m=+0.070328199 container cleanup ef32d9afb9cd95f5d4451ed7c4e4519d452fe3ba8eef8877f53ee8370a4fa6bc (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, container_name=metrics_qdr_init_logs, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd)
Dec 05 08:05:23 np0005546420.localdomain systemd[1]: libpod-conmon-ef32d9afb9cd95f5d4451ed7c4e4519d452fe3ba8eef8877f53ee8370a4fa6bc.scope: Deactivated successfully.
Dec 05 08:05:23 np0005546420.localdomain python3[55154]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd
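[Note] The PODMAN-CONTAINER-DEBUG entry above records the exact `podman run` argv that ansible-tripleo_container_manage assembled from the container's config_data JSON. A minimal Python sketch of that mapping follows; build_podman_args is a hypothetical helper for illustration, not the tripleo-ansible module's real API, and the flag set mirrors only what appears in the log line.

    # Hypothetical sketch (not the tripleo-ansible implementation): expand a
    # config_data dict like the one logged for metrics_qdr_init_logs into the
    # `podman run` argv shown in the PODMAN-CONTAINER-DEBUG line above.
    def build_podman_args(name, cfg, config_id="tripleo_step1",
                          log_dir="/var/log/containers/stdouts"):
        args = ["podman", "run", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid",
                f"--detach={cfg.get('detach', True)}"]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "healthcheck" in cfg:  # becomes --healthcheck-command for metrics_qdr
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        args += ["--label", f"config_id={config_id}",
                 "--label", f"container_name={name}",
                 "--label", "managed_by=tripleo_ansible",
                 "--label", f"config_data={cfg}",
                 "--log-driver", "k8s-file",
                 "--log-opt", f"path={log_dir}/{name}.log",
                 "--network", cfg.get("net", "bridge"),
                 f"--privileged={cfg.get('privileged', False)}",
                 "--user", cfg.get("user", "root")]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        args.append(cfg["image"])
        if "command" in cfg:
            args += cfg["command"]  # e.g. ['/bin/bash', '-c', 'chown -R ...']
        return args

Fed the config_data dict from the log, this reproduces the argv above flag for flag, in the same order.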
Dec 05 08:05:23 np0005546420.localdomain podman[55290]: 2025-12-05 08:05:23.795506657 +0000 UTC m=+0.082427076 container create 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step1, container_name=metrics_qdr, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64)
Dec 05 08:05:23 np0005546420.localdomain systemd[1]: Started libpod-conmon-89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.scope.
Dec 05 08:05:23 np0005546420.localdomain podman[55290]: 2025-12-05 08:05:23.752280896 +0000 UTC m=+0.039201395 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 05 08:05:23 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:05:23 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef5a06c835915ebb12133f669566b60e1f53fa40ede7bc1454e6dd2b41cdd2b/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Dec 05 08:05:23 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef5a06c835915ebb12133f669566b60e1f53fa40ede7bc1454e6dd2b41cdd2b/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff)
Dec 05 08:05:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:05:23 np0005546420.localdomain podman[55290]: 2025-12-05 08:05:23.890348501 +0000 UTC m=+0.177268990 container init 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T22:49:46Z, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 05 08:05:23 np0005546420.localdomain sudo[55310]: qdrouterd : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:05:23 np0005546420.localdomain sudo[55310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42465)
Dec 05 08:05:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:05:23 np0005546420.localdomain podman[55290]: 2025-12-05 08:05:23.928224533 +0000 UTC m=+0.215144972 container start 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:05:23 np0005546420.localdomain python3[55154]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=5ff3cb86de79e978498bafac8cf0172c --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 05 08:05:23 np0005546420.localdomain sudo[55310]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:24 np0005546420.localdomain podman[55312]: 2025-12-05 08:05:24.043798744 +0000 UTC m=+0.098966753 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, release=1761123044, name=rhosp17/openstack-qdrouterd, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step1, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 05 08:05:24 np0005546420.localdomain sudo[55152]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5a287e5baf3cfe95e635859734034da81401b58443d725291782300b9af04e40-merged.mount: Deactivated successfully.
Dec 05 08:05:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef32d9afb9cd95f5d4451ed7c4e4519d452fe3ba8eef8877f53ee8370a4fa6bc-userdata-shm.mount: Deactivated successfully.
Dec 05 08:05:24 np0005546420.localdomain podman[55312]: 2025-12-05 08:05:24.252279278 +0000 UTC m=+0.307447257 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, container_name=metrics_qdr, release=1761123044, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 05 08:05:24 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:05:24 np0005546420.localdomain sudo[55380]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yyddhriarxptcmudvyjnjnyrniyyrbrc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:24 np0005546420.localdomain sudo[55380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:24 np0005546420.localdomain python3[55382]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:24 np0005546420.localdomain sudo[55380]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:24 np0005546420.localdomain sudo[55396]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vgpwedptkeqltprklstigarqrkkyzmcw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:24 np0005546420.localdomain sudo[55396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:24 np0005546420.localdomain python3[55398]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:05:24 np0005546420.localdomain sudo[55396]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:25 np0005546420.localdomain sudo[55457]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmrunyerhvrydbxahnswkqbmnhdkqenw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:25 np0005546420.localdomain sudo[55457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:25 np0005546420.localdomain python3[55459]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921924.7581422-85341-188068225595692/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:25 np0005546420.localdomain sudo[55457]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:25 np0005546420.localdomain sudo[55473]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efhvmryicuvjlerqzaogsglgwtcygtkq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:25 np0005546420.localdomain sudo[55473]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:25 np0005546420.localdomain python3[55475]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 08:05:25 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:05:25 np0005546420.localdomain systemd-sysv-generator[55503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:05:25 np0005546420.localdomain systemd-rc-local-generator[55499]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:05:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:05:26 np0005546420.localdomain sudo[55473]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:26 np0005546420.localdomain sudo[55525]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlesxtrmsqmrcowrcvzrmwrfmznnanab ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:05:26 np0005546420.localdomain sudo[55525]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:26 np0005546420.localdomain python3[55527]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:05:26 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:05:26 np0005546420.localdomain systemd-sysv-generator[55554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:05:26 np0005546420.localdomain systemd-rc-local-generator[55550]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:05:26 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:05:26 np0005546420.localdomain systemd[1]: Starting metrics_qdr container...
Dec 05 08:05:26 np0005546420.localdomain systemd[1]: Started metrics_qdr container.
Dec 05 08:05:26 np0005546420.localdomain sudo[55525]: pam_unix(sudo:session): session closed for user root
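[Note] The lines above show the usual TripleO hand-off from podman to systemd: ansible-copy installs /etc/systemd/system/tripleo_metrics_qdr.service, one ansible-systemd call performs a daemon-reload, and a second restarts and enables the unit, which triggers the "Starting metrics_qdr container..." / "Started metrics_qdr container." pair. A minimal Python equivalent of those two module invocations, assuming only plain systemctl is available (illustrative, not how the module shells out):

    import subprocess

    def reload_and_restart(unit="tripleo_metrics_qdr.service"):
        # ansible-systemd with daemon_reload=True
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        # ansible-systemd with state=restarted, enabled=True
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)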
Dec 05 08:05:27 np0005546420.localdomain sudo[55604]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rapbgxcukjdikhoqrncdgifkscptwywy ; /usr/bin/python3
Dec 05 08:05:27 np0005546420.localdomain sudo[55604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:27 np0005546420.localdomain python3[55606]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:27 np0005546420.localdomain sudo[55604]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:27 np0005546420.localdomain sudo[55652]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-euatqamgeounkzggjzmalezenmqxmrux ; /usr/bin/python3
Dec 05 08:05:27 np0005546420.localdomain sudo[55652]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:27 np0005546420.localdomain sudo[55652]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:28 np0005546420.localdomain sudo[55695]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-knorpdrvxiegnutkqldghoyurfixelmm ; /usr/bin/python3
Dec 05 08:05:28 np0005546420.localdomain sudo[55695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:28 np0005546420.localdomain sudo[55695]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:28 np0005546420.localdomain sudo[55725]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bemmbbjnztmlrsvpbnemklbaeidwianj ; /usr/bin/python3
Dec 05 08:05:28 np0005546420.localdomain sudo[55725]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:28 np0005546420.localdomain python3[55727]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005546420 step=1 update_config_hash_only=False
Dec 05 08:05:28 np0005546420.localdomain sudo[55725]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:29 np0005546420.localdomain sudo[55741]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rllzbkwuxwuezukxgaycgpkvurjjmrjm ; /usr/bin/python3
Dec 05 08:05:29 np0005546420.localdomain sudo[55741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:29 np0005546420.localdomain python3[55743]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:05:29 np0005546420.localdomain sudo[55741]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:29 np0005546420.localdomain sudo[55757]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dtovyektggajhtwbkxissmvtpccdhkxx ; /usr/bin/python3
Dec 05 08:05:29 np0005546420.localdomain sudo[55757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:05:29 np0005546420.localdomain python3[55759]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 05 08:05:29 np0005546420.localdomain sudo[55757]: pam_unix(sudo:session): session closed for user root
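[Note] The ansible-container_config_data call above effectively globs config_pattern under config_path and layers config_overrides on top. A rough Python rendering of that contract, with parameter names taken from the Invoked line; the shallow-merge rule for overrides is an assumption, not confirmed by the log:

    import glob
    import json
    import os

    def load_config_data(config_path, config_pattern="container-puppet-*.json",
                         config_overrides=None):
        # Collect every matching per-container JSON file, keyed by filename.
        data = {}
        for path in sorted(glob.glob(os.path.join(config_path, config_pattern))):
            with open(path) as fh:
                data[os.path.basename(path)] = json.load(fh)
        data.update(config_overrides or {})  # assumed shallow-merge semantics
        return data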
Dec 05 08:05:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:05:54 np0005546420.localdomain systemd[1]: tmp-crun.CHFRqW.mount: Deactivated successfully.
Dec 05 08:05:54 np0005546420.localdomain podman[55760]: 2025-12-05 08:05:54.530437244 +0000 UTC m=+0.101390222 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=metrics_qdr, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:05:54 np0005546420.localdomain podman[55760]: 2025-12-05 08:05:54.749431294 +0000 UTC m=+0.320384282 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com)
Dec 05 08:05:54 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:05:58 np0005546420.localdomain sudo[55787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:05:58 np0005546420.localdomain sudo[55787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:05:58 np0005546420.localdomain sudo[55787]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:58 np0005546420.localdomain sudo[55802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:05:58 np0005546420.localdomain sudo[55802]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:05:59 np0005546420.localdomain sudo[55802]: pam_unix(sudo:session): session closed for user root
Dec 05 08:05:59 np0005546420.localdomain sudo[55848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:05:59 np0005546420.localdomain sudo[55848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:05:59 np0005546420.localdomain sudo[55848]: pam_unix(sudo:session): session closed for user root
Dec 05 08:06:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:06:25 np0005546420.localdomain systemd[1]: tmp-crun.XcBXSI.mount: Deactivated successfully.
Dec 05 08:06:25 np0005546420.localdomain podman[55863]: 2025-12-05 08:06:25.511245955 +0000 UTC m=+0.089017197 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.openshift.expose-services=, release=1761123044, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team)
Dec 05 08:06:25 np0005546420.localdomain podman[55863]: 2025-12-05 08:06:25.744600932 +0000 UTC m=+0.322372204 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:06:25 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:06:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:06:56 np0005546420.localdomain podman[55890]: 2025-12-05 08:06:56.51548163 +0000 UTC m=+0.087420878 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:46Z, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Dec 05 08:06:56 np0005546420.localdomain podman[55890]: 2025-12-05 08:06:56.721714361 +0000 UTC m=+0.293653659 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 05 08:06:56 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:06:59 np0005546420.localdomain sudo[55921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:06:59 np0005546420.localdomain sudo[55921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:06:59 np0005546420.localdomain sudo[55921]: pam_unix(sudo:session): session closed for user root
Dec 05 08:07:00 np0005546420.localdomain sudo[55936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:07:00 np0005546420.localdomain sudo[55936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:07:00 np0005546420.localdomain sudo[55936]: pam_unix(sudo:session): session closed for user root
Dec 05 08:07:01 np0005546420.localdomain sudo[55984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:07:01 np0005546420.localdomain sudo[55984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:07:01 np0005546420.localdomain sudo[55984]: pam_unix(sudo:session): session closed for user root
Dec 05 08:07:13 np0005546420.localdomain sshd[55999]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:07:14 np0005546420.localdomain sshd[55999]: Connection reset by authenticating user root 45.135.232.92 port 21804 [preauth]
Dec 05 08:07:15 np0005546420.localdomain sshd[56001]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:07:16 np0005546420.localdomain sshd[56001]: Connection reset by authenticating user root 45.135.232.92 port 25972 [preauth]
Dec 05 08:07:16 np0005546420.localdomain sshd[56003]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:07:18 np0005546420.localdomain sshd[56003]: Connection reset by authenticating user root 45.135.232.92 port 25988 [preauth]
Dec 05 08:07:18 np0005546420.localdomain sshd[56005]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:07:21 np0005546420.localdomain sshd[56005]: Invalid user user from 45.135.232.92 port 25996
Dec 05 08:07:21 np0005546420.localdomain sshd[56005]: Connection reset by invalid user user 45.135.232.92 port 25996 [preauth]
Dec 05 08:07:21 np0005546420.localdomain sshd[56007]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:07:23 np0005546420.localdomain sshd[56007]: Connection reset by authenticating user root 45.135.232.92 port 26012 [preauth]
Dec 05 08:07:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:07:27 np0005546420.localdomain systemd[1]: tmp-crun.cVesTy.mount: Deactivated successfully.
Dec 05 08:07:27 np0005546420.localdomain podman[56009]: 2025-12-05 08:07:27.520511877 +0000 UTC m=+0.096254642 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:07:27 np0005546420.localdomain podman[56009]: 2025-12-05 08:07:27.715941282 +0000 UTC m=+0.291683987 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step1)
Dec 05 08:07:27 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:07:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:07:58 np0005546420.localdomain systemd[1]: tmp-crun.j44XL0.mount: Deactivated successfully.
Dec 05 08:07:58 np0005546420.localdomain podman[56038]: 2025-12-05 08:07:58.505399804 +0000 UTC m=+0.085647516 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, config_id=tripleo_step1, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, distribution-scope=public)
Dec 05 08:07:58 np0005546420.localdomain podman[56038]: 2025-12-05 08:07:58.69630121 +0000 UTC m=+0.276548882 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=metrics_qdr, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 05 08:07:58 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:08:01 np0005546420.localdomain sudo[56067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:08:01 np0005546420.localdomain sudo[56067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:08:01 np0005546420.localdomain sudo[56067]: pam_unix(sudo:session): session closed for user root
Dec 05 08:08:01 np0005546420.localdomain sudo[56082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:08:01 np0005546420.localdomain sudo[56082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:08:02 np0005546420.localdomain sudo[56082]: pam_unix(sudo:session): session closed for user root
Dec 05 08:08:02 np0005546420.localdomain sudo[56130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:08:02 np0005546420.localdomain sudo[56130]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:08:02 np0005546420.localdomain sudo[56130]: pam_unix(sudo:session): session closed for user root
Dec 05 08:08:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:08:29 np0005546420.localdomain podman[56145]: 2025-12-05 08:08:29.507537164 +0000 UTC m=+0.081545189 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr, url=https://www.redhat.com)
Dec 05 08:08:29 np0005546420.localdomain podman[56145]: 2025-12-05 08:08:29.713797615 +0000 UTC m=+0.287805600 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, tcib_managed=true, config_id=tripleo_step1, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:08:29 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:09:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:09:00 np0005546420.localdomain systemd[1]: tmp-crun.3NKx4S.mount: Deactivated successfully.
Dec 05 08:09:00 np0005546420.localdomain podman[56174]: 2025-12-05 08:09:00.507472429 +0000 UTC m=+0.080647300 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64)
Dec 05 08:09:00 np0005546420.localdomain podman[56174]: 2025-12-05 08:09:00.735704939 +0000 UTC m=+0.308879780 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, container_name=metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, version=17.1.12)
Dec 05 08:09:00 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:09:01 np0005546420.localdomain sshd[56204]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:09:02 np0005546420.localdomain sshd[56204]: Connection reset by authenticating user root 45.140.17.124 port 54226 [preauth]
Dec 05 08:09:02 np0005546420.localdomain sudo[56206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:09:02 np0005546420.localdomain sudo[56206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:09:03 np0005546420.localdomain sudo[56206]: pam_unix(sudo:session): session closed for user root
Dec 05 08:09:03 np0005546420.localdomain sudo[56221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:09:03 np0005546420.localdomain sudo[56221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:09:03 np0005546420.localdomain sshd[56236]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:09:03 np0005546420.localdomain sudo[56221]: pam_unix(sudo:session): session closed for user root
Dec 05 08:09:04 np0005546420.localdomain sudo[56269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:09:04 np0005546420.localdomain sudo[56269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:09:04 np0005546420.localdomain sudo[56269]: pam_unix(sudo:session): session closed for user root
Dec 05 08:09:05 np0005546420.localdomain sshd[56236]: Connection reset by authenticating user root 45.140.17.124 port 54234 [preauth]
Dec 05 08:09:05 np0005546420.localdomain sshd[56284]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:09:07 np0005546420.localdomain sshd[56284]: Invalid user user from 45.140.17.124 port 54246
Dec 05 08:09:08 np0005546420.localdomain sshd[56284]: Connection reset by invalid user user 45.140.17.124 port 54246 [preauth]
Dec 05 08:09:08 np0005546420.localdomain sshd[56286]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:09:10 np0005546420.localdomain sshd[56286]: Connection reset by authenticating user root 45.140.17.124 port 54264 [preauth]
Dec 05 08:09:10 np0005546420.localdomain sshd[56288]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:09:12 np0005546420.localdomain sshd[56288]: Invalid user user from 45.140.17.124 port 54294
Dec 05 08:09:12 np0005546420.localdomain sshd[56288]: Connection reset by invalid user user 45.140.17.124 port 54294 [preauth]
Dec 05 08:09:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:09:31 np0005546420.localdomain systemd[1]: tmp-crun.Gc6EQ7.mount: Deactivated successfully.
Dec 05 08:09:31 np0005546420.localdomain podman[56290]: 2025-12-05 08:09:31.524212052 +0000 UTC m=+0.102113345 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4)
Dec 05 08:09:31 np0005546420.localdomain podman[56290]: 2025-12-05 08:09:31.7643828 +0000 UTC m=+0.342284043 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd)
Dec 05 08:09:31 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:10:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:10:02 np0005546420.localdomain systemd[1]: tmp-crun.xBkuqy.mount: Deactivated successfully.
Dec 05 08:10:02 np0005546420.localdomain podman[56319]: 2025-12-05 08:10:02.497740445 +0000 UTC m=+0.073934320 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-qdrouterd)
Dec 05 08:10:02 np0005546420.localdomain podman[56319]: 2025-12-05 08:10:02.672672395 +0000 UTC m=+0.248866270 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:10:02 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:10:04 np0005546420.localdomain sudo[56348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:10:04 np0005546420.localdomain sudo[56348]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:10:04 np0005546420.localdomain sudo[56348]: pam_unix(sudo:session): session closed for user root
Dec 05 08:10:04 np0005546420.localdomain sudo[56363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:10:04 np0005546420.localdomain sudo[56363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:10:05 np0005546420.localdomain sudo[56363]: pam_unix(sudo:session): session closed for user root
Dec 05 08:10:05 np0005546420.localdomain sudo[56409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:10:05 np0005546420.localdomain sudo[56409]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:10:05 np0005546420.localdomain sudo[56409]: pam_unix(sudo:session): session closed for user root
Dec 05 08:10:07 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 22 pg[2.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2,1,3] r=1 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:08 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1,2,0] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:09 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 25 pg[3.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1,2,0] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:11 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 26 pg[4.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [3,5,1] r=2 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:12 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 28 pg[5.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [4,3,2] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:14 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 29 pg[5.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [4,3,2] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:27 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 34 pg[6.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0,5,1] r=2 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:30 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 36 pg[7.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [5,1,3] r=1 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:31 np0005546420.localdomain sudo[56424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:10:31 np0005546420.localdomain sudo[56424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:10:31 np0005546420.localdomain sudo[56424]: pam_unix(sudo:session): session closed for user root
Dec 05 08:10:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:10:33 np0005546420.localdomain podman[56439]: 2025-12-05 08:10:33.517157439 +0000 UTC m=+0.071812254 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step1, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12, container_name=metrics_qdr, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:10:33 np0005546420.localdomain podman[56439]: 2025-12-05 08:10:33.706427851 +0000 UTC m=+0.261082626 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-qdrouterd)
Dec 05 08:10:33 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:10:34 np0005546420.localdomain sudo[56468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:10:34 np0005546420.localdomain sudo[56468]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:10:34 np0005546420.localdomain sudo[56468]: pam_unix(sudo:session): session closed for user root
Dec 05 08:10:34 np0005546420.localdomain sudo[56483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:10:34 np0005546420.localdomain sudo[56483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:10:34 np0005546420.localdomain sudo[56483]: pam_unix(sudo:session): session closed for user root
Dec 05 08:10:37 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 41 pg[2.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=41 pruub=10.306292534s) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active pruub 1121.279785156s@ mbc={}] start_peering_interval up [2,1,3] -> [2,1,3], acting [2,1,3] -> [2,1,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:37 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=12.318445206s) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 1123.292114258s@ mbc={}] start_peering_interval up [1,2,0] -> [1,2,0], acting [1,2,0] -> [1,2,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:37 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=12.318445206s) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1123.292114258s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:37 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 41 pg[2.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=41 pruub=10.301738739s) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1121.279785156s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.19( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.13( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.14( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.14( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.16( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.18( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.19( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.12( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.11( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.13( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.10( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.10( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.f( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.e( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.c( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.d( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.b( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.b( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.1( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.4( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.2( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.2( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.8( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1a( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.1a( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.1c( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1c( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.1d( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.1e( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[2.1f( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=1 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=41/42 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.19( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:38 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=0 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011579514s) [1,5,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.009643555s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,0], acting [2,1,3] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011459351s) [1,0,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.009643555s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,5], acting [2,1,3] -> [1,0,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011579514s) [1,5,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.009643555s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009799004s) [1,5,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.008178711s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,3], acting [2,1,3] -> [1,5,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011459351s) [1,0,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.009643555s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009799004s) [1,5,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.008178711s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010943413s) [1,3,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.009643555s@ mbc={}] start_peering_interval up [2,1,3] -> [1,3,2], acting [2,1,3] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010943413s) [1,3,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.009643555s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012185097s) [1,5,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.011230469s@ mbc={}] start_peering_interval up [1,2,0] -> [1,5,0], acting [1,2,0] -> [1,5,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012185097s) [1,5,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.011230469s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008332253s) [1,5,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.007690430s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,3], acting [2,1,3] -> [1,5,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.14( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008380890s) [2,4,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.007690430s@ mbc={}] start_peering_interval up [2,1,3] -> [2,4,0], acting [2,1,3] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.14( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008342743s) [2,4,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.007690430s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011666298s) [2,3,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.011108398s@ mbc={}] start_peering_interval up [1,2,0] -> [2,3,1], acting [1,2,0] -> [2,3,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008332253s) [1,5,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.007690430s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012151718s) [2,1,0] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.011474609s@ mbc={}] start_peering_interval up [1,2,0] -> [2,1,0], acting [1,2,0] -> [2,1,0], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011595726s) [2,3,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.011108398s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010461807s) [2,4,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.010375977s@ mbc={}] start_peering_interval up [1,2,0] -> [2,4,0], acting [1,2,0] -> [2,4,0], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010404587s) [2,4,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.010375977s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007883072s) [1,0,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.007812500s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,2], acting [2,1,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007883072s) [1,0,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.007812500s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011757851s) [2,1,0] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.011474609s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010251999s) [2,3,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.010742188s@ mbc={}] start_peering_interval up [1,2,0] -> [2,3,4], acting [1,2,0] -> [2,3,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010187149s) [2,3,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.010742188s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011160851s) [0,2,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.012084961s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,1], acting [1,2,0] -> [0,2,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011111259s) [0,2,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.012084961s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006780624s) [5,1,3] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.007934570s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,3], acting [2,1,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009302139s) [0,5,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.010498047s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,1], acting [1,2,0] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006684303s) [5,1,3] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.007934570s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009259224s) [0,5,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.010498047s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006427765s) [5,3,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.007934570s@ mbc={}] start_peering_interval up [2,1,3] -> [5,3,4], acting [2,1,3] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.12( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006444931s) [4,3,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.007934570s@ mbc={}] start_peering_interval up [2,1,3] -> [4,3,2], acting [2,1,3] -> [4,3,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008347511s) [0,5,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.010131836s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,1], acting [1,2,0] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006378174s) [5,3,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.007934570s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.12( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006275177s) [4,3,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.007934570s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008295059s) [0,5,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.010131836s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006250381s) [4,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.007812500s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,3], acting [2,1,3] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.10( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006079674s) [4,0,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.007934570s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,5], acting [2,1,3] -> [4,0,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009894371s) [5,1,3] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.011962891s@ mbc={}] start_peering_interval up [1,2,0] -> [5,1,3], acting [1,2,0] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.10( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006041527s) [4,0,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.007934570s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009857178s) [5,1,3] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.011962891s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008723259s) [4,5,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.010864258s@ mbc={}] start_peering_interval up [1,2,0] -> [4,5,0], acting [1,2,0] -> [4,5,0], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007866859s) [4,2,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.010131836s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,0], acting [2,1,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005888939s) [3,4,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.008178711s@ mbc={}] start_peering_interval up [2,1,3] -> [3,4,2], acting [2,1,3] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008677483s) [4,5,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.010864258s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007834435s) [4,2,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.010131836s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005851746s) [3,4,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.008178711s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006045341s) [5,1,0] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.007812500s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,0], acting [2,1,3] -> [5,1,0], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008871078s) [5,3,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.011352539s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011146545s) [5,3,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.013671875s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009213448s) [5,1,3] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.011718750s@ mbc={}] start_peering_interval up [1,2,0] -> [5,1,3], acting [1,2,0] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008822441s) [5,3,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.011352539s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009154320s) [5,1,3] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.011718750s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011093140s) [5,3,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.013671875s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009677887s) [3,5,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.012451172s@ mbc={}] start_peering_interval up [1,2,0] -> [3,5,1], acting [1,2,0] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006904602s) [3,5,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.009643555s@ mbc={}] start_peering_interval up [2,1,3] -> [3,5,4], acting [2,1,3] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009624481s) [3,5,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.012451172s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010048866s) [4,0,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.012939453s@ mbc={}] start_peering_interval up [1,2,0] -> [4,0,5], acting [1,2,0] -> [4,0,5], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006834030s) [3,5,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.009643555s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010005951s) [4,0,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.012939453s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005891800s) [4,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.007812500s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009527206s) [0,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.012695312s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,4], acting [1,2,0] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006689072s) [3,4,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.009765625s@ mbc={}] start_peering_interval up [2,1,3] -> [3,4,5], acting [2,1,3] -> [3,4,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009485245s) [0,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.012695312s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006592751s) [3,4,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.009765625s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[4.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=43 pruub=12.901194572s) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.904418945s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,1], acting [3,5,1] -> [3,5,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006243706s) [3,1,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.009643555s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,5], acting [2,1,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006188393s) [3,1,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.009643555s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010007858s) [3,1,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.013793945s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,2], acting [1,2,0] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005145073s) [5,1,0] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.007812500s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009958267s) [3,1,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.013793945s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.1d( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,5,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005956650s) [3,1,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.010131836s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,2], acting [2,1,3] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005907059s) [3,1,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.010131836s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008992195s) [5,3,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.013793945s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,4], acting [1,2,0] -> [5,3,4], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010184288s) [5,0,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.015014648s@ mbc={}] start_peering_interval up [2,1,3] -> [5,0,4], acting [2,1,3] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.1a( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,3,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005116463s) [1,0,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.010131836s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,2], acting [2,1,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.1b( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,5,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007058144s) [3,1,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.012207031s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,2], acting [1,2,0] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007019997s) [3,1,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.012207031s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.1c( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.11( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,5,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009733200s) [5,0,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.015014648s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008431435s) [5,3,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.013793945s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006917953s) [2,0,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.012329102s@ mbc={}] start_peering_interval up [1,2,0] -> [2,0,4], acting [1,2,0] -> [2,0,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006879807s) [2,0,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.012329102s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007334709s) [3,4,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.012817383s@ mbc={}] start_peering_interval up [1,2,0] -> [3,4,5], acting [1,2,0] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007291794s) [3,4,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.012817383s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009800911s) [5,1,3] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.015625000s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,3], acting [2,1,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010020256s) [5,3,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.015625000s@ mbc={}] start_peering_interval up [2,1,3] -> [5,3,1], acting [2,1,3] -> [5,3,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009736061s) [5,1,3] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.015625000s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009564400s) [5,3,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.015625000s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006139755s) [0,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.012207031s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,4], acting [1,2,0] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006104469s) [0,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.012207031s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.10( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,0,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008909225s) [2,1,0] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.015258789s@ mbc={}] start_peering_interval up [2,1,3] -> [2,1,0], acting [2,1,3] -> [2,1,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008879662s) [2,1,0] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.015258789s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005116463s) [1,0,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.010131836s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006830215s) [4,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.013305664s@ mbc={}] start_peering_interval up [1,2,0] -> [4,2,3], acting [1,2,0] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.9( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006796837s) [4,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.013305664s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.003973961s) [3,1,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.010620117s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,5], acting [1,2,0] -> [3,1,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.003938675s) [3,1,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.010620117s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.000023842s) [3,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.006958008s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.999980927s) [3,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.006958008s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.18( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.f( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.12( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,3,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006337166s) [4,5,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.013671875s@ mbc={}] start_peering_interval up [1,2,0] -> [4,5,3], acting [1,2,0] -> [4,5,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[4.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=43 pruub=12.897197723s) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.904418945s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006291389s) [4,5,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.013671875s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008043289s) [2,4,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.015380859s@ mbc={}] start_peering_interval up [2,1,3] -> [2,4,3], acting [2,1,3] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008002281s) [2,4,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.015380859s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005967140s) [4,3,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.013305664s@ mbc={}] start_peering_interval up [1,2,0] -> [4,3,2], acting [1,2,0] -> [4,3,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005938530s) [4,3,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.013305664s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008011818s) [1,2,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.015502930s@ mbc={}] start_peering_interval up [2,1,3] -> [1,2,3], acting [2,1,3] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.008011818s) [1,2,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.015502930s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005453110s) [1,2,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.013061523s@ mbc={}] start_peering_interval up [1,2,0] -> [1,2,3], acting [1,2,0] -> [1,2,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005453110s) [1,2,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.013061523s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007420540s) [4,2,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.015380859s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,0], acting [2,1,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007383347s) [4,2,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.015380859s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005715370s) [5,3,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.013793945s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005661011s) [5,3,1] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.013793945s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007295609s) [4,5,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.015380859s@ mbc={}] start_peering_interval up [2,1,3] -> [4,5,0], acting [2,1,3] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005669594s) [0,5,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.013916016s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,4], acting [1,2,0] -> [0,5,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.007122040s) [4,5,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.015380859s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.8( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,0,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.004897118s) [3,4,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.013671875s@ mbc={}] start_peering_interval up [1,2,0] -> [3,4,5], acting [1,2,0] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.004836082s) [3,4,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.013671875s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006673813s) [3,1,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.015380859s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,5], acting [2,1,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[5.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43 pruub=15.687231064s) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active pruub 1124.027954102s@ mbc={}] start_peering_interval up [4,3,2] -> [4,3,2], acting [4,3,2] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006398201s) [3,1,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.015380859s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006369591s) [0,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.015502930s@ mbc={}] start_peering_interval up [2,1,3] -> [0,2,4], acting [2,1,3] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.005071640s) [0,5,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.013916016s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 43 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.006322861s) [0,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.015502930s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:39 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[5.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43 pruub=15.687231064s) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1124.027954102s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.1e( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.11( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.13( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.15( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.17( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.1f( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.16( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.9( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.b( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.a( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.c( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.6( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.1f( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,5,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.2( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.3( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.19( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.7( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.6( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1c( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.3( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1d( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.4( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.6( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.1( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1a( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.e( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.1( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.1f( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1b( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.17( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.10( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.13( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.12( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.f( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.8( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.1c( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.1d( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.1a( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.1b( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.18( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[4.19( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=2 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1f( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.1e( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.11( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.15( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.16( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.e( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.b( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.14( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.c( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.d( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.18( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.a( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1e( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.5( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.3( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [2,0,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.16( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [2,3,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.14( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [2,4,0] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.e( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.9( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,5,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.2( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.14( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2,4,0] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.1( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.19( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,2,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.1a( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2,4,3] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,0,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[3.5( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,3,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,5,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.11( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,3,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 43 pg[2.2( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,0,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,0,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[2.c( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,0,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[2.b( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,5,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[2.12( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,3,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[2.18( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,5,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,3,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[2.13( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,5,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,2,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,2,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[3.1a( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,3,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[3.9( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[2.1c( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[2.f( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.0( empty local-lis/les=43/44 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[3.11( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,5,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[3.1b( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,5,3] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[3.8( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,0,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[2.1d( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,5,0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[2.10( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,0,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:40 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=0 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:41 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 45 pg[7.0( v 38'39 (0'0,38'39] local-lis/les=36/37 n=22 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=12.801365852s) [5,1,3] r=1 lpr=45 pi=[36,45)/1 luod=0'0 lua=38'37 crt=38'39 lcod 38'38 mlcod 0'0 active pruub 1128.046508789s@ mbc={}] start_peering_interval up [5,1,3] -> [5,1,3], acting [5,1,3] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:41 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 45 pg[6.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=10.351431847s) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 1125.596679688s@ mbc={}] start_peering_interval up [0,5,1] -> [0,5,1], acting [0,5,1] -> [0,5,1], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:41 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 45 pg[7.0( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=12.800192833s) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 lcod 38'38 mlcod 0'0 unknown NOTIFY pruub 1128.046508789s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:41 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 45 pg[6.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=10.348045349s) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.596679688s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.13( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.11( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.10( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.16( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.15( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.14( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.12( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.9( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.8( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.e( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.f( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.c( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.5( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.4( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.5( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.1( v 38'39 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.7( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.7( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.3( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.2( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[7.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=1 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.1f( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.19( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.18( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.1a( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.1b( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=2 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 05 08:10:42 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.11 scrub ok
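The osd.1 entries above follow one pattern per affected placement group: when a new OSDMap epoch arrives, the PG logs a start_peering_interval event recording the old and new up/acting sets, acting_primary, up_primary, and this OSD's role, and the peering state machine then re-enters Start and transitions to Stray wherever osd.1 holds a non-primary replica, as it does throughout the epoch-45/46 intervals here. A minimal parsing sketch for triaging such bursts follows; the regex and the PeeringEvent helper are illustrative assumptions, not part of any Ceph tooling, and they target only the fields visible in these lines.

import re
from dataclasses import dataclass
from typing import List, Optional

# Illustrative parser for the "start_peering_interval" lines above.
# The pattern covers only the fields visible in this log; it is a sketch,
# not an official Ceph log grammar.
LINE_RE = re.compile(
    r"pg\[(?P<pgid>[0-9a-f.]+)\(.*?\] start_peering_interval"
    r" up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\],"
    r" acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\],"
    r" acting_primary (?P<ap_old>-?\d+) -> (?P<ap_new>-?\d+),"
    r" up_primary (?P<upp_old>-?\d+) -> (?P<upp_new>-?\d+),"
    r" role (?P<role_old>-?\d+) -> (?P<role_new>-?\d+)"
)

@dataclass
class PeeringEvent:  # hypothetical helper type, not a Ceph structure
    pgid: str
    acting_old: List[int]
    acting_new: List[int]
    role_new: int    # -1 means this OSD left the acting set

def parse(line: str) -> Optional[PeeringEvent]:
    m = LINE_RE.search(line)
    if m is None:
        return None
    ints = lambda s: [int(x) for x in s.split(",") if x]
    return PeeringEvent(m["pgid"], ints(m["act_old"]),
                        ints(m["act_new"]), int(m["role_new"]))

Run over the epoch-47 burst that follows, this would yield, for example, pgid 6.10 with acting [0,5,1] -> [0,1,2] and role_new 1, matching the Stray transition logged immediately after that event.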
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.12( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306126595s) [0,1,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833374023s@ mbc={}] start_peering_interval up [0,5,1] -> [0,1,2], acting [0,5,1] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306059837s) [0,1,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.833374023s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306365013s) [1,3,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833984375s@ mbc={}] start_peering_interval up [0,5,1] -> [1,3,5], acting [0,5,1] -> [1,3,5], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.305687904s) [1,0,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833374023s@ mbc={}] start_peering_interval up [0,5,1] -> [1,0,2], acting [0,5,1] -> [1,0,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306365013s) [1,3,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.833984375s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.305687904s) [1,0,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.833374023s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.305532455s) [0,5,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833496094s@ mbc={}] start_peering_interval up [0,5,1] -> [0,5,4], acting [0,5,1] -> [0,5,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.305495262s) [0,5,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.833496094s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.8( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,0,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.4( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,3,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.1a( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.529669762s) [1,3,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.058959961s@ mbc={}] start_peering_interval up [3,5,1] -> [1,3,2], acting [3,5,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.529669762s) [1,3,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.058959961s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.305157661s) [1,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834838867s@ mbc={}] start_peering_interval up [0,5,1] -> [1,2,0], acting [0,5,1] -> [1,2,0], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.305157661s) [1,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.834838867s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.530244827s) [2,3,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.060058594s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.530164719s) [2,3,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.060058594s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.304923058s) [5,4,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834838867s@ mbc={}] start_peering_interval up [0,5,1] -> [5,4,0], acting [0,5,1] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.304425240s) [5,3,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834716797s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,1], acting [0,5,1] -> [5,3,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.527798653s) [2,3,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.058105469s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.304368973s) [5,3,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834716797s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.528703690s) [2,3,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.058959961s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,4], acting [3,5,1] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.527732849s) [2,3,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.058105469s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.528645515s) [2,3,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.058959961s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.304225922s) [0,2,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834838867s@ mbc={}] start_peering_interval up [0,5,1] -> [0,2,4], acting [0,5,1] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.304839134s) [5,4,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834838867s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.304168701s) [0,2,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834838867s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.303846359s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834594727s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.303652763s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834594727s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525822639s) [4,5,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.057006836s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.303739548s) [4,5,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834960938s@ mbc={}] start_peering_interval up [0,5,1] -> [4,5,3], acting [0,5,1] -> [4,5,3], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525749207s) [4,5,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.057006836s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.526973724s) [1,3,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.058349609s@ mbc={}] start_peering_interval up [3,5,1] -> [1,3,2], acting [3,5,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.303676605s) [4,5,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834960938s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.526973724s) [1,3,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.058349609s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.303380966s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.835083008s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.303818703s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.835815430s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.303330421s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.835083008s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.301595688s) [2,4,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833496094s@ mbc={}] start_peering_interval up [0,5,1] -> [2,4,0], acting [0,5,1] -> [2,4,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.303753853s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.835815430s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.301551819s) [2,4,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.833496094s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524571419s) [2,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.056884766s@ mbc={}] start_peering_interval up [3,5,1] -> [2,4,0], acting [3,5,1] -> [2,4,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524510384s) [2,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.056884766s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.302081108s) [3,2,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834594727s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,4], acting [0,5,1] -> [3,2,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.302020073s) [3,2,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834594727s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524170876s) [4,2,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.056884766s@ mbc={}] start_peering_interval up [3,5,1] -> [4,2,0], acting [3,5,1] -> [4,2,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524290085s) [4,5,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.057006836s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524054527s) [4,5,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.057006836s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523768425s) [5,1,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.056762695s@ mbc={}] start_peering_interval up [3,5,1] -> [5,1,0], acting [3,5,1] -> [5,1,0], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.301591873s) [5,4,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834594727s@ mbc={}] start_peering_interval up [0,5,1] -> [5,4,0], acting [0,5,1] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523712158s) [5,1,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.056762695s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.301405907s) [5,4,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834594727s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524080276s) [4,2,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.056884766s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.302008629s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.835327148s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523121834s) [0,1,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.056640625s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.301947594s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.835327148s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523069382s) [0,1,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.056640625s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.301169395s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834716797s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.299508095s) [3,4,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833251953s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,2], acting [0,5,1] -> [3,4,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.301100731s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834716797s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.299420357s) [3,4,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.833251953s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300369263s) [3,5,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834472656s@ mbc={}] start_peering_interval up [0,5,1] -> [3,5,1], acting [0,5,1] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300257683s) [3,5,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834472656s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300011635s) [1,5,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834472656s@ mbc={}] start_peering_interval up [0,5,1] -> [1,5,3], acting [0,5,1] -> [1,5,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300011635s) [1,5,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.834472656s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.521674156s) [1,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.056274414s@ mbc={}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.521674156s) [1,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.056274414s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.1( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.299805641s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.834594727s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523059845s) [1,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.057861328s@ mbc={}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.3( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300584793s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.835693359s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523059845s) [1,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.057861328s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.1( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.299494743s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834594727s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.3( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300519943s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.835693359s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.299302101s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834716797s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.298784256s) [5,1,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834350586s@ mbc={}] start_peering_interval up [0,5,1] -> [5,1,0], acting [0,5,1] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.299148560s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834716797s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.298715591s) [5,1,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834350586s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.5( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.299406052s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.835083008s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.5( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.299362183s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.835083008s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517973900s) [4,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.054199219s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517997742s) [0,1,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.054199219s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517918587s) [4,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.054199219s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.297765732s) [3,2,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834106445s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,1], acting [0,5,1] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.297690392s) [3,2,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834106445s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.519013405s) [3,4,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.055419922s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,5], acting [3,5,1] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.297551155s) [2,3,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834106445s@ mbc={}] start_peering_interval up [0,5,1] -> [2,3,1], acting [0,5,1] -> [2,3,1], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.297482491s) [2,3,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834106445s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518969536s) [3,4,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.055419922s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517796516s) [0,1,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.054199219s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.298344612s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.835327148s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.296963692s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834106445s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.298284531s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.835327148s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.520793915s) [5,3,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.057983398s@ mbc={}] start_peering_interval up [3,5,1] -> [5,3,1], acting [3,5,1] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.296910286s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834106445s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.520748138s) [5,3,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.057983398s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517177582s) [2,1,3] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.054687500s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.296657562s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834228516s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517126083s) [2,1,3] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.054687500s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.296603203s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834228516s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.9( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.296928406s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.834716797s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.9( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.296885490s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834716797s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.295558929s) [3,5,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833496094s@ mbc={}] start_peering_interval up [0,5,1] -> [3,5,4], acting [0,5,1] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.295492172s) [3,5,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.833496094s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515918732s) [2,0,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.054199219s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,1], acting [3,5,1] -> [2,0,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515876770s) [2,0,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.054199219s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.296145439s) [3,1,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833618164s@ mbc={}] start_peering_interval up [0,5,1] -> [3,1,2], acting [0,5,1] -> [3,1,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.295205116s) [3,1,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.833618164s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.296525955s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.835083008s@ mbc={}] start_peering_interval up [0,5,1] -> [2,1,3], acting [0,5,1] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515365601s) [0,2,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.053955078s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,4], acting [3,5,1] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.294904709s) [0,1,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833984375s@ mbc={}] start_peering_interval up [0,5,1] -> [0,1,5], acting [0,5,1] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.514933586s) [0,2,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.053955078s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.294856071s) [0,1,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.833984375s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.296036720s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.835083008s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.b( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.295751572s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.835083008s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515245438s) [1,2,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.054687500s@ mbc={}] start_peering_interval up [3,5,1] -> [1,2,3], acting [3,5,1] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.294630051s) [5,0,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.834106445s@ mbc={}] start_peering_interval up [0,5,1] -> [5,0,4], acting [0,5,1] -> [5,0,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[7.b( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.295687675s) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.835083008s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.294592857s) [5,0,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834106445s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515245438s) [1,2,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.054687500s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.513640404s) [1,0,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.053344727s@ mbc={}] start_peering_interval up [3,5,1] -> [1,0,2], acting [3,5,1] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.513640404s) [1,0,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.053344727s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.293728828s) [3,2,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833618164s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,1], acting [0,5,1] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.293552399s) [3,2,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.833618164s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516454697s) [0,2,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.056640625s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,1], acting [3,5,1] -> [0,2,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516367912s) [0,2,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.056640625s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.513762474s) [3,2,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.054077148s@ mbc={}] start_peering_interval up [3,5,1] -> [3,2,4], acting [3,5,1] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515209198s) [4,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.055786133s@ mbc={}] start_peering_interval up [3,5,1] -> [4,0,5], acting [3,5,1] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.513405800s) [3,2,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.054077148s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.512709618s) [4,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.053710938s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.514582634s) [4,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.055786133s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517278671s) [0,1,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.058471680s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.512567520s) [4,3,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.053710938s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517238617s) [0,1,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.058471680s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.511465073s) [2,1,3] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.053100586s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516702652s) [3,4,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.058349609s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,2], acting [3,5,1] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516607285s) [3,4,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.058349609s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.291916847s) [4,2,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1132.833618164s@ mbc={}] start_peering_interval up [0,5,1] -> [4,2,0], acting [0,5,1] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.511040688s) [2,1,3] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.053100586s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517512321s) [3,5,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.059570312s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,4], acting [3,5,1] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.291607857s) [4,2,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.833618164s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517317772s) [3,5,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.059570312s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.510484695s) [0,5,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1130.053100586s@ mbc={}] start_peering_interval up [3,5,1] -> [0,5,1], acting [3,5,1] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.510420799s) [0,5,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.053100586s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.1f( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.501546860s) [0,2,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.383300781s@ mbc={}] start_peering_interval up [4,3,2] -> [0,2,4], acting [4,3,2] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.501429558s) [0,2,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.383300781s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518459320s) [2,1,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.400756836s@ mbc={}] start_peering_interval up [4,3,2] -> [2,1,3], acting [4,3,2] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.500764847s) [5,3,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.383056641s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,1], acting [4,3,2] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518389702s) [2,1,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.400756836s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.500720978s) [5,3,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.383056641s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.15( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,3,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499609947s) [3,4,5] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.382934570s@ mbc={}] start_peering_interval up [4,3,2] -> [3,4,5], acting [4,3,2] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499606133s) [2,4,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.382934570s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499550819s) [2,4,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.382934570s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499543190s) [3,4,5] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.382934570s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.12( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499704361s) [4,5,0] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.383666992s@ mbc={}] start_peering_interval up [4,3,2] -> [4,5,0], acting [4,3,2] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499704361s) [4,5,0] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.383666992s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499168396s) [4,0,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.383666992s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,5], acting [4,3,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499168396s) [4,0,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.383666992s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499159813s) [0,1,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.384033203s@ mbc={}] start_peering_interval up [4,3,2] -> [0,1,2], acting [4,3,2] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.504526138s) [4,0,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.389404297s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,2], acting [4,3,2] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499110222s) [0,1,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.384033203s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.504526138s) [4,0,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.389404297s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.505767822s) [3,2,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.390991211s@ mbc={}] start_peering_interval up [4,3,2] -> [3,2,4], acting [4,3,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.505685806s) [3,2,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.390991211s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.14( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,0,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.497442245s) [5,3,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.383300781s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,1], acting [4,3,2] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.497385025s) [5,3,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.383300781s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.496929169s) [1,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.383300781s@ mbc={}] start_peering_interval up [4,3,2] -> [1,0,5], acting [4,3,2] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.496099472s) [2,4,3] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.382446289s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,3], acting [4,3,2] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.514670372s) [5,1,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.401123047s@ mbc={}] start_peering_interval up [4,3,2] -> [5,1,3], acting [4,3,2] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.496040344s) [2,4,3] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.382446289s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.514460564s) [5,1,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.401123047s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.495588303s) [1,5,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.382324219s@ mbc={}] start_peering_interval up [4,3,2] -> [1,5,3], acting [4,3,2] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.496132851s) [4,0,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.383056641s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,5], acting [4,3,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.495537758s) [1,5,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.382324219s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.496132851s) [4,0,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.383056641s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.502640724s) [0,2,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.389648438s@ mbc={}] start_peering_interval up [4,3,2] -> [0,2,1], acting [4,3,2] -> [0,2,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.495382309s) [2,4,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.382446289s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.496867180s) [1,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.383300781s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.502566338s) [0,2,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.389648438s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.496141434s) [3,1,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.383544922s@ mbc={}] start_peering_interval up [4,3,2] -> [3,1,5], acting [4,3,2] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.496089935s) [3,1,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.383544922s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.1( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,2,0] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.494766235s) [2,4,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.382446289s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.503574371s) [2,0,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.391967773s@ mbc={}] start_peering_interval up [4,3,2] -> [2,0,4], acting [4,3,2] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.502372742s) [1,3,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.390747070s@ mbc={}] start_peering_interval up [4,3,2] -> [1,3,5], acting [4,3,2] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.503508568s) [2,0,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.391967773s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.502655983s) [3,2,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.391235352s@ mbc={}] start_peering_interval up [4,3,2] -> [3,2,4], acting [4,3,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.502421379s) [3,2,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.391235352s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,5,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.502018929s) [2,4,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.391235352s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.1d( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.502306938s) [1,3,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.390747070s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.501966476s) [2,4,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.391235352s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.494276047s) [5,3,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.384155273s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,4], acting [4,3,2] -> [5,3,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.6( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,3,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.494217873s) [5,3,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.384155273s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.501499176s) [1,5,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.391845703s@ mbc={}] start_peering_interval up [4,3,2] -> [1,5,3], acting [4,3,2] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.501094818s) [0,5,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.391357422s@ mbc={}] start_peering_interval up [4,3,2] -> [0,5,1], acting [4,3,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.501038551s) [0,5,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.391357422s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.501431465s) [1,5,3] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.391845703s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499768257s) [2,3,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.390258789s@ mbc={}] start_peering_interval up [4,3,2] -> [2,3,1], acting [4,3,2] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499710083s) [2,3,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.390258789s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499707222s) [5,0,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.390502930s@ mbc={}] start_peering_interval up [4,3,2] -> [5,0,1], acting [4,3,2] -> [5,0,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.499647141s) [5,0,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.390502930s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.500580788s) [3,5,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.391479492s@ mbc={}] start_peering_interval up [4,3,2] -> [3,5,4], acting [4,3,2] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.500416756s) [3,5,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.391479492s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.498045921s) [0,5,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.389160156s@ mbc={}] start_peering_interval up [4,3,2] -> [0,5,1], acting [4,3,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.497732162s) [0,5,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.389160156s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.491730690s) [5,4,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1125.383789062s@ mbc={}] start_peering_interval up [4,3,2] -> [5,4,0], acting [4,3,2] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:45 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.491657257s) [5,4,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.383789062s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.17( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,1,5] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.5( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,2,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.19( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,5,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.3( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,5,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.1( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [2,3,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.18( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [2,1,3] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.f( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,4,5] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.a( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,1,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.15( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [5,3,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.16( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [5,3,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.14( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.17( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,2,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.15( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [2,4,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.e( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [2,4,0] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.10( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,4,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.1a( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [2,3,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.f( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [5,1,3] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 47 pg[5.2( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [5,0,1] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[5.1a( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.11( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,5,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[5.4( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,3,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[4.2( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,3,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,0,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,4,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.16( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,5,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[4.b( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,2,4] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.18( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,2,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,3,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[6.1( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,5,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[4.3( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[5.12( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[6.1c( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.1a( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,4,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 47 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,3,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[4.1f( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[6.1e( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,5,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[6.1b( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[4.1c( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,3,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[4.9( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,0,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[4.19( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,3,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[4.1( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,2,0] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[5.13( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,0,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[4.1d( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,5,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[5.8( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,0,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[4.6( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,3,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[4.14( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,0,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[4.15( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,3,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[5.e( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,0,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[5.b( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,0,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[5.d( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,5,0] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 48 pg[6.12( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[6.17( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,0,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:46 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 48 pg[4.8( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,2,3] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:10:47 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 49 pg[7.6( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.217591286s) [3,1,5] r=1 lpr=49 pi=[45,49)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.835815430s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:47 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 49 pg[7.2( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.217230797s) [3,1,5] r=1 lpr=49 pi=[45,49)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.835449219s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:47 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 49 pg[7.6( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.217514038s) [3,1,5] r=1 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.835815430s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:47 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 49 pg[7.2( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.217149734s) [3,1,5] r=1 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.835449219s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:47 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 49 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.215373039s) [3,1,5] r=1 lpr=49 pi=[45,49)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.834472656s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:47 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 49 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.215331078s) [3,1,5] r=1 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834472656s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:47 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 49 pg[7.a( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.215067863s) [3,1,5] r=1 lpr=49 pi=[45,49)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1132.834472656s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:47 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 49 pg[7.a( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.215006828s) [3,1,5] r=1 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.834472656s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:48 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 05 08:10:52 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Dec 05 08:10:53 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 51 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.311623573s) [3,2,4] r=-1 lpr=51 pi=[47,51)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1144.567016602s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 51 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.311555862s) [3,2,4] r=-1 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1144.567016602s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 51 pg[7.3( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.315831184s) [3,2,4] r=-1 lpr=51 pi=[47,51)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1144.571533203s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 51 pg[7.3( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.315696716s) [3,2,4] r=-1 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1144.571533203s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 51 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.310796738s) [3,2,4] r=-1 lpr=51 pi=[47,51)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1144.566894531s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 51 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.310764313s) [3,2,4] r=-1 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1144.566894531s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 51 pg[7.b( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=15.310151100s) [3,2,4] r=-1 lpr=51 pi=[47,51)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1144.566650391s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 51 pg[7.b( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=15.310119629s) [3,2,4] r=-1 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1144.566650391s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 3.0 deep-scrub starts
Dec 05 08:10:55 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 3.0 deep-scrub ok
Dec 05 08:10:56 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 51 pg[7.7( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51) [3,2,4] r=2 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:56 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 51 pg[7.3( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51) [3,2,4] r=2 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:56 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 51 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51) [3,2,4] r=2 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:56 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 51 pg[7.b( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [3,2,4] r=2 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:57 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 05 08:10:57 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 05 08:10:57 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 53 pg[7.4( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.503851891s) [0,5,4] r=-1 lpr=53 pi=[45,53)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1140.835327148s@ mbc={}] start_peering_interval up [5,1,3] -> [0,5,4], acting [5,1,3] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:57 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 53 pg[7.4( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.503770828s) [0,5,4] r=-1 lpr=53 pi=[45,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1140.835327148s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:57 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 53 pg[7.c( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.504384995s) [0,5,4] r=-1 lpr=53 pi=[45,53)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1140.836059570s@ mbc={}] start_peering_interval up [5,1,3] -> [0,5,4], acting [5,1,3] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:57 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 53 pg[7.c( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.503653526s) [0,5,4] r=-1 lpr=53 pi=[45,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1140.836059570s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:58 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53) [0,5,4] r=2 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:58 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53) [0,5,4] r=2 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:59 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 55 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=11.132329941s) [4,0,2] r=-1 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1144.571533203s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,2], acting [2,1,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:59 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 55 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=11.132242203s) [4,0,2] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1144.571533203s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:59 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 55 pg[7.5( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=11.127481461s) [4,0,2] r=-1 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1144.567260742s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,2], acting [2,1,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:10:59 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 55 pg[7.5( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=11.127354622s) [4,0,2] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1144.567260742s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:10:59 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 55 pg[7.d( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55) [4,0,2] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:10:59 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 55 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55) [4,0,2] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:11:00 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 05 08:11:00 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 56 pg[7.5( v 38'39 lc 38'9 (0'0,38'39] local-lis/les=55/56 n=2 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55) [4,0,2] r=0 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:11:00 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 56 pg[7.d( v 38'39 lc 38'10 (0'0,38'39] local-lis/les=55/56 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55) [4,0,2] r=0 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:11:00 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 05 08:11:01 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 57 pg[7.6( v 38'39 (0'0,38'39] local-lis/les=49/50 n=2 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.301312447s) [0,2,4] r=-1 lpr=57 pi=[49,57)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1146.788940430s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:01 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 57 pg[7.6( v 38'39 (0'0,38'39] local-lis/les=49/50 n=2 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.301214218s) [0,2,4] r=-1 lpr=57 pi=[49,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1146.788940430s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:01 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 57 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=49/50 n=1 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.300099373s) [0,2,4] r=-1 lpr=57 pi=[49,57)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1146.788330078s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:01 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 57 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=49/50 n=1 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.299935341s) [0,2,4] r=-1 lpr=57 pi=[49,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1146.788330078s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:01 np0005546420.localdomain sudo[56513]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-imkuvrecuxblpqudvvmvxzkdbgolqpyk ; /usr/bin/python3
Dec 05 08:11:01 np0005546420.localdomain sudo[56513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:01 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 05 08:11:01 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 05 08:11:02 np0005546420.localdomain python3[56515]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:02 np0005546420.localdomain sudo[56513]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:03 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 57 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,2,4] r=2 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:03 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 57 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,2,4] r=2 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:03 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 59 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.794227600s) [2,1,3] r=-1 lpr=59 pi=[51,59)/1 luod=0'0 crt=38'39 mlcod 0'0 active pruub 1141.656005859s@ mbc={}] start_peering_interval up [3,2,4] -> [2,1,3], acting [3,2,4] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:03 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 59 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.794286728s) [2,1,3] r=-1 lpr=59 pi=[51,59)/1 luod=0'0 crt=38'39 mlcod 0'0 active pruub 1141.656127930s@ mbc={}] start_peering_interval up [3,2,4] -> [2,1,3], acting [3,2,4] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:03 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 59 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.794117928s) [2,1,3] r=-1 lpr=59 pi=[51,59)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 1141.656005859s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:03 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 59 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.794231415s) [2,1,3] r=-1 lpr=59 pi=[51,59)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 1141.656127930s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:03 np0005546420.localdomain sudo[56529]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pmedoqnpomzoudaaltadphxsrvmnydid ; /usr/bin/python3
Dec 05 08:11:03 np0005546420.localdomain sudo[56529]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:11:03 np0005546420.localdomain systemd[1]: tmp-crun.4MB0rO.mount: Deactivated successfully.
Dec 05 08:11:03 np0005546420.localdomain podman[56532]: 2025-12-05 08:11:03.888523691 +0000 UTC m=+0.079788699 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:11:03 np0005546420.localdomain python3[56531]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:03 np0005546420.localdomain sudo[56529]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:04 np0005546420.localdomain podman[56532]: 2025-12-05 08:11:04.0663129 +0000 UTC m=+0.257577878 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, config_id=tripleo_step1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:11:04 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:11:04 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 59 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=51/51 les/c/f=52/52/0 sis=59) [2,1,3] r=1 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:04 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 59 pg[7.7( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=51/51 les/c/f=52/52/0 sis=59) [2,1,3] r=1 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:05 np0005546420.localdomain sudo[56574]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oydnvoaazhqedzbqdscpujgxfwwisutr ; /usr/bin/python3
Dec 05 08:11:05 np0005546420.localdomain sudo[56574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:05 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 61 pg[7.8( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=9.184247971s) [3,2,1] r=2 lpr=61 pi=[45,61)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1148.835571289s@ mbc={}] start_peering_interval up [5,1,3] -> [3,2,1], acting [5,1,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:05 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 61 pg[7.8( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=9.184053421s) [3,2,1] r=2 lpr=61 pi=[45,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1148.835571289s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:05 np0005546420.localdomain python3[56576]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:05 np0005546420.localdomain sudo[56574]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:05 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 05 08:11:05 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 05 08:11:06 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 62 pg[7.9( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=62 pruub=11.907522202s) [0,4,2] r=-1 lpr=62 pi=[47,62)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1152.567138672s@ mbc={}] start_peering_interval up [2,1,3] -> [0,4,2], acting [2,1,3] -> [0,4,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:06 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 62 pg[7.9( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=62 pruub=11.907228470s) [0,4,2] r=-1 lpr=62 pi=[47,62)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1152.567138672s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:08 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 62 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=62) [0,4,2] r=1 lpr=62 pi=[47,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:08 np0005546420.localdomain sudo[56622]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrzkbokmvpxobbxeuehxjkdikeflnlpy ; /usr/bin/python3
Dec 05 08:11:08 np0005546420.localdomain sudo[56622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:08 np0005546420.localdomain python3[56624]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:08 np0005546420.localdomain sudo[56622]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:08 np0005546420.localdomain sudo[56665]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rkoztvgwynkqqvjistthrxcwvdzlbnop ; /usr/bin/python3
Dec 05 08:11:08 np0005546420.localdomain sudo[56665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:08 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 05 08:11:08 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 05 08:11:09 np0005546420.localdomain python3[56667]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922268.3242283-92232-138753359641003/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=56b574bbcbb2378bafed25b3f279b3c007056bbe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:09 np0005546420.localdomain sudo[56665]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:09 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 64 pg[7.a( v 38'39 (0'0,38'39] local-lis/les=49/50 n=1 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=11.570579529s) [4,0,5] r=-1 lpr=64 pi=[49,64)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1154.788940430s@ mbc={}] start_peering_interval up [3,1,5] -> [4,0,5], acting [3,1,5] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:09 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 64 pg[7.a( v 38'39 (0'0,38'39] local-lis/les=49/50 n=1 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=11.570472717s) [4,0,5] r=-1 lpr=64 pi=[49,64)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1154.788940430s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:09 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 64 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=64) [4,0,5] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 08:11:10 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 65 pg[7.a( v 38'39 (0'0,38'39] local-lis/les=64/65 n=1 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=64) [4,0,5] r=0 lpr=64 pi=[49,64)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 08:11:10 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 05 08:11:11 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 05 08:11:11 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 05 08:11:11 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 05 08:11:12 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 05 08:11:12 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 05 08:11:12 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 05 08:11:12 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 05 08:11:13 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 67 pg[7.c( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=45/36 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=9.104557037s) [2,3,4] r=2 lpr=67 pi=[53,67)/1 luod=0'0 crt=38'39 mlcod 0'0 active pruub 1151.762573242s@ mbc={}] start_peering_interval up [0,5,4] -> [2,3,4], acting [0,5,4] -> [2,3,4], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:13 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 67 pg[7.c( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=45/36 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=9.104475975s) [2,3,4] r=2 lpr=67 pi=[53,67)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 1151.762573242s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:13 np0005546420.localdomain sudo[56727]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypcishqxaarlmtrbofiemiflynhhszuw ; /usr/bin/python3
Dec 05 08:11:13 np0005546420.localdomain sudo[56727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:14 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 05 08:11:14 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 05 08:11:14 np0005546420.localdomain python3[56729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:14 np0005546420.localdomain sudo[56727]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:14 np0005546420.localdomain sudo[56770]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yajyzpyxhtvcdflpjsxrfibcxtzarbdv ; /usr/bin/python3
Dec 05 08:11:14 np0005546420.localdomain sudo[56770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:14 np0005546420.localdomain python3[56772]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922273.7580025-92232-226872734622612/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=81b1e70c98aa594608eafceac10d1e7c5fcc2dc9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:14 np0005546420.localdomain sudo[56770]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:15 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 05 08:11:15 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 05 08:11:15 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 69 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=45/36 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=9.465726852s) [2,3,1] r=-1 lpr=69 pi=[55,69)/1 crt=38'39 mlcod 0'0 active pruub 1154.168701172s@ mbc={255={}}] start_peering_interval up [4,0,2] -> [2,3,1], acting [4,0,2] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:15 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 69 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=45/36 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=9.465603828s) [2,3,1] r=-1 lpr=69 pi=[55,69)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 1154.168701172s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:16 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 69 pg[7.d( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=55/55 les/c/f=56/56/0 sis=69) [2,3,1] r=2 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:17 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 05 08:11:17 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 05 08:11:17 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 05 08:11:17 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 71 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=45/36 lis/c=57/57 les/c/f=58/58/0 sis=71 pruub=9.566490173s) [3,1,5] r=-1 lpr=71 pi=[57,71)/1 luod=0'0 crt=38'39 mlcod 0'0 active pruub 1156.294189453s@ mbc={}] start_peering_interval up [0,2,4] -> [3,1,5], acting [0,2,4] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:17 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 71 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=45/36 lis/c=57/57 les/c/f=58/58/0 sis=71 pruub=9.566404343s) [3,1,5] r=-1 lpr=71 pi=[57,71)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 1156.294189453s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:18 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Dec 05 08:11:18 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Dec 05 08:11:18 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 71 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=57/57 les/c/f=58/58/0 sis=71) [3,1,5] r=1 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:19 np0005546420.localdomain sudo[56832]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-haaxxeaztkadjusdhidplxgzehlupecr ; /usr/bin/python3
Dec 05 08:11:19 np0005546420.localdomain sudo[56832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:19 np0005546420.localdomain python3[56834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:19 np0005546420.localdomain sudo[56832]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:19 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 73 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=45/36 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=9.658156395s) [0,4,5] r=-1 lpr=73 pi=[59,73)/1 luod=0'0 crt=38'39 mlcod 0'0 active pruub 1163.091674805s@ mbc={}] start_peering_interval up [2,1,3] -> [0,4,5], acting [2,1,3] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 08:11:19 np0005546420.localdomain ceph-osd[31961]: osd.1 pg_epoch: 73 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=45/36 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=9.658064842s) [0,4,5] r=-1 lpr=73 pi=[59,73)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 1163.091674805s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:19 np0005546420.localdomain sudo[56875]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhqjqnotocnuhbghuwvknwlmdztoxpby ; /usr/bin/python3
Dec 05 08:11:19 np0005546420.localdomain sudo[56875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:19 np0005546420.localdomain python3[56877]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922279.105776-92232-51785317425155/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=31a82f9bde3ef47ca8b17ff1e2177aab5748b36a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:19 np0005546420.localdomain sudo[56875]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:20 np0005546420.localdomain ceph-osd[32907]: osd.4 pg_epoch: 73 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=59/59 les/c/f=60/60/0 sis=73) [0,4,5] r=1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 05 08:11:22 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 05 08:11:22 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 05 08:11:22 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 05 08:11:22 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 05 08:11:23 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 05 08:11:24 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 05 08:11:24 np0005546420.localdomain sudo[56937]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-msbbhcssrezhddnefcnyzdhgsvkxrvqc ; /usr/bin/python3
Dec 05 08:11:24 np0005546420.localdomain sudo[56937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:25 np0005546420.localdomain python3[56939]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:25 np0005546420.localdomain sudo[56937]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:25 np0005546420.localdomain sudo[56982]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhkdkbniacmkblwjtqrvargwehnnydtd ; /usr/bin/python3
Dec 05 08:11:25 np0005546420.localdomain sudo[56982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:25 np0005546420.localdomain python3[56984]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922284.7256646-92806-196132888912022/source _original_basename=tmpa35mcrio follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:25 np0005546420.localdomain sudo[56982]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:25 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 05 08:11:26 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 05 08:11:26 np0005546420.localdomain sudo[57044]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqniihqzzwbngcxnplaedlhdrplwroth ; /usr/bin/python3
Dec 05 08:11:26 np0005546420.localdomain sudo[57044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:26 np0005546420.localdomain python3[57046]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:26 np0005546420.localdomain sudo[57044]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:26 np0005546420.localdomain sudo[57087]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xocqhxutqfsquebvkhajccssjwypnsiw ; /usr/bin/python3
Dec 05 08:11:26 np0005546420.localdomain sudo[57087]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:27 np0005546420.localdomain python3[57089]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922286.4085274-92893-206033351834849/source _original_basename=tmpeu08daej follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:27 np0005546420.localdomain sudo[57087]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:27 np0005546420.localdomain sudo[57117]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cnsnkojshcwxgdqdlypjvxjzrcmdfvij ; /usr/bin/python3
Dec 05 08:11:27 np0005546420.localdomain sudo[57117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:27 np0005546420.localdomain python3[57119]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None
Dec 05 08:11:27 np0005546420.localdomain crontab[57120]: (root) LIST (root)
Dec 05 08:11:27 np0005546420.localdomain crontab[57121]: (root) REPLACE (root)
Dec 05 08:11:27 np0005546420.localdomain sudo[57117]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:27 np0005546420.localdomain sudo[57135]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vaabxzzshnprlnohfsouluzealzfgouz ; /usr/bin/python3
Dec 05 08:11:27 np0005546420.localdomain sudo[57135]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:27 np0005546420.localdomain python3[57137]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:11:28 np0005546420.localdomain sudo[57135]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:28 np0005546420.localdomain sudo[57185]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-drphpvwkwbtolhnlpvrdlazhtkssxsso ; /usr/bin/python3
Dec 05 08:11:28 np0005546420.localdomain sudo[57185]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:28 np0005546420.localdomain sudo[57185]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:28 np0005546420.localdomain sudo[57203]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lccvbayxzekfktrbjfoqrungftvdlolj ; /usr/bin/python3
Dec 05 08:11:28 np0005546420.localdomain sudo[57203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:28 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 05 08:11:28 np0005546420.localdomain sudo[57203]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:29 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 05 08:11:29 np0005546420.localdomain sudo[57307]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-etntnbpudbqtltydysxccuqxipbrubcj ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922289.1821282-93012-128707667277297/async_wrapper.py 288658412674 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922289.1821282-93012-128707667277297/AnsiballZ_command.py _
Dec 05 08:11:29 np0005546420.localdomain sudo[57307]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 05 08:11:29 np0005546420.localdomain ansible-async_wrapper.py[57309]: Invoked with 288658412674 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922289.1821282-93012-128707667277297/AnsiballZ_command.py _
Dec 05 08:11:29 np0005546420.localdomain ansible-async_wrapper.py[57312]: Starting module and watcher
Dec 05 08:11:29 np0005546420.localdomain ansible-async_wrapper.py[57312]: Start watching 57313 (3600)
Dec 05 08:11:29 np0005546420.localdomain ansible-async_wrapper.py[57313]: Start module (57313)
Dec 05 08:11:29 np0005546420.localdomain ansible-async_wrapper.py[57309]: Return async_wrapper task started.
Dec 05 08:11:29 np0005546420.localdomain sudo[57307]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:29 np0005546420.localdomain sudo[57328]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gmaqwfqfxfhjbcxchjzouhnykkiwjguo ; /usr/bin/python3
Dec 05 08:11:29 np0005546420.localdomain sudo[57328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:29 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec 05 08:11:29 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec 05 08:11:30 np0005546420.localdomain python3[57333]: ansible-ansible.legacy.async_status Invoked with jid=288658412674.57309 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:11:30 np0005546420.localdomain sudo[57328]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:30 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 05 08:11:30 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 05 08:11:31 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 05 08:11:31 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 05 08:11:32 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 05 08:11:32 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:    (file & line not available)
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:    (file & line not available)
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.11 seconds
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Notice: Applied catalog in 0.04 seconds
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Application:
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:    Initial environment: production
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:    Converged environment: production
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:          Run mode: user
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Changes:
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Events:
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Resources:
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:             Total: 10
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Time:
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:          Schedule: 0.00
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:        Filebucket: 0.00
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:              File: 0.00
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:            Augeas: 0.01
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:              Exec: 0.01
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:    Transaction evaluation: 0.03
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:    Catalog application: 0.04
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:    Config retrieval: 0.15
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:          Last run: 1764922293
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:             Total: 0.04
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]: Version:
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:            Config: 1764922293
Dec 05 08:11:33 np0005546420.localdomain puppet-user[57331]:            Puppet: 7.10.0
Dec 05 08:11:33 np0005546420.localdomain ansible-async_wrapper.py[57313]: Module complete (57313)
Dec 05 08:11:33 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 5.e deep-scrub starts
Dec 05 08:11:33 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 5.e deep-scrub ok
Dec 05 08:11:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:11:34 np0005546420.localdomain podman[57445]: 2025-12-05 08:11:34.505270901 +0000 UTC m=+0.079983416 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, release=1761123044, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1)
Dec 05 08:11:34 np0005546420.localdomain podman[57445]: 2025-12-05 08:11:34.667023315 +0000 UTC m=+0.241735830 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:11:34 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:11:34 np0005546420.localdomain ansible-async_wrapper.py[57312]: Done in kid B.
Dec 05 08:11:34 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 05 08:11:34 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 05 08:11:35 np0005546420.localdomain sudo[57474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:11:35 np0005546420.localdomain sudo[57474]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:11:35 np0005546420.localdomain sudo[57474]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:35 np0005546420.localdomain sudo[57489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:11:35 np0005546420.localdomain sudo[57489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:11:35 np0005546420.localdomain sudo[57489]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:36 np0005546420.localdomain sudo[57535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:11:36 np0005546420.localdomain sudo[57535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:11:36 np0005546420.localdomain sudo[57535]: pam_unix(sudo:session): session closed for user root
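[annotation] The three ceph-admin sudo sessions above are cephadm's host probe: locate python3, run the per-cluster copy of cephadm with gather-facts, then list /etc/sysctl.d. A minimal sketch of the middle step, assuming gather-facts emits its usual JSON host description; the path and timeout are copied verbatim from the COMMAND= line:

```python
import json
import subprocess

# Path from the COMMAND= line above; the suffix after "cephadm." is a
# content hash of the shipped binary.
CEPHADM = ("/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/"
           "cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")

# gather-facts prints a JSON document describing the host (CPU, memory,
# disks, interfaces); --timeout bounds the run as in the logged call.
out = subprocess.run(
    ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"],
    capture_output=True, text=True, check=True,
)
facts = json.loads(out.stdout)
print(facts.get("hostname"))
```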
Dec 05 08:11:38 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 05 08:11:39 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 05 08:11:40 np0005546420.localdomain sudo[57563]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-poagvlvldbacnabzuufwdvahdoesmfxp ; /usr/bin/python3
Dec 05 08:11:40 np0005546420.localdomain sudo[57563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:40 np0005546420.localdomain python3[57565]: ansible-ansible.legacy.async_status Invoked with jid=288658412674.57309 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:11:40 np0005546420.localdomain sudo[57563]: pam_unix(sudo:session): session closed for user root
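[annotation] ansible-ansible.legacy.async_status polls the job file that async_wrapper.py (the "Done in kid B." process above) maintains under /tmp/.ansible_async. A minimal sketch of that poll, assuming the wrapper's usual JSON job file; the jid and directory come from the invocation logged above:

```python
import json
from pathlib import Path

# jid and _async_dir copied from the async_status invocation above.
job_file = Path("/tmp/.ansible_async/288658412674.57309")

status = json.loads(job_file.read_text())
# While the task runs the file typically holds {"started": 1, ...};
# on completion the module result is written back with "finished": 1.
if status.get("finished"):
    print("done, rc:", status.get("rc", "n/a"))
else:
    print("still running")
```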
Dec 05 08:11:40 np0005546420.localdomain sudo[57579]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-igtamwqvvmpjpondioxonaiporepalpn ; /usr/bin/python3
Dec 05 08:11:40 np0005546420.localdomain sudo[57579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:40 np0005546420.localdomain python3[57581]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:11:40 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 05 08:11:40 np0005546420.localdomain sudo[57579]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:40 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 05 08:11:41 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec 05 08:11:41 np0005546420.localdomain sudo[57595]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ebduxcuxbzesovqmrhuojdscvpevepob ; /usr/bin/python3
Dec 05 08:11:41 np0005546420.localdomain sudo[57595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:41 np0005546420.localdomain python3[57597]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:11:41 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec 05 08:11:41 np0005546420.localdomain sudo[57595]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:41 np0005546420.localdomain sudo[57645]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bemhkwqlrpvmxkpilvxvcklfbkatfirt ; /usr/bin/python3
Dec 05 08:11:41 np0005546420.localdomain sudo[57645]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:41 np0005546420.localdomain python3[57647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:41 np0005546420.localdomain sudo[57645]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:42 np0005546420.localdomain sudo[57663]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pidbhotrzfkgnafetddzdgozpewhocll ; /usr/bin/python3
Dec 05 08:11:42 np0005546420.localdomain sudo[57663]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:42 np0005546420.localdomain python3[57665]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpl0lascvz recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:11:42 np0005546420.localdomain sudo[57663]: pam_unix(sudo:session): session closed for user root
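[annotation] The stat / legacy.stat / legacy.file trio on facter.conf is Ansible's copy idempotence dance: checksum the destination, compare it against the staged source, and only enforce ownership/SELinux metadata when the content already matches. A minimal sketch of the checksum half, using sha1 as in the logged checksum_algorithm; the staging path is hypothetical, only dest comes from the log:

```python
import hashlib
from pathlib import Path

def sha1sum(path: Path) -> str:
    """Stream a file through sha1, as the stat module's get_checksum does."""
    h = hashlib.sha1()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

dest = Path("/var/lib/container-puppet/puppetlabs/facter.conf")
# Hypothetical controller-side staging file standing in for the source.
local_copy = Path("/tmp/facter.conf.staged")

if not dest.exists() or sha1sum(dest) != sha1sum(local_copy):
    print("would copy: checksums differ")
else:
    print("unchanged: skip copy, only metadata is enforced")
```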
Dec 05 08:11:42 np0005546420.localdomain sudo[57693]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orsbndhkiiclkyrnsgklxewkjyrptzsh ; /usr/bin/python3
Dec 05 08:11:42 np0005546420.localdomain sudo[57693]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:42 np0005546420.localdomain python3[57695]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:42 np0005546420.localdomain sudo[57693]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:42 np0005546420.localdomain sudo[57709]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xibbosffeoaakyreoynvqjytestlkyry ; /usr/bin/python3
Dec 05 08:11:42 np0005546420.localdomain sudo[57709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:43 np0005546420.localdomain sudo[57709]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:43 np0005546420.localdomain sudo[57797]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ytmcvxsjshafivzhzhplkwcniydhhtsz ; /usr/bin/python3
Dec 05 08:11:43 np0005546420.localdomain sudo[57797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:43 np0005546420.localdomain python3[57799]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 05 08:11:43 np0005546420.localdomain sudo[57797]: pam_unix(sudo:session): session closed for user root
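[annotation] ansible.posix.synchronize is a thin wrapper over rsync; with archive=True, compress=True and mode=push, the invocation above reduces to roughly the following local push (trailing slashes matter: the contents of src are copied into dest). A minimal sketch:

```python
import subprocess

# -a = archive (recursion, perms, times, owners), -z = compress,
# mirroring archive=True / compress=True in the logged module args.
subprocess.run(
    ["rsync", "-az",
     "/opt/puppetlabs/",                        # src, contents-of semantics
     "/var/lib/container-puppet/puppetlabs/"],  # dest from the log
    check=True,
)
```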
Dec 05 08:11:44 np0005546420.localdomain sudo[57816]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqkmtoftqtxoqxixyrodsqcoimfjbsix ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:44 np0005546420.localdomain sudo[57816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:44 np0005546420.localdomain python3[57818]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:44 np0005546420.localdomain sudo[57816]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:44 np0005546420.localdomain sudo[57832]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tzjdoaupreqpetounovblugquzdymxyj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:44 np0005546420.localdomain sudo[57832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:45 np0005546420.localdomain sudo[57832]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:45 np0005546420.localdomain sudo[57848]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nmlihevtjhlxevsmkinucjhwptdyvbxm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:45 np0005546420.localdomain sudo[57848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:45 np0005546420.localdomain python3[57850]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:11:45 np0005546420.localdomain sudo[57848]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:46 np0005546420.localdomain sudo[57898]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nihewszknjlgvgwoqyxvrlqsuvibecyy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:46 np0005546420.localdomain sudo[57898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:46 np0005546420.localdomain python3[57900]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:46 np0005546420.localdomain sudo[57898]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:46 np0005546420.localdomain sudo[57916]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhemuigpmxlgdpjrzatnavybbfdiimys ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:46 np0005546420.localdomain sudo[57916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:46 np0005546420.localdomain python3[57918]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:46 np0005546420.localdomain sudo[57916]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:46 np0005546420.localdomain sudo[57978]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-icttzhdfaicepnefgwbujfsadijeygti ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:46 np0005546420.localdomain sudo[57978]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:46 np0005546420.localdomain python3[57980]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:46 np0005546420.localdomain sudo[57978]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:47 np0005546420.localdomain sudo[57996]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xktfbfjibdrsmqtrbhpwbgpuqtcgefdr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:47 np0005546420.localdomain sudo[57996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:47 np0005546420.localdomain python3[57998]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:47 np0005546420.localdomain sudo[57996]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:47 np0005546420.localdomain sudo[58058]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbwkdsdymlvdnplxrcqeoradxzylqsob ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:47 np0005546420.localdomain sudo[58058]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:47 np0005546420.localdomain python3[58060]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:47 np0005546420.localdomain sudo[58058]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:47 np0005546420.localdomain sudo[58076]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-npmoujafcyyoqplqbocwrdozqebvjjky ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:47 np0005546420.localdomain sudo[58076]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:48 np0005546420.localdomain python3[58078]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:48 np0005546420.localdomain sudo[58076]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:48 np0005546420.localdomain sudo[58138]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-equikbjmnkwjzgbxpjrzzgdszxhdpeuu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:48 np0005546420.localdomain sudo[58138]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:48 np0005546420.localdomain python3[58140]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:48 np0005546420.localdomain sudo[58138]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:48 np0005546420.localdomain sudo[58156]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kupiinxglgdygcftoeyoicnrnkqubgjp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:48 np0005546420.localdomain sudo[58156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:48 np0005546420.localdomain python3[58158]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:48 np0005546420.localdomain sudo[58156]: pam_unix(sudo:session): session closed for user root
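[annotation] The preceding stat/file pairs install the TripleO container-shutdown machinery: two executable helpers (mode 0700) under /usr/libexec, plus a unit file and a 91-*.preset (mode 0644) under /usr/lib/systemd. A minimal sketch of the same file drop, assuming hypothetical staged sources; only the destinations, modes, and root:root ownership come from the log:

```python
import os
import shutil

# dest -> mode, both taken from the ansible-ansible.legacy.file calls above.
FILES = {
    "/usr/libexec/tripleo-container-shutdown": 0o700,
    "/usr/libexec/tripleo-start-podman-container": 0o700,
    "/usr/lib/systemd/system/tripleo-container-shutdown.service": 0o644,
    "/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset": 0o644,
}

for dest, mode in FILES.items():
    staged = "/tmp/staged/" + os.path.basename(dest)  # hypothetical source
    shutil.copy2(staged, dest)
    os.chmod(dest, mode)
    shutil.chown(dest, user="root", group="root")
```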
Dec 05 08:11:48 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 05 08:11:48 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 05 08:11:49 np0005546420.localdomain sudo[58186]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ickaatdjlpaamslpswvfwpakmucnovap ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:49 np0005546420.localdomain sudo[58186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:49 np0005546420.localdomain python3[58188]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:11:49 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:11:49 np0005546420.localdomain systemd-rc-local-generator[58209]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:11:49 np0005546420.localdomain systemd-sysv-generator[58215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:11:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:11:49 np0005546420.localdomain sudo[58186]: pam_unix(sudo:session): session closed for user root
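[annotation] ansible-systemd with daemon_reload=True, enabled=True, state=started is what triggers the "Reloading." line and the generator warnings that follow it. The same operation by hand, as a minimal sketch:

```python
import subprocess

# daemon_reload=True -> re-run the unit generators (hence the rc.local
# and SysV "network" warnings logged right after "Reloading.").
subprocess.run(["systemctl", "daemon-reload"], check=True)

# enabled=True + state=started collapses to enable --now.
subprocess.run(["systemctl", "enable", "--now",
                "tripleo-container-shutdown"], check=True)
```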
Dec 05 08:11:50 np0005546420.localdomain sudo[58271]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dunfbqqlqopgysshknqzcrmmfvwlvtat ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:50 np0005546420.localdomain sudo[58271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:50 np0005546420.localdomain python3[58273]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:50 np0005546420.localdomain sudo[58271]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:50 np0005546420.localdomain sudo[58289]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmkvpuhlderjtfmzfmeirzxjaidafltx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:50 np0005546420.localdomain sudo[58289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:50 np0005546420.localdomain python3[58291]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:50 np0005546420.localdomain sudo[58289]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:50 np0005546420.localdomain sudo[58351]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zywrhxuxtagmtkozdzofemiuspmnhseh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:50 np0005546420.localdomain sudo[58351]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:51 np0005546420.localdomain python3[58353]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:11:51 np0005546420.localdomain sudo[58351]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:51 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 05 08:11:51 np0005546420.localdomain sudo[58369]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qtdkjuqepljtcvvvteshpizczxracred ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:51 np0005546420.localdomain sudo[58369]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:51 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 05 08:11:51 np0005546420.localdomain python3[58371]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:51 np0005546420.localdomain sudo[58369]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:51 np0005546420.localdomain sudo[58399]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crxwbmsektcmkzycqmdsltstqtpgjxwn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:51 np0005546420.localdomain sudo[58399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:51 np0005546420.localdomain python3[58401]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:11:51 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:11:51 np0005546420.localdomain systemd-sysv-generator[58428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:11:51 np0005546420.localdomain systemd-rc-local-generator[58425]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:11:52 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:11:52 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 08:11:52 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 08:11:52 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 08:11:52 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 08:11:52 np0005546420.localdomain sudo[58399]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:52 np0005546420.localdomain sudo[58457]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odxiuqspzbjiqveswqidzfdvzodrmnmp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:52 np0005546420.localdomain sudo[58457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:52 np0005546420.localdomain python3[58459]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 05 08:11:52 np0005546420.localdomain sudo[58457]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:53 np0005546420.localdomain sudo[58473]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zxwppilqwaicexqqpdocfcjsfcrmhnwj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:53 np0005546420.localdomain sudo[58473]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:53 np0005546420.localdomain sudo[58473]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:54 np0005546420.localdomain sudo[58516]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djekbjtnotyaivokmsiaylfdkluiwhgr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:11:54 np0005546420.localdomain sudo[58516]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:54 np0005546420.localdomain python3[58518]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
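[annotation] tripleo_container_manage walks config_dir for *.json startup configs and turns each one into a podman invocation; the PODMAN-CONTAINER-DEBUG lines below echo the result. A minimal sketch of that translation, assuming one container per file named after it (the real files may nest several per step) and covering only the keys visible in the logged config_data:

```python
import json
from pathlib import Path

CONFIG_DIR = Path("/var/lib/tripleo-config/container-startup-config/step_2")

def podman_args(name, cfg):
    """Map a startup-config dict onto podman run arguments.

    Handles only keys seen in the logged config_data; the real module
    supports many more (healthcheck, start_order, security_opt, ...).
    """
    args = ["podman", "run", "--name", name, "--detach=True"]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    if "net" in cfg:
        args += ["--network", cfg["net"]]
    if cfg.get("privileged"):
        args += ["--privileged=True"]
    if "user" in cfg:
        args += ["--user", cfg["user"]]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    args.append(cfg["image"])
    args += cfg.get("command", [])
    return args

for path in sorted(CONFIG_DIR.glob("*.json")):
    cfg = json.loads(path.read_text())
    print(podman_args(path.stem, cfg))
```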
Dec 05 08:11:54 np0005546420.localdomain podman[58593]: 2025-12-05 08:11:54.941130882 +0000 UTC m=+0.078729687 container create 270092fa75050ad95aec682b7ea7360b14a3c8aafa6c75b31f638a08be5d41ea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=nova_virtqemud_init_logs, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, config_id=tripleo_step2, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team)
Dec 05 08:11:54 np0005546420.localdomain podman[58601]: 2025-12-05 08:11:54.983352143 +0000 UTC m=+0.097960190 container create 81644a5cd844483ae0ca84ab5ce95ce54823658ed60022d919cc8c562a7d51ee (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, container_name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: Started libpod-conmon-270092fa75050ad95aec682b7ea7360b14a3c8aafa6c75b31f638a08be5d41ea.scope.
Dec 05 08:11:55 np0005546420.localdomain podman[58593]: 2025-12-05 08:11:54.908936761 +0000 UTC m=+0.046535586 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: Started libpod-conmon-81644a5cd844483ae0ca84ab5ce95ce54823658ed60022d919cc8c562a7d51ee.scope.
Dec 05 08:11:55 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28f9293ec6754804a09c7d9d69f59819a47e4fdd6275f5b72f6e2577ab30af0/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Dec 05 08:11:55 np0005546420.localdomain podman[58601]: 2025-12-05 08:11:54.934257411 +0000 UTC m=+0.048865528 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 05 08:11:55 np0005546420.localdomain podman[58593]: 2025-12-05 08:11:55.035081968 +0000 UTC m=+0.172680803 container init 270092fa75050ad95aec682b7ea7360b14a3c8aafa6c75b31f638a08be5d41ea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step2, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, build-date=2025-11-19T00:35:22Z)
Dec 05 08:11:55 np0005546420.localdomain podman[58593]: 2025-12-05 08:11:55.046224121 +0000 UTC m=+0.183822956 container start 270092fa75050ad95aec682b7ea7360b14a3c8aafa6c75b31f638a08be5d41ea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: libpod-270092fa75050ad95aec682b7ea7360b14a3c8aafa6c75b31f638a08be5d41ea.scope: Deactivated successfully.
Dec 05 08:11:55 np0005546420.localdomain python3[58518]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm
Dec 05 08:11:55 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1f4c636b38260509d0a72095ca55b50ae4106843ada128752b1ecf32659770/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:11:55 np0005546420.localdomain podman[58601]: 2025-12-05 08:11:55.063909526 +0000 UTC m=+0.178517603 container init 81644a5cd844483ae0ca84ab5ce95ce54823658ed60022d919cc8c562a7d51ee (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, config_id=tripleo_step2, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., container_name=nova_compute_init_log, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=)
Dec 05 08:11:55 np0005546420.localdomain podman[58601]: 2025-12-05 08:11:55.07316145 +0000 UTC m=+0.187769527 container start 81644a5cd844483ae0ca84ab5ce95ce54823658ed60022d919cc8c562a7d51ee (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, version=17.1.12, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=nova_compute_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:11:55 np0005546420.localdomain python3[58518]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: libpod-81644a5cd844483ae0ca84ab5ce95ce54823658ed60022d919cc8c562a7d51ee.scope: Deactivated successfully.
Dec 05 08:11:55 np0005546420.localdomain podman[58634]: 2025-12-05 08:11:55.128441624 +0000 UTC m=+0.058519614 container died 270092fa75050ad95aec682b7ea7360b14a3c8aafa6c75b31f638a08be5d41ea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, version=17.1.12, container_name=nova_virtqemud_init_logs, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 05 08:11:55 np0005546420.localdomain podman[58655]: 2025-12-05 08:11:55.149738661 +0000 UTC m=+0.052070946 container died 81644a5cd844483ae0ca84ab5ce95ce54823658ed60022d919cc8c562a7d51ee (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute_init_log, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git)
Dec 05 08:11:55 np0005546420.localdomain podman[58655]: 2025-12-05 08:11:55.175716511 +0000 UTC m=+0.078048766 container cleanup 81644a5cd844483ae0ca84ab5ce95ce54823658ed60022d919cc8c562a7d51ee (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute_init_log, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: libpod-conmon-81644a5cd844483ae0ca84ab5ce95ce54823658ed60022d919cc8c562a7d51ee.scope: Deactivated successfully.
Dec 05 08:11:55 np0005546420.localdomain podman[58633]: 2025-12-05 08:11:55.304036355 +0000 UTC m=+0.238201531 container cleanup 270092fa75050ad95aec682b7ea7360b14a3c8aafa6c75b31f638a08be5d41ea (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step2, container_name=nova_virtqemud_init_logs, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: libpod-conmon-270092fa75050ad95aec682b7ea7360b14a3c8aafa6c75b31f638a08be5d41ea.scope: Deactivated successfully.
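[annotation] The create/init/start/died/cleanup sequence above is the entire life of a one-shot init container: podman launches it detached, the chown finishes in well under a second, and conmon's scope is torn down. A minimal sketch of waiting on such a container and reading its captured output, using the container name and k8s-file log path from the PODMAN-CONTAINER-DEBUG line:

```python
import subprocess
from pathlib import Path

NAME = "nova_compute_init_log"  # container name from the log

# podman wait blocks until the container exits, then prints its exit code.
rc = int(subprocess.run(["podman", "wait", NAME],
                        capture_output=True, text=True, check=True).stdout)
print(f"{NAME} exited with {rc}")

# --log-driver k8s-file sent stdout/stderr here (path from the debug line).
log = Path(f"/var/log/containers/stdouts/{NAME}.log")
if log.exists():
    print(log.read_text())
```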
Dec 05 08:11:55 np0005546420.localdomain podman[58781]: 2025-12-05 08:11:55.622135347 +0000 UTC m=+0.060204056 container create 046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=create_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 05 08:11:55 np0005546420.localdomain podman[58787]: 2025-12-05 08:11:55.662280124 +0000 UTC m=+0.087708764 container create 1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, build-date=2025-11-19T00:14:25Z, release=1761123044, url=https://www.redhat.com, container_name=create_haproxy_wrapper, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step2, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: Started libpod-conmon-046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d.scope.
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:11:55 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6854836afb9692a1e5ddd8b5918f8daabf3b654c918f0b10a3b63996a4e7a72/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: Started libpod-conmon-1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b.scope.
Dec 05 08:11:55 np0005546420.localdomain podman[58781]: 2025-12-05 08:11:55.688905964 +0000 UTC m=+0.126974683 container init 046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-nova-libvirt, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, architecture=x86_64, container_name=create_virtlogd_wrapper)
Dec 05 08:11:55 np0005546420.localdomain podman[58781]: 2025-12-05 08:11:55.589872273 +0000 UTC m=+0.027941002 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:11:55 np0005546420.localdomain podman[58781]: 2025-12-05 08:11:55.695805867 +0000 UTC m=+0.133874586 container start 046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, version=17.1.12, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, vendor=Red Hat, Inc.)
Dec 05 08:11:55 np0005546420.localdomain podman[58781]: 2025-12-05 08:11:55.696187179 +0000 UTC m=+0.134255898 container attach 046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step2, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, container_name=create_virtlogd_wrapper, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:11:55 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9a36f7a024434b36038e901614ce5ff2d94721d9179c8f6d2073bbfe0a9a23/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 08:11:55 np0005546420.localdomain podman[58787]: 2025-12-05 08:11:55.710857261 +0000 UTC m=+0.136285881 container init 1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=create_haproxy_wrapper, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, batch=17.1_20251118.1, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 08:11:55 np0005546420.localdomain podman[58787]: 2025-12-05 08:11:55.718568599 +0000 UTC m=+0.143997219 container start 1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=create_haproxy_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step2)
Dec 05 08:11:55 np0005546420.localdomain podman[58787]: 2025-12-05 08:11:55.718792656 +0000 UTC m=+0.144221276 container attach 1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, container_name=create_haproxy_wrapper, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible)
Dec 05 08:11:55 np0005546420.localdomain podman[58787]: 2025-12-05 08:11:55.621129896 +0000 UTC m=+0.046558536 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 05 08:11:55 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1a1f4c636b38260509d0a72095ca55b50ae4106843ada128752b1ecf32659770-merged.mount: Deactivated successfully.
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81644a5cd844483ae0ca84ab5ce95ce54823658ed60022d919cc8c562a7d51ee-userdata-shm.mount: Deactivated successfully.
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e28f9293ec6754804a09c7d9d69f59819a47e4fdd6275f5b72f6e2577ab30af0-merged.mount: Deactivated successfully.
Dec 05 08:11:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-270092fa75050ad95aec682b7ea7360b14a3c8aafa6c75b31f638a08be5d41ea-userdata-shm.mount: Deactivated successfully.
Dec 05 08:11:56 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Dec 05 08:11:57 np0005546420.localdomain ovs-vsctl[58907]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Dec 05 08:11:57 np0005546420.localdomain systemd[1]: libpod-046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d.scope: Deactivated successfully.
Dec 05 08:11:57 np0005546420.localdomain systemd[1]: libpod-046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d.scope: Consumed 2.011s CPU time.
Dec 05 08:11:57 np0005546420.localdomain podman[59033]: 2025-12-05 08:11:57.770367814 +0000 UTC m=+0.049976701 container died 046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, version=17.1.12, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:11:57 np0005546420.localdomain systemd[1]: tmp-crun.JUOd0Q.mount: Deactivated successfully.
Dec 05 08:11:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d-userdata-shm.mount: Deactivated successfully.
Dec 05 08:11:57 np0005546420.localdomain podman[59033]: 2025-12-05 08:11:57.812580475 +0000 UTC m=+0.092189292 container cleanup 046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=create_virtlogd_wrapper, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step2)
Dec 05 08:11:57 np0005546420.localdomain systemd[1]: libpod-conmon-046d47b72f7418c1c97b005fbc51b8f9bde3c2e8bfb56d4e9e82fef2eebd7f5d.scope: Deactivated successfully.
Dec 05 08:11:57 np0005546420.localdomain python3[58518]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper
Dec 05 08:11:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a6854836afb9692a1e5ddd8b5918f8daabf3b654c918f0b10a3b63996a4e7a72-merged.mount: Deactivated successfully.
Dec 05 08:11:58 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 05 08:11:58 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 05 08:11:58 np0005546420.localdomain systemd[1]: libpod-1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b.scope: Deactivated successfully.
Dec 05 08:11:58 np0005546420.localdomain systemd[1]: libpod-1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b.scope: Consumed 1.999s CPU time.
Dec 05 08:11:58 np0005546420.localdomain podman[58787]: 2025-12-05 08:11:58.940099199 +0000 UTC m=+3.365527869 container died 1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vendor=Red Hat, Inc., version=17.1.12, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, container_name=create_haproxy_wrapper)
Dec 05 08:11:58 np0005546420.localdomain systemd[1]: tmp-crun.JjNJb5.mount: Deactivated successfully.
Dec 05 08:11:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b-userdata-shm.mount: Deactivated successfully.
Dec 05 08:11:59 np0005546420.localdomain podman[59073]: 2025-12-05 08:11:59.015035528 +0000 UTC m=+0.065773977 container cleanup 1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, maintainer=OpenStack TripleO Team, container_name=create_haproxy_wrapper, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step2, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:11:59 np0005546420.localdomain systemd[1]: libpod-conmon-1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b.scope: Deactivated successfully.
Dec 05 08:11:59 np0005546420.localdomain python3[58518]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers
Dec 05 08:11:59 np0005546420.localdomain sudo[58516]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:59 np0005546420.localdomain sudo[59124]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oeaxgrcsycibqcddbbdedgtnnjjodqjb ; /usr/bin/python3
Dec 05 08:11:59 np0005546420.localdomain sudo[59124]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:11:59 np0005546420.localdomain python3[59126]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:11:59 np0005546420.localdomain sudo[59124]: pam_unix(sudo:session): session closed for user root
Dec 05 08:11:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-6c9a36f7a024434b36038e901614ce5ff2d94721d9179c8f6d2073bbfe0a9a23-merged.mount: Deactivated successfully.
Dec 05 08:12:00 np0005546420.localdomain sudo[59172]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hoqjappnlpybawmuzgsrhlgdjwurwzup ; /usr/bin/python3
Dec 05 08:12:00 np0005546420.localdomain sudo[59172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:12:00 np0005546420.localdomain sudo[59172]: pam_unix(sudo:session): session closed for user root
Dec 05 08:12:00 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec 05 08:12:00 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec 05 08:12:00 np0005546420.localdomain sudo[59215]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzrtvccxgwromwqfybhhgzdbqfcbvowu ; /usr/bin/python3
Dec 05 08:12:00 np0005546420.localdomain sudo[59215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:12:00 np0005546420.localdomain sudo[59215]: pam_unix(sudo:session): session closed for user root
Dec 05 08:12:00 np0005546420.localdomain sudo[59245]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xbwcouqomkgllhbjphpzmjdwyywmgvzy ; /usr/bin/python3
Dec 05 08:12:00 np0005546420.localdomain sudo[59245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:12:01 np0005546420.localdomain python3[59247]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005546420 step=2 update_config_hash_only=False
Dec 05 08:12:01 np0005546420.localdomain sudo[59245]: pam_unix(sudo:session): session closed for user root
Dec 05 08:12:01 np0005546420.localdomain sudo[59261]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-glxmkmdfvlcpawbxcdhojvpcccqcugty ; /usr/bin/python3
Dec 05 08:12:01 np0005546420.localdomain sudo[59261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:12:01 np0005546420.localdomain python3[59263]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:12:01 np0005546420.localdomain sudo[59261]: pam_unix(sudo:session): session closed for user root
Dec 05 08:12:01 np0005546420.localdomain sudo[59277]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfnfxsuqawpuqvfkgwbvxxopyflcbyks ; /usr/bin/python3
Dec 05 08:12:01 np0005546420.localdomain sudo[59277]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:12:02 np0005546420.localdomain python3[59279]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 05 08:12:02 np0005546420.localdomain sudo[59277]: pam_unix(sudo:session): session closed for user root
Dec 05 08:12:02 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 05 08:12:02 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 05 08:12:02 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 05 08:12:02 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 05 08:12:03 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 05 08:12:03 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 05 08:12:05 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 05 08:12:05 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 05 08:12:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:12:05 np0005546420.localdomain systemd[1]: tmp-crun.RATPGw.mount: Deactivated successfully.
Dec 05 08:12:05 np0005546420.localdomain podman[59280]: 2025-12-05 08:12:05.507885033 +0000 UTC m=+0.086564248 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-qdrouterd, architecture=x86_64, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:12:05 np0005546420.localdomain podman[59280]: 2025-12-05 08:12:05.709368482 +0000 UTC m=+0.288047717 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, container_name=metrics_qdr)
Dec 05 08:12:05 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:12:05 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 05 08:12:05 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 05 08:12:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:12:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Cumulative writes: 5097 writes, 22K keys, 5097 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5097 writes, 506 syncs, 10.07 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 1839 writes, 6637 keys, 1839 commit groups, 1.0 writes per commit group, ingest: 2.70 MB, 0.00 MB/s
                                                          Interval WAL: 1839 writes, 361 syncs, 5.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 05 08:12:07 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Dec 05 08:12:07 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Dec 05 08:12:08 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 05 08:12:08 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 05 08:12:10 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec 05 08:12:10 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec 05 08:12:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:12:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Cumulative writes: 4224 writes, 19K keys, 4224 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4224 writes, 407 syncs, 10.38 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 965 writes, 3519 keys, 965 commit groups, 1.0 writes per commit group, ingest: 1.89 MB, 0.00 MB/s
                                                          Interval WAL: 965 writes, 262 syncs, 3.68 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 05 08:12:11 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 05 08:12:11 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 05 08:12:11 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 7.5 deep-scrub starts
Dec 05 08:12:11 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 7.5 deep-scrub ok
Dec 05 08:12:13 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 05 08:12:13 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 05 08:12:14 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 05 08:12:14 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 05 08:12:16 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 05 08:12:16 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 05 08:12:18 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 05 08:12:18 np0005546420.localdomain ceph-osd[32907]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 05 08:12:29 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 05 08:12:29 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 05 08:12:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:12:36 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 05 08:12:36 np0005546420.localdomain ceph-osd[31961]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 05 08:12:36 np0005546420.localdomain podman[59309]: 2025-12-05 08:12:36.500105767 +0000 UTC m=+0.082639658 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 05 08:12:36 np0005546420.localdomain podman[59309]: 2025-12-05 08:12:36.745676098 +0000 UTC m=+0.328209999 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:12:36 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:12:37 np0005546420.localdomain sudo[59338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:12:37 np0005546420.localdomain sudo[59338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:12:37 np0005546420.localdomain sudo[59338]: pam_unix(sudo:session): session closed for user root
Dec 05 08:12:37 np0005546420.localdomain sudo[59353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 08:12:37 np0005546420.localdomain sudo[59353]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:12:37 np0005546420.localdomain sudo[59353]: pam_unix(sudo:session): session closed for user root
Dec 05 08:12:37 np0005546420.localdomain sudo[59389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:12:37 np0005546420.localdomain sudo[59389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:12:37 np0005546420.localdomain sudo[59389]: pam_unix(sudo:session): session closed for user root
Dec 05 08:12:37 np0005546420.localdomain sudo[59404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:12:37 np0005546420.localdomain sudo[59404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:12:38 np0005546420.localdomain sudo[59404]: pam_unix(sudo:session): session closed for user root
Dec 05 08:12:38 np0005546420.localdomain sudo[59451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:12:38 np0005546420.localdomain sudo[59451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:12:38 np0005546420.localdomain sudo[59451]: pam_unix(sudo:session): session closed for user root
Dec 05 08:13:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:13:07 np0005546420.localdomain systemd[1]: tmp-crun.qFk5Yk.mount: Deactivated successfully.
Dec 05 08:13:07 np0005546420.localdomain podman[59466]: 2025-12-05 08:13:07.505994403 +0000 UTC m=+0.084380781 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:13:07 np0005546420.localdomain podman[59466]: 2025-12-05 08:13:07.734373023 +0000 UTC m=+0.312759301 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1)
Dec 05 08:13:07 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:13:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:13:38 np0005546420.localdomain podman[59497]: 2025-12-05 08:13:38.507728839 +0000 UTC m=+0.081304388 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Dec 05 08:13:38 np0005546420.localdomain podman[59497]: 2025-12-05 08:13:38.697053313 +0000 UTC m=+0.270628872 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 05 08:13:38 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:13:39 np0005546420.localdomain sudo[59526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:13:39 np0005546420.localdomain sudo[59526]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:13:39 np0005546420.localdomain sudo[59526]: pam_unix(sudo:session): session closed for user root
Dec 05 08:13:39 np0005546420.localdomain sudo[59541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:13:39 np0005546420.localdomain sudo[59541]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:13:39 np0005546420.localdomain sudo[59541]: pam_unix(sudo:session): session closed for user root
Dec 05 08:13:40 np0005546420.localdomain sudo[59587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:13:40 np0005546420.localdomain sudo[59587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:13:40 np0005546420.localdomain sudo[59587]: pam_unix(sudo:session): session closed for user root
Dec 05 08:14:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:14:09 np0005546420.localdomain systemd[1]: tmp-crun.6DKMt1.mount: Deactivated successfully.
Dec 05 08:14:09 np0005546420.localdomain podman[59602]: 2025-12-05 08:14:09.568870569 +0000 UTC m=+0.151688813 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, version=17.1.12, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:14:09 np0005546420.localdomain podman[59602]: 2025-12-05 08:14:09.784341807 +0000 UTC m=+0.367160011 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:14:09 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
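The three lines above are one complete healthcheck cycle: systemd starts a transient unit that runs /usr/bin/podman healthcheck run against the metrics_qdr container, podman emits a health_status=healthy event followed by exec_died when the configured test (/openstack/healthcheck) exits, and the transient service deactivates. The cycle repeats roughly every 30 seconds throughout this section. A minimal sketch for tallying these events from an exported journal, assuming the log was saved to a plain-text file (journal.txt is a hypothetical name, e.g. from journalctl -o short > journal.txt):

    import re
    from collections import Counter

    # Count podman container events (health_status, exec_died, exec, ...)
    # per container ID from an exported journal file.
    EVENT_RE = re.compile(r"container (\w+) ([0-9a-f]{64}) ")

    counts = Counter()
    with open("journal.txt", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                event, container_id = m.groups()
                counts[(container_id[:12], event)] += 1

    for (cid, event), n in sorted(counts.items()):
        print(f"{cid} {event}: {n}")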
Dec 05 08:14:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:14:40 np0005546420.localdomain podman[59631]: 2025-12-05 08:14:40.498217985 +0000 UTC m=+0.081913587 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd)
Dec 05 08:14:40 np0005546420.localdomain sudo[59660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:14:40 np0005546420.localdomain sudo[59660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:14:40 np0005546420.localdomain sudo[59660]: pam_unix(sudo:session): session closed for user root
Dec 05 08:14:40 np0005546420.localdomain podman[59631]: 2025-12-05 08:14:40.731927246 +0000 UTC m=+0.315622858 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, config_id=tripleo_step1, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 05 08:14:40 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:14:40 np0005546420.localdomain sudo[59675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 08:14:40 np0005546420.localdomain sudo[59675]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:14:41 np0005546420.localdomain podman[59762]: 2025-12-05 08:14:41.563841224 +0000 UTC m=+0.123538417 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, RELEASE=main, release=1763362218)
Dec 05 08:14:41 np0005546420.localdomain podman[59762]: 2025-12-05 08:14:41.668595752 +0000 UTC m=+0.228292985 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, version=7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1763362218, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, build-date=2025-11-26T19:44:28Z)
Dec 05 08:14:41 np0005546420.localdomain sudo[59675]: pam_unix(sudo:session): session closed for user root
Dec 05 08:14:42 np0005546420.localdomain sudo[59824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:14:42 np0005546420.localdomain sudo[59824]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:14:42 np0005546420.localdomain sudo[59824]: pam_unix(sudo:session): session closed for user root
Dec 05 08:14:42 np0005546420.localdomain sudo[59839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:14:42 np0005546420.localdomain sudo[59839]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:14:42 np0005546420.localdomain sudo[59839]: pam_unix(sudo:session): session closed for user root
Dec 05 08:14:43 np0005546420.localdomain sudo[59886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:14:43 np0005546420.localdomain sudo[59886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:14:43 np0005546420.localdomain sudo[59886]: pam_unix(sudo:session): session closed for user root
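The ceph-admin sudo bursts follow cephadm's remote-execution pattern: probe for an interpreter with /bin/which python3, run a content-addressed copy of the cephadm script with an explicit --timeout (here the ls and gather-facts subcommands), and list /etc/sysctl.d as part of host fact collection. A small sketch that pulls the distinct commands a given sudo user has issued out of the same hypothetical journal export:

    import re

    # Summarize the distinct COMMAND= values a user has run via sudo.
    # Matches journald lines of the form "sudo[PID]: user : ... COMMAND=...".
    SUDO_RE = re.compile(r"sudo\[\d+\]: (\S+) : .*?COMMAND=(.+)$")

    def commands_for(path: str, user: str) -> set[str]:
        seen = set()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = SUDO_RE.search(line)
                if m and m.group(1) == user:
                    seen.add(m.group(2))
        return seen

    for cmd in sorted(commands_for("journal.txt", "ceph-admin")):
        print(cmd)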
Dec 05 08:15:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:15:11 np0005546420.localdomain systemd[1]: tmp-crun.mqwvPb.mount: Deactivated successfully.
Dec 05 08:15:11 np0005546420.localdomain podman[59901]: 2025-12-05 08:15:11.512881388 +0000 UTC m=+0.087256172 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container)
Dec 05 08:15:11 np0005546420.localdomain podman[59901]: 2025-12-05 08:15:11.740972766 +0000 UTC m=+0.315347490 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:15:11 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:15:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:15:42 np0005546420.localdomain podman[59928]: 2025-12-05 08:15:42.501751454 +0000 UTC m=+0.078704829 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:15:42 np0005546420.localdomain podman[59928]: 2025-12-05 08:15:42.688872573 +0000 UTC m=+0.265825938 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, architecture=x86_64)
Dec 05 08:15:42 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:15:43 np0005546420.localdomain sudo[59956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:15:43 np0005546420.localdomain sudo[59956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:15:43 np0005546420.localdomain sudo[59956]: pam_unix(sudo:session): session closed for user root
Dec 05 08:15:43 np0005546420.localdomain sudo[59971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:15:43 np0005546420.localdomain sudo[59971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:15:44 np0005546420.localdomain sudo[59971]: pam_unix(sudo:session): session closed for user root
Dec 05 08:15:44 np0005546420.localdomain sudo[60017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:15:44 np0005546420.localdomain sudo[60017]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:15:44 np0005546420.localdomain sudo[60017]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:16:13 np0005546420.localdomain podman[60032]: 2025-12-05 08:16:13.489951968 +0000 UTC m=+0.072365784 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:49:46Z, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:16:13 np0005546420.localdomain podman[60032]: 2025-12-05 08:16:13.707477171 +0000 UTC m=+0.289890997 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, release=1761123044, architecture=x86_64, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:16:13 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:16:31 np0005546420.localdomain sudo[60106]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcdtummkoaqhedmgizmepquiikolwssm ; /usr/bin/python3
Dec 05 08:16:31 np0005546420.localdomain sudo[60106]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:31 np0005546420.localdomain python3[60108]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:16:31 np0005546420.localdomain sudo[60106]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:31 np0005546420.localdomain sudo[60151]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ifutinjtxxflxckphdossggcdsetgqvx ; /usr/bin/python3
Dec 05 08:16:31 np0005546420.localdomain sudo[60151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:31 np0005546420.localdomain python3[60153]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922591.0185962-99148-171096514626688/source _original_basename=tmppe51_mrd follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:16:31 np0005546420.localdomain sudo[60151]: pam_unix(sudo:session): session closed for user root
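Before the next deployment step begins, the tripleo-admin Ansible run stats /etc/puppet/hieradata/config_step.json and replaces it with a freshly templated copy (mode 0600, content not logged). The file records which deploy step is active so hiera lookups can branch on it. A minimal sketch for reading it back, assuming the usual TripleO layout of a JSON object with a "step" key; the journal confirms only the path and checksum, not the content:

    import json

    # The {"step": N} layout is an assumption based on typical TripleO
    # hieradata; the copy task above logs content=NOT_LOGGING_PARAMETER.
    with open("/etc/puppet/hieradata/config_step.json") as fh:
        data = json.load(fh)
    print("current deploy step:", data.get("step"))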
Dec 05 08:16:32 np0005546420.localdomain sudo[60181]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uebcglmrwwbztijarfjarigrsfnzelkx ; /usr/bin/python3
Dec 05 08:16:32 np0005546420.localdomain sudo[60181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:32 np0005546420.localdomain python3[60183]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:16:32 np0005546420.localdomain sudo[60181]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:33 np0005546420.localdomain sudo[60231]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqgpvjjujgxszpjttyychyxgfjkarmvx ; /usr/bin/python3
Dec 05 08:16:33 np0005546420.localdomain sudo[60231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:33 np0005546420.localdomain sudo[60231]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:33 np0005546420.localdomain sudo[60249]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxmqnunxhddazdfvvbxnpasmoztgekei ; /usr/bin/python3
Dec 05 08:16:33 np0005546420.localdomain sudo[60249]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:33 np0005546420.localdomain sudo[60249]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:34 np0005546420.localdomain sudo[60353]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apwtokubshzejpetewdcnsosjszmqpum ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922593.8091533-99314-225182330728388/async_wrapper.py 241016258628 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922593.8091533-99314-225182330728388/AnsiballZ_command.py _
Dec 05 08:16:34 np0005546420.localdomain sudo[60353]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 05 08:16:34 np0005546420.localdomain ansible-async_wrapper.py[60355]: Invoked with 241016258628 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922593.8091533-99314-225182330728388/AnsiballZ_command.py _
Dec 05 08:16:34 np0005546420.localdomain ansible-async_wrapper.py[60358]: Starting module and watcher
Dec 05 08:16:34 np0005546420.localdomain ansible-async_wrapper.py[60358]: Start watching 60359 (3600)
Dec 05 08:16:34 np0005546420.localdomain ansible-async_wrapper.py[60359]: Start module (60359)
Dec 05 08:16:34 np0005546420.localdomain ansible-async_wrapper.py[60355]: Return async_wrapper task started.
Dec 05 08:16:34 np0005546420.localdomain sudo[60353]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:34 np0005546420.localdomain sudo[60374]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iayxuquzkoimcddloryrpwwnrlqohbbi ; /usr/bin/python3
Dec 05 08:16:34 np0005546420.localdomain sudo[60374]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:34 np0005546420.localdomain python3[60379]: ansible-ansible.legacy.async_status Invoked with jid=241016258628.60355 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:16:34 np0005546420.localdomain sudo[60374]: pam_unix(sudo:session): session closed for user root
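This is Ansible's async machinery end to end: sudo launches async_wrapper.py with a job id (241016258628) and a 3600-second timeout, the wrapper forks a watcher plus the real AnsiballZ_command module and returns immediately, and the controller then polls ansible.legacy.async_status with jid=241016258628.60355 until the job finishes. A minimal sketch of the polling side, assuming the layout async_wrapper uses, a JSON status file named after the jid inside _async_dir that gains "finished": 1 on completion:

    import json
    import pathlib
    import time

    def wait_for(jid: str, async_dir: str = "/tmp/.ansible_async",
                 interval: float = 5.0) -> dict:
        # async_wrapper writes the status file under _async_dir, named
        # after the jid; poll it until the module reports completion.
        status_file = pathlib.Path(async_dir) / jid
        while True:
            data = json.loads(status_file.read_text())
            if data.get("finished"):
                return data
            time.sleep(interval)

    # e.g. wait_for("241016258628.60355") on the managed host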
Dec 05 08:16:37 np0005546420.localdomain puppet-user[60378]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:16:37 np0005546420.localdomain puppet-user[60378]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:16:37 np0005546420.localdomain puppet-user[60378]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:16:37 np0005546420.localdomain puppet-user[60378]:    (file & line not available)
Dec 05 08:16:37 np0005546420.localdomain puppet-user[60378]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:16:37 np0005546420.localdomain puppet-user[60378]:    (file & line not available)
Dec 05 08:16:37 np0005546420.localdomain puppet-user[60378]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 05 08:16:37 np0005546420.localdomain puppet-user[60378]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 05 08:16:37 np0005546420.localdomain puppet-user[60378]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.10 seconds
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]: Notice: Applied catalog in 0.04 seconds
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]: Application:
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:    Initial environment: production
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:    Converged environment: production
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:          Run mode: user
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]: Changes:
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]: Events:
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]: Resources:
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:             Total: 10
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]: Time:
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:          Schedule: 0.00
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:              File: 0.00
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:              Exec: 0.01
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:            Augeas: 0.01
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:    Transaction evaluation: 0.02
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:    Catalog application: 0.04
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:    Config retrieval: 0.13
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:          Last run: 1764922598
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:        Filebucket: 0.00
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:             Total: 0.04
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]: Version:
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:            Config: 1764922597
Dec 05 08:16:38 np0005546420.localdomain puppet-user[60378]:            Puppet: 7.10.0
Dec 05 08:16:38 np0005546420.localdomain ansible-async_wrapper.py[60359]: Module complete (60359)
Dec 05 08:16:39 np0005546420.localdomain ansible-async_wrapper.py[60358]: Done in kid B.
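The async job wrapped a puppet apply: the catalog for np0005546420.localdomain compiled in 0.10 s and applied in 0.04 s across 10 resources, after which the wrapper reports completion. The report's Config and Last run fields are Unix epoch seconds; a quick conversion shows they land in the same 08:16:37-08:16:38 window as the surrounding journal timestamps:

    from datetime import datetime, timezone

    # Epoch values taken verbatim from the puppet report above.
    for label, epoch in [("Config", 1764922597), ("Last run", 1764922598)]:
        print(label, datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())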
Dec 05 08:16:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:16:44 np0005546420.localdomain podman[60490]: 2025-12-05 08:16:44.504459843 +0000 UTC m=+0.081944783 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public)
Dec 05 08:16:44 np0005546420.localdomain podman[60490]: 2025-12-05 08:16:44.717325968 +0000 UTC m=+0.294810968 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc.)
Dec 05 08:16:44 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:16:44 np0005546420.localdomain sudo[60518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:16:44 np0005546420.localdomain sudo[60518]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:16:44 np0005546420.localdomain sudo[60518]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:44 np0005546420.localdomain sudo[60533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:16:44 np0005546420.localdomain sudo[60533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:16:44 np0005546420.localdomain sudo[60560]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwaezofasnyqvvyscqhxepmumopfiagp ; /usr/bin/python3
Dec 05 08:16:44 np0005546420.localdomain sudo[60560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:45 np0005546420.localdomain python3[60563]: ansible-ansible.legacy.async_status Invoked with jid=241016258628.60355 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:16:45 np0005546420.localdomain sudo[60560]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:45 np0005546420.localdomain sudo[60533]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:45 np0005546420.localdomain sudo[60610]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpkceqscsldgtqldzsootfihojqknbfs ; /usr/bin/python3
Dec 05 08:16:45 np0005546420.localdomain sudo[60610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:45 np0005546420.localdomain python3[60612]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:16:45 np0005546420.localdomain sudo[60610]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:45 np0005546420.localdomain sudo[60626]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikyqwifpbfkvuzpfbqautffdaunmylqo ; /usr/bin/python3
Dec 05 08:16:45 np0005546420.localdomain sudo[60626]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:46 np0005546420.localdomain sudo[60629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:16:46 np0005546420.localdomain sudo[60629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:16:46 np0005546420.localdomain sudo[60629]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:46 np0005546420.localdomain python3[60628]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:16:46 np0005546420.localdomain sudo[60626]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:46 np0005546420.localdomain sudo[60691]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-inzsfwsvqpxgnscrirhmhsmyshmugrza ; /usr/bin/python3
Dec 05 08:16:46 np0005546420.localdomain sudo[60691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:46 np0005546420.localdomain python3[60693]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:16:46 np0005546420.localdomain sudo[60691]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:46 np0005546420.localdomain sudo[60709]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mizfyjbgvuhvrywlvnflpxqfrfmkwiab ; /usr/bin/python3
Dec 05 08:16:46 np0005546420.localdomain sudo[60709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:46 np0005546420.localdomain python3[60711]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpf5m39hjp recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:16:46 np0005546420.localdomain sudo[60709]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:47 np0005546420.localdomain sudo[60739]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqzbzczntexcnuqrcfryszolmbhzybmf ; /usr/bin/python3
Dec 05 08:16:47 np0005546420.localdomain sudo[60739]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:47 np0005546420.localdomain python3[60741]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:16:47 np0005546420.localdomain sudo[60739]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:47 np0005546420.localdomain sudo[60755]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fajsrfsuzygenqdfvclkpcswiwvchhnh ; /usr/bin/python3
Dec 05 08:16:47 np0005546420.localdomain sudo[60755]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:48 np0005546420.localdomain sudo[60755]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:48 np0005546420.localdomain sudo[60842]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnczxzrawcwtdcbvnerlzseyqvaxfrlt ; /usr/bin/python3
Dec 05 08:16:48 np0005546420.localdomain sudo[60842]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:48 np0005546420.localdomain python3[60844]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 05 08:16:48 np0005546420.localdomain sudo[60842]: pam_unix(sudo:session): session closed for user root
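ansible.posix.synchronize with mode=push, archive=True and compress=True is a wrapper around rsync; the task mirrors the host's /opt/puppetlabs/ tree into /var/lib/container-puppet/puppetlabs/ (relabeled svirt_sandbox_file_t earlier) so container-puppet runs can use facter without touching the host copy. Roughly the equivalent direct invocation, assuming the default flag mapping (archive to -a, compress to -z) and omitting the module's extra bookkeeping options:

    import subprocess

    # Approximation of the synchronize task above; the real module adds
    # further flags (e.g. output formatting) that are omitted here.
    subprocess.run(
        ["rsync", "-az", "/opt/puppetlabs/", "/var/lib/container-puppet/puppetlabs/"],
        check=True,
    )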
Dec 05 08:16:49 np0005546420.localdomain sudo[60861]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljfbrrqckdiadkcikbrclofdijntctbp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:49 np0005546420.localdomain sudo[60861]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:49 np0005546420.localdomain python3[60863]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:16:49 np0005546420.localdomain sudo[60861]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:49 np0005546420.localdomain sudo[60877]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zbwggayjfzbcptrhjlbxdjwvsgbttnxz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:49 np0005546420.localdomain sudo[60877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:50 np0005546420.localdomain sudo[60877]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:50 np0005546420.localdomain sudo[60893]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxseduvlsisymkovksaqipsftjixmuyz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:50 np0005546420.localdomain sudo[60893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:50 np0005546420.localdomain python3[60895]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:16:50 np0005546420.localdomain sudo[60893]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:51 np0005546420.localdomain sudo[60943]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvxlyhsolejcuhutawibiiactcdrqgzo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:51 np0005546420.localdomain sudo[60943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:51 np0005546420.localdomain python3[60945]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:16:51 np0005546420.localdomain sudo[60943]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:51 np0005546420.localdomain sudo[60961]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rojyulqnzwfdrbnrgxnjeaixwyqtbado ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:51 np0005546420.localdomain sudo[60961]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:51 np0005546420.localdomain python3[60963]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:16:51 np0005546420.localdomain sudo[60961]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:52 np0005546420.localdomain sudo[61023]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvisovwasuczppvuqodeeobuonbwxohs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:52 np0005546420.localdomain sudo[61023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:52 np0005546420.localdomain python3[61025]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:16:52 np0005546420.localdomain sudo[61023]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:52 np0005546420.localdomain sudo[61041]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iyixrkchkeejqaggtuteeiqmmwxjbkkd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:52 np0005546420.localdomain sudo[61041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:52 np0005546420.localdomain python3[61043]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:16:52 np0005546420.localdomain sudo[61041]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:52 np0005546420.localdomain sudo[61103]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xttjssiguxkyqxmuckdonxlsdxjpwupe ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:52 np0005546420.localdomain sudo[61103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:53 np0005546420.localdomain python3[61105]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:16:53 np0005546420.localdomain sudo[61103]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:53 np0005546420.localdomain sudo[61121]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cnbyecwbmjanxeuysrunsohbkbwwdwbx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:53 np0005546420.localdomain sudo[61121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:53 np0005546420.localdomain python3[61123]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:16:53 np0005546420.localdomain sudo[61121]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:53 np0005546420.localdomain sudo[61183]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzdvmbrkznbabyiljffppjeimlhlwetj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:53 np0005546420.localdomain sudo[61183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:53 np0005546420.localdomain python3[61185]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:16:53 np0005546420.localdomain sudo[61183]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:54 np0005546420.localdomain sudo[61201]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nelworhaaechcstrsidhrlnvotpibhbr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:54 np0005546420.localdomain sudo[61201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:54 np0005546420.localdomain python3[61203]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:16:54 np0005546420.localdomain sudo[61201]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:54 np0005546420.localdomain sudo[61231]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irpllyuxdmpprzfgtbirrpwiwiedfael ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:54 np0005546420.localdomain sudo[61231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:54 np0005546420.localdomain python3[61233]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:16:54 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:16:54 np0005546420.localdomain systemd-rc-local-generator[61261]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:16:54 np0005546420.localdomain systemd-sysv-generator[61264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:16:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:16:54 np0005546420.localdomain systemd[1]: Starting dnf makecache...
Dec 05 08:16:55 np0005546420.localdomain sudo[61231]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:55 np0005546420.localdomain dnf[61271]: Updating Subscription Management repositories.
Dec 05 08:16:55 np0005546420.localdomain sudo[61318]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ibcjefcxqmhlezffwkjvtmuibawndmqe ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:55 np0005546420.localdomain sudo[61318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:55 np0005546420.localdomain python3[61320]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:16:55 np0005546420.localdomain sudo[61318]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:55 np0005546420.localdomain sudo[61336]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vurzidmrgbonboudymweadhnizblwqzp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:55 np0005546420.localdomain sudo[61336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:55 np0005546420.localdomain python3[61338]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:16:55 np0005546420.localdomain sudo[61336]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:56 np0005546420.localdomain sudo[61398]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lhdxiahzfucuxcruwqpgnkijwgiepkhi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:56 np0005546420.localdomain sudo[61398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:56 np0005546420.localdomain python3[61400]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:16:56 np0005546420.localdomain sudo[61398]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:56 np0005546420.localdomain sudo[61416]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uzitgrqpnlwugttxkviryiewaummfqti ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:56 np0005546420.localdomain sudo[61416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:56 np0005546420.localdomain python3[61418]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:16:56 np0005546420.localdomain sudo[61416]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:56 np0005546420.localdomain sudo[61446]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bunhnmfduowdmdxjygusxvodvudbumvg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:56 np0005546420.localdomain sudo[61446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:56 np0005546420.localdomain dnf[61271]: Metadata cache refreshed recently.
Dec 05 08:16:57 np0005546420.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 05 08:16:57 np0005546420.localdomain systemd[1]: Finished dnf makecache.
Dec 05 08:16:57 np0005546420.localdomain systemd[1]: dnf-makecache.service: Consumed 1.967s CPU time.
Dec 05 08:16:57 np0005546420.localdomain python3[61448]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:16:57 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:16:57 np0005546420.localdomain systemd-rc-local-generator[61471]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:16:57 np0005546420.localdomain systemd-sysv-generator[61475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:16:57 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:16:57 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 08:16:57 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 08:16:57 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 08:16:57 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 08:16:57 np0005546420.localdomain sudo[61446]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:57 np0005546420.localdomain sudo[61503]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-szxcyyfwoninyavgmivpwuxadpfmcebm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:57 np0005546420.localdomain sudo[61503]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:57 np0005546420.localdomain python3[61505]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 05 08:16:57 np0005546420.localdomain sudo[61503]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:58 np0005546420.localdomain sudo[61519]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rnvhzlzllgrkxcyppomilrzdugejnbdg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:58 np0005546420.localdomain sudo[61519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:58 np0005546420.localdomain sudo[61519]: pam_unix(sudo:session): session closed for user root
Dec 05 08:16:59 np0005546420.localdomain sudo[61560]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mkdmgqxnsnbxiwwosbsctjwywezuffmu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:16:59 np0005546420.localdomain sudo[61560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:16:59 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 05 08:17:00 np0005546420.localdomain podman[61718]: 2025-12-05 08:17:00.076432136 +0000 UTC m=+0.066812385 container create d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, container_name=rsyslog, architecture=x86_64, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 05 08:17:00 np0005546420.localdomain podman[61731]: 2025-12-05 08:17:00.101731731 +0000 UTC m=+0.083408580 container create 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, container_name=collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libpod-conmon-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a.scope.
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libpod-conmon-63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.scope.
Dec 05 08:17:00 np0005546420.localdomain podman[61741]: 2025-12-05 08:17:00.126048655 +0000 UTC m=+0.095977709 container create a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1a749154e63d40b680bc56b84ad99f9346ef73a071954dcf2dda725e125803/merged/scripts supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1a749154e63d40b680bc56b84ad99f9346ef73a071954dcf2dda725e125803/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain podman[61718]: 2025-12-05 08:17:00.042660838 +0000 UTC m=+0.033041107 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 05 08:17:00 np0005546420.localdomain podman[61731]: 2025-12-05 08:17:00.048677664 +0000 UTC m=+0.030354513 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libpod-conmon-a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a.scope.
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:17:00 np0005546420.localdomain podman[61731]: 2025-12-05 08:17:00.168196782 +0000 UTC m=+0.149873631 container init 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team)
Dec 05 08:17:00 np0005546420.localdomain podman[61741]: 2025-12-05 08:17:00.068729006 +0000 UTC m=+0.038658060 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9b52405571b7dbea88b728550de84377ddb5cebafdc587dadde8e1530aa413/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9b52405571b7dbea88b728550de84377ddb5cebafdc587dadde8e1530aa413/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9b52405571b7dbea88b728550de84377ddb5cebafdc587dadde8e1530aa413/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9b52405571b7dbea88b728550de84377ddb5cebafdc587dadde8e1530aa413/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9b52405571b7dbea88b728550de84377ddb5cebafdc587dadde8e1530aa413/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9b52405571b7dbea88b728550de84377ddb5cebafdc587dadde8e1530aa413/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f9b52405571b7dbea88b728550de84377ddb5cebafdc587dadde8e1530aa413/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:17:00 np0005546420.localdomain podman[61766]: 2025-12-05 08:17:00.183508178 +0000 UTC m=+0.120652645 container create 3436625262c0d6a8d425673ff154c7b6f4d6e143b4ba733fbb7e5532420f42fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_statedir_owner, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, io.openshift.expose-services=, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com)
Dec 05 08:17:00 np0005546420.localdomain sudo[61807]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:00 np0005546420.localdomain systemd-logind[762]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 05 08:17:00 np0005546420.localdomain podman[61718]: 2025-12-05 08:17:00.202461836 +0000 UTC m=+0.192842085 container init d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, container_name=rsyslog, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-rsyslog, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:17:00 np0005546420.localdomain podman[61749]: 2025-12-05 08:17:00.219417952 +0000 UTC m=+0.180313676 container create c56a91a521ca13953e2d4d9c7da780f3481ff312136203d6f38c8a5305c83fa0 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-11-19T00:12:45Z, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_init_log, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step3)
Dec 05 08:17:00 np0005546420.localdomain podman[61731]: 2025-12-05 08:17:00.235633316 +0000 UTC m=+0.217310195 container start 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 05 08:17:00 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=4767aaabc3de112d8791c290aa2b669d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libpod-conmon-3436625262c0d6a8d425673ff154c7b6f4d6e143b4ba733fbb7e5532420f42fa.scope.
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libpod-conmon-c56a91a521ca13953e2d4d9c7da780f3481ff312136203d6f38c8a5305c83fa0.scope.
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 05 08:17:00 np0005546420.localdomain podman[61766]: 2025-12-05 08:17:00.152901378 +0000 UTC m=+0.090045815 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 05 08:17:00 np0005546420.localdomain podman[61718]: 2025-12-05 08:17:00.253337545 +0000 UTC m=+0.243717794 container start d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, distribution-scope=public, container_name=rsyslog, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:00 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=a03f6602210fb500978d9137df7e914f --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5d5b29b19f7b6b07c8152e6495d006ec06094c7b209466fc3f0158f64c00cf/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain sudo[61832]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5d5b29b19f7b6b07c8152e6495d006ec06094c7b209466fc3f0158f64c00cf/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5d5b29b19f7b6b07c8152e6495d006ec06094c7b209466fc3f0158f64c00cf/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain sudo[61832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ea8042eed03f5757776778f40d7e49103f0ad20171558f729fe6c81cd471bb/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain podman[61766]: 2025-12-05 08:17:00.271202119 +0000 UTC m=+0.208346556 container init 3436625262c0d6a8d425673ff154c7b6f4d6e143b4ba733fbb7e5532420f42fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, container_name=nova_statedir_owner, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:17:00 np0005546420.localdomain podman[61749]: 2025-12-05 08:17:00.179354659 +0000 UTC m=+0.140250403 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 05 08:17:00 np0005546420.localdomain podman[61766]: 2025-12-05 08:17:00.282389046 +0000 UTC m=+0.219533483 container start 3436625262c0d6a8d425673ff154c7b6f4d6e143b4ba733fbb7e5532420f42fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, container_name=nova_statedir_owner, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:17:00 np0005546420.localdomain podman[61766]: 2025-12-05 08:17:00.282647774 +0000 UTC m=+0.219792201 container attach 3436625262c0d6a8d425673ff154c7b6f4d6e143b4ba733fbb7e5532420f42fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=nova_statedir_owner, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:00 np0005546420.localdomain podman[61741]: 2025-12-05 08:17:00.285715759 +0000 UTC m=+0.255644833 container init a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_id=tripleo_step3)
Dec 05 08:17:00 np0005546420.localdomain podman[61741]: 2025-12-05 08:17:00.294024687 +0000 UTC m=+0.263953821 container start a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, tcib_managed=true, container_name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 05 08:17:00 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ac0f5be6f71e6f8c16cd05155c4b5429 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:00 np0005546420.localdomain sudo[61861]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:00 np0005546420.localdomain systemd-logind[762]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 05 08:17:00 np0005546420.localdomain podman[61749]: 2025-12-05 08:17:00.329059364 +0000 UTC m=+0.289955078 container init c56a91a521ca13953e2d4d9c7da780f3481ff312136203d6f38c8a5305c83fa0 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_init_log, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: libpod-3436625262c0d6a8d425673ff154c7b6f4d6e143b4ba733fbb7e5532420f42fa.scope: Deactivated successfully.
Dec 05 08:17:00 np0005546420.localdomain sudo[61832]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:00 np0005546420.localdomain podman[61749]: 2025-12-05 08:17:00.335743241 +0000 UTC m=+0.296638955 container start c56a91a521ca13953e2d4d9c7da780f3481ff312136203d6f38c8a5305c83fa0 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_init_log, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step3, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, version=17.1.12, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: libpod-c56a91a521ca13953e2d4d9c7da780f3481ff312136203d6f38c8a5305c83fa0.scope: Deactivated successfully.
Dec 05 08:17:00 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: libpod-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a.scope: Deactivated successfully.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Queued start job for default target Main User Target.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Created slice User Application Slice.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Reached target Paths.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Reached target Timers.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Starting D-Bus User Message Bus Socket...
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Starting Create User's Volatile Files and Directories...
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Finished Create User's Volatile Files and Directories.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Listening on D-Bus User Message Bus Socket.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Reached target Sockets.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Reached target Basic System.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Reached target Main User Target.
Dec 05 08:17:00 np0005546420.localdomain systemd[61830]: Startup finished in 101ms.
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started User Manager for UID 0.
Dec 05 08:17:00 np0005546420.localdomain podman[61895]: 2025-12-05 08:17:00.40597512 +0000 UTC m=+0.053264254 container died c56a91a521ca13953e2d4d9c7da780f3481ff312136203d6f38c8a5305c83fa0 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started Session c1 of User root.
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started Session c2 of User root.
Dec 05 08:17:00 np0005546420.localdomain sudo[61807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:00 np0005546420.localdomain sudo[61861]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:00 np0005546420.localdomain podman[61895]: 2025-12-05 08:17:00.431374719 +0000 UTC m=+0.078663832 container cleanup c56a91a521ca13953e2d4d9c7da780f3481ff312136203d6f38c8a5305c83fa0 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: libpod-conmon-c56a91a521ca13953e2d4d9c7da780f3481ff312136203d6f38c8a5305c83fa0.scope: Deactivated successfully.
Dec 05 08:17:00 np0005546420.localdomain podman[61888]: 2025-12-05 08:17:00.439235123 +0000 UTC m=+0.091710587 container died 3436625262c0d6a8d425673ff154c7b6f4d6e143b4ba733fbb7e5532420f42fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, container_name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:17:00 np0005546420.localdomain sudo[61807]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: session-c1.scope: Deactivated successfully.
Dec 05 08:17:00 np0005546420.localdomain podman[61806]: 2025-12-05 08:17:00.419385577 +0000 UTC m=+0.228743189 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=collectd, release=1761123044, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:17:00 np0005546420.localdomain sudo[61861]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: session-c2.scope: Deactivated successfully.
Dec 05 08:17:00 np0005546420.localdomain podman[61806]: 2025-12-05 08:17:00.498585694 +0000 UTC m=+0.307943306 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, name=rhosp17/openstack-collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:17:00 np0005546420.localdomain podman[61806]: unhealthy
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Failed with result 'exit-code'.
Dec 05 08:17:00 np0005546420.localdomain podman[61888]: 2025-12-05 08:17:00.526541541 +0000 UTC m=+0.179016985 container cleanup 3436625262c0d6a8d425673ff154c7b6f4d6e143b4ba733fbb7e5532420f42fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, container_name=nova_statedir_owner, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: libpod-conmon-3436625262c0d6a8d425673ff154c7b6f4d6e143b4ba733fbb7e5532420f42fa.scope: Deactivated successfully.
Dec 05 08:17:00 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py
Dec 05 08:17:00 np0005546420.localdomain podman[61915]: 2025-12-05 08:17:00.615317646 +0000 UTC m=+0.248131850 container died d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_step3, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:17:00 np0005546420.localdomain podman[61915]: 2025-12-05 08:17:00.69568272 +0000 UTC m=+0.328496914 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: libpod-conmon-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a.scope: Deactivated successfully.
Dec 05 08:17:00 np0005546420.localdomain podman[62079]: 2025-12-05 08:17:00.768039136 +0000 UTC m=+0.078062484 container create 280fc05a076c2b76634d8f2eb6427fde96de83699a63efeba89f5ad45b6d7211 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z)
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libpod-conmon-280fc05a076c2b76634d8f2eb6427fde96de83699a63efeba89f5ad45b6d7211.scope.
Dec 05 08:17:00 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea8e4909d423a2f774d29952169dc80a392dd27e19e796f4f6462b620f27970/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea8e4909d423a2f774d29952169dc80a392dd27e19e796f4f6462b620f27970/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea8e4909d423a2f774d29952169dc80a392dd27e19e796f4f6462b620f27970/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea8e4909d423a2f774d29952169dc80a392dd27e19e796f4f6462b620f27970/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:00 np0005546420.localdomain podman[62079]: 2025-12-05 08:17:00.729384626 +0000 UTC m=+0.039408004 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:00 np0005546420.localdomain podman[62079]: 2025-12-05 08:17:00.836211371 +0000 UTC m=+0.146234739 container init 280fc05a076c2b76634d8f2eb6427fde96de83699a63efeba89f5ad45b6d7211 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 05 08:17:00 np0005546420.localdomain podman[62079]: 2025-12-05 08:17:00.841712451 +0000 UTC m=+0.151735789 container start 280fc05a076c2b76634d8f2eb6427fde96de83699a63efeba89f5ad45b6d7211 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, vcs-type=git, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:17:01 np0005546420.localdomain podman[62145]: 2025-12-05 08:17:01.014853264 +0000 UTC m=+0.069811677 container create 2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, container_name=nova_virtsecretd, release=1761123044, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container)
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started libpod-conmon-2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130.scope.
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a594ca6f65c5dc922c764b7fba6bddaef9e5a11599ecac6b1adff7ab94f7ceb9/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a594ca6f65c5dc922c764b7fba6bddaef9e5a11599ecac6b1adff7ab94f7ceb9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a594ca6f65c5dc922c764b7fba6bddaef9e5a11599ecac6b1adff7ab94f7ceb9/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a594ca6f65c5dc922c764b7fba6bddaef9e5a11599ecac6b1adff7ab94f7ceb9/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a594ca6f65c5dc922c764b7fba6bddaef9e5a11599ecac6b1adff7ab94f7ceb9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a594ca6f65c5dc922c764b7fba6bddaef9e5a11599ecac6b1adff7ab94f7ceb9/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a594ca6f65c5dc922c764b7fba6bddaef9e5a11599ecac6b1adff7ab94f7ceb9/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain podman[62145]: 2025-12-05 08:17:00.98024962 +0000 UTC m=+0.035208093 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:01 np0005546420.localdomain podman[62145]: 2025-12-05 08:17:01.085404713 +0000 UTC m=+0.140363146 container init 2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtsecretd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16-merged.mount: Deactivated successfully.
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a-userdata-shm.mount: Deactivated successfully.
Dec 05 08:17:01 np0005546420.localdomain podman[62145]: 2025-12-05 08:17:01.098382165 +0000 UTC m=+0.153340608 container start 2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, container_name=nova_virtsecretd, vcs-type=git, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, version=17.1.12, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:17:01 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ac0f5be6f71e6f8c16cd05155c4b5429 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:01 np0005546420.localdomain sudo[62165]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:01 np0005546420.localdomain systemd-logind[762]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started Session c3 of User root.
Dec 05 08:17:01 np0005546420.localdomain sudo[62165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:01 np0005546420.localdomain sudo[62165]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: session-c3.scope: Deactivated successfully.
Dec 05 08:17:01 np0005546420.localdomain podman[62273]: 2025-12-05 08:17:01.599647669 +0000 UTC m=+0.081667795 container create 845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, container_name=nova_virtnodedevd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Dec 05 08:17:01 np0005546420.localdomain podman[62296]: 2025-12-05 08:17:01.63834239 +0000 UTC m=+0.075443282 container create a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started libpod-conmon-845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986.scope.
Dec 05 08:17:01 np0005546420.localdomain podman[62273]: 2025-12-05 08:17:01.562115485 +0000 UTC m=+0.044135611 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started libpod-conmon-a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.scope.
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09081a0f64c6cf9725f53043f5bfef7ea250bf1548c4bcadf49dc8ee839156/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09081a0f64c6cf9725f53043f5bfef7ea250bf1548c4bcadf49dc8ee839156/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09081a0f64c6cf9725f53043f5bfef7ea250bf1548c4bcadf49dc8ee839156/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09081a0f64c6cf9725f53043f5bfef7ea250bf1548c4bcadf49dc8ee839156/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09081a0f64c6cf9725f53043f5bfef7ea250bf1548c4bcadf49dc8ee839156/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09081a0f64c6cf9725f53043f5bfef7ea250bf1548c4bcadf49dc8ee839156/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb09081a0f64c6cf9725f53043f5bfef7ea250bf1548c4bcadf49dc8ee839156/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5576283f602b49fd74c99052bb7baa8b8fd55184846126f29133b6a14b7c4f/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5576283f602b49fd74c99052bb7baa8b8fd55184846126f29133b6a14b7c4f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:01 np0005546420.localdomain podman[62273]: 2025-12-05 08:17:01.678904039 +0000 UTC m=+0.160924125 container init 845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, container_name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:17:01 np0005546420.localdomain podman[62273]: 2025-12-05 08:17:01.684773711 +0000 UTC m=+0.166793797 container start 845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, container_name=nova_virtnodedevd, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:17:01 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ac0f5be6f71e6f8c16cd05155c4b5429 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:01 np0005546420.localdomain podman[62296]: 2025-12-05 08:17:01.594702846 +0000 UTC m=+0.031803788 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 05 08:17:01 np0005546420.localdomain sudo[62322]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:01 np0005546420.localdomain systemd-logind[762]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started Session c4 of User root.
Dec 05 08:17:01 np0005546420.localdomain sudo[62322]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:17:01 np0005546420.localdomain podman[62296]: 2025-12-05 08:17:01.754416492 +0000 UTC m=+0.191517394 container init a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container)
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:17:01 np0005546420.localdomain sudo[62340]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:01 np0005546420.localdomain podman[62296]: 2025-12-05 08:17:01.782442441 +0000 UTC m=+0.219543323 container start a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 08:17:01 np0005546420.localdomain systemd-logind[762]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 05 08:17:01 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f466dfc41ade6bb0052985f932e2b61e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: Started Session c5 of User root.
Dec 05 08:17:01 np0005546420.localdomain sudo[62340]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:01 np0005546420.localdomain sudo[62322]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: session-c4.scope: Deactivated successfully.
Dec 05 08:17:01 np0005546420.localdomain sudo[62340]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: session-c5.scope: Deactivated successfully.
Dec 05 08:17:01 np0005546420.localdomain podman[62341]: 2025-12-05 08:17:01.878078649 +0000 UTC m=+0.074338498 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, build-date=2025-11-18T23:44:13Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 05 08:17:01 np0005546420.localdomain kernel: Loading iSCSI transport class v2.0-870.
Dec 05 08:17:01 np0005546420.localdomain podman[62341]: 2025-12-05 08:17:01.92226232 +0000 UTC m=+0.118522149 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, distribution-scope=public)
Dec 05 08:17:01 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:17:02 np0005546420.localdomain podman[62457]: 2025-12-05 08:17:02.312440576 +0000 UTC m=+0.087833236 container create 3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-19T00:35:22Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtstoraged, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20251118.1)
Dec 05 08:17:02 np0005546420.localdomain systemd[1]: Started libpod-conmon-3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781.scope.
Dec 05 08:17:02 np0005546420.localdomain podman[62457]: 2025-12-05 08:17:02.260168454 +0000 UTC m=+0.035561214 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:02 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dc5ca56cabff6fee2b8a4f6e4dde9258d2fdbc443d9294aabf255694ff62dc/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dc5ca56cabff6fee2b8a4f6e4dde9258d2fdbc443d9294aabf255694ff62dc/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dc5ca56cabff6fee2b8a4f6e4dde9258d2fdbc443d9294aabf255694ff62dc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dc5ca56cabff6fee2b8a4f6e4dde9258d2fdbc443d9294aabf255694ff62dc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dc5ca56cabff6fee2b8a4f6e4dde9258d2fdbc443d9294aabf255694ff62dc/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dc5ca56cabff6fee2b8a4f6e4dde9258d2fdbc443d9294aabf255694ff62dc/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dc5ca56cabff6fee2b8a4f6e4dde9258d2fdbc443d9294aabf255694ff62dc/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain podman[62457]: 2025-12-05 08:17:02.380930701 +0000 UTC m=+0.156323361 container init 3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtstoraged, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 05 08:17:02 np0005546420.localdomain podman[62457]: 2025-12-05 08:17:02.391793018 +0000 UTC m=+0.167185718 container start 3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, build-date=2025-11-19T00:35:22Z, tcib_managed=true, io.buildah.version=1.41.4, container_name=nova_virtstoraged, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, architecture=x86_64, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20251118.1)
Dec 05 08:17:02 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ac0f5be6f71e6f8c16cd05155c4b5429 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:02 np0005546420.localdomain sudo[62476]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:02 np0005546420.localdomain systemd-logind[762]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 05 08:17:02 np0005546420.localdomain systemd[1]: Started Session c6 of User root.
Dec 05 08:17:02 np0005546420.localdomain sudo[62476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:02 np0005546420.localdomain sudo[62476]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:02 np0005546420.localdomain systemd[1]: session-c6.scope: Deactivated successfully.
Dec 05 08:17:02 np0005546420.localdomain podman[62560]: 2025-12-05 08:17:02.78245004 +0000 UTC m=+0.087124644 container create 7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 05 08:17:02 np0005546420.localdomain systemd[1]: Started libpod-conmon-7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c.scope.
Dec 05 08:17:02 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:02 np0005546420.localdomain podman[62560]: 2025-12-05 08:17:02.846171427 +0000 UTC m=+0.150846051 container init 7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=nova_virtqemud, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, config_id=tripleo_step3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true)
Dec 05 08:17:02 np0005546420.localdomain podman[62560]: 2025-12-05 08:17:02.749335883 +0000 UTC m=+0.054010537 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:02 np0005546420.localdomain podman[62560]: 2025-12-05 08:17:02.856770626 +0000 UTC m=+0.161445260 container start 7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, name=rhosp17/openstack-nova-libvirt, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team)
Dec 05 08:17:02 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ac0f5be6f71e6f8c16cd05155c4b5429 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
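
The PODMAN-CONTAINER-DEBUG entry above is tripleo_container_manage echoing the exact podman run command it issued for nova_virtqemud, matching the container create/init/start events podman logged just before it. A minimal Python sketch for confirming the resulting container state (assuming podman is on PATH, and using the --name value from the logged command):

    import subprocess

    # Query the runtime state of the container created above; podman inspect's
    # Go-template --format flag is standard podman CLI usage.
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Status}}", "nova_virtqemud"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(status)  # expect "running" after the "container start" event logged above
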
Dec 05 08:17:02 np0005546420.localdomain sudo[62580]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:02 np0005546420.localdomain systemd-logind[762]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 05 08:17:02 np0005546420.localdomain systemd[1]: Started Session c7 of User root.
Dec 05 08:17:02 np0005546420.localdomain sudo[62580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:02 np0005546420.localdomain sudo[62580]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:03 np0005546420.localdomain systemd[1]: session-c7.scope: Deactivated successfully.
Dec 05 08:17:03 np0005546420.localdomain podman[62668]: 2025-12-05 08:17:03.286643475 +0000 UTC m=+0.090130708 container create ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, container_name=nova_virtproxyd, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, maintainer=OpenStack TripleO Team)
Dec 05 08:17:03 np0005546420.localdomain systemd[1]: Started libpod-conmon-ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed.scope.
Dec 05 08:17:03 np0005546420.localdomain podman[62668]: 2025-12-05 08:17:03.241718521 +0000 UTC m=+0.045205764 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:17:03 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb541339826395780260e54eaea5ebe9da0c74cf9b96dae2643192eb4d511174/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb541339826395780260e54eaea5ebe9da0c74cf9b96dae2643192eb4d511174/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb541339826395780260e54eaea5ebe9da0c74cf9b96dae2643192eb4d511174/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb541339826395780260e54eaea5ebe9da0c74cf9b96dae2643192eb4d511174/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb541339826395780260e54eaea5ebe9da0c74cf9b96dae2643192eb4d511174/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb541339826395780260e54eaea5ebe9da0c74cf9b96dae2643192eb4d511174/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb541339826395780260e54eaea5ebe9da0c74cf9b96dae2643192eb4d511174/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:03 np0005546420.localdomain podman[62668]: 2025-12-05 08:17:03.357532134 +0000 UTC m=+0.161019367 container init ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, version=17.1.12, container_name=nova_virtproxyd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt)
Dec 05 08:17:03 np0005546420.localdomain podman[62668]: 2025-12-05 08:17:03.367932877 +0000 UTC m=+0.171420110 container start ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=nova_virtproxyd, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:17:03 np0005546420.localdomain python3[61562]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ac0f5be6f71e6f8c16cd05155c4b5429 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
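
nova_virtproxyd is started from the same image with an almost identical invocation; only start_order (5 rather than 4), the conmon pidfile, the stdout log path, and the kolla config.json mount differ from nova_virtqemud. Because both runs label their containers with config_id=tripleo_step3 and managed_by=tripleo_ansible, everything launched in this step can be listed by label; a sketch, assuming podman is on PATH:

    import subprocess

    # List all containers carrying the config_id label set via --label above.
    out = subprocess.run(
        ["podman", "ps", "-a",
         "--filter", "label=config_id=tripleo_step3",
         "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)
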
Dec 05 08:17:03 np0005546420.localdomain sudo[62688]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:03 np0005546420.localdomain systemd-logind[762]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 05 08:17:03 np0005546420.localdomain systemd[1]: Started Session c8 of User root.
Dec 05 08:17:03 np0005546420.localdomain sudo[62688]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:03 np0005546420.localdomain sudo[62688]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:03 np0005546420.localdomain systemd[1]: session-c8.scope: Deactivated successfully.
Dec 05 08:17:03 np0005546420.localdomain sudo[61560]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:03 np0005546420.localdomain sudo[62748]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kduxuhyrzxuxvsivlypgcckiogywhbkf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:03 np0005546420.localdomain sudo[62748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:03 np0005546420.localdomain python3[62750]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:04 np0005546420.localdomain sudo[62748]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:04 np0005546420.localdomain sudo[62764]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ekiqtbcstkcioygyxymmrscpewotdjma ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:04 np0005546420.localdomain sudo[62764]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:04 np0005546420.localdomain python3[62766]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:04 np0005546420.localdomain sudo[62764]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:04 np0005546420.localdomain sudo[62780]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xlqfeazkhhlkaqrvplnpuyapyytkqwkv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:04 np0005546420.localdomain sudo[62780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:04 np0005546420.localdomain python3[62782]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:04 np0005546420.localdomain sudo[62780]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:04 np0005546420.localdomain sudo[62796]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mahqcqzqsljohoyztvcxqxcfmkesddml ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:04 np0005546420.localdomain sudo[62796]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:04 np0005546420.localdomain python3[62798]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:04 np0005546420.localdomain sudo[62796]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:04 np0005546420.localdomain sudo[62812]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-srjbulyuvpilveoqgyvqzqlveaqunues ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:04 np0005546420.localdomain sudo[62812]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:05 np0005546420.localdomain python3[62814]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:05 np0005546420.localdomain sudo[62812]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:05 np0005546420.localdomain sudo[62828]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxqrrlhvrbkcjygtzmmbbpoufzplcann ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:05 np0005546420.localdomain sudo[62828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:05 np0005546420.localdomain python3[62830]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:05 np0005546420.localdomain sudo[62828]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:05 np0005546420.localdomain sudo[62844]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjjdotaokffopoujhpbgrezsgepygemg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:05 np0005546420.localdomain sudo[62844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:05 np0005546420.localdomain python3[62846]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:05 np0005546420.localdomain sudo[62844]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:05 np0005546420.localdomain sudo[62860]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhhoiehnpqpolzlbrxzeqosxaxbiybgf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:05 np0005546420.localdomain sudo[62860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:05 np0005546420.localdomain python3[62862]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:05 np0005546420.localdomain sudo[62860]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:05 np0005546420.localdomain sudo[62876]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ifkuunxqnuscufcbhlhdmhfnnfenqhko ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:05 np0005546420.localdomain sudo[62876]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:05 np0005546420.localdomain python3[62878]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:05 np0005546420.localdomain sudo[62876]: pam_unix(sudo:session): session closed for user root
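
The nine ansible-file tasks above delete stale tripleo_*.requires drop-in directories with state=absent, which is idempotent: paths that are already gone produce no change. A rough Python equivalent of that loop (unit names taken from the log; this is a sketch, not the ansible file module itself):

    import shutil
    from pathlib import Path

    UNITS = ["collectd", "iscsid", "nova_virtlogd_wrapper", "nova_virtnodedevd",
             "nova_virtproxyd", "nova_virtqemud", "nova_virtsecretd",
             "nova_virtstoraged", "rsyslog"]

    for unit in UNITS:
        path = Path(f"/etc/systemd/system/tripleo_{unit}.requires")
        if path.is_dir():
            # state=absent removes directories recursively; the logged
            # recurse=False only governs attribute changes, not deletion.
            shutil.rmtree(path)
        elif path.exists() or path.is_symlink():
            path.unlink()
        # already absent -> nothing to do, matching state=absent semantics
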
Dec 05 08:17:06 np0005546420.localdomain sudo[62893]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrkglnnkndjazucjjtvqvmupcatukkyz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:06 np0005546420.localdomain sudo[62893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:06 np0005546420.localdomain python3[62895]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:17:06 np0005546420.localdomain sudo[62893]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:06 np0005546420.localdomain sudo[62909]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swmcziydnynnuvnxkwdnskuahgafnggo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:06 np0005546420.localdomain sudo[62909]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:06 np0005546420.localdomain python3[62911]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:17:06 np0005546420.localdomain sudo[62909]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:06 np0005546420.localdomain sudo[62925]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wluiwycbttsbsxvkswmimrlyhbthbyms ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:06 np0005546420.localdomain sudo[62925]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:06 np0005546420.localdomain python3[62927]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:17:06 np0005546420.localdomain sudo[62925]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:06 np0005546420.localdomain sudo[62941]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uetvppcgiolfnoaggyqtzgleqzwnfhuc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:06 np0005546420.localdomain sudo[62941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:07 np0005546420.localdomain python3[62943]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:17:07 np0005546420.localdomain sudo[62941]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:07 np0005546420.localdomain sudo[62957]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xreaeuqzaoztquotkhhdkmdyhuncweqn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:07 np0005546420.localdomain sudo[62957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:07 np0005546420.localdomain python3[62959]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:17:07 np0005546420.localdomain sudo[62957]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:07 np0005546420.localdomain sudo[62973]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utgvogfrpptsbwijmssqzeceijwiuppu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:07 np0005546420.localdomain sudo[62973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:07 np0005546420.localdomain python3[62975]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:17:07 np0005546420.localdomain sudo[62973]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:07 np0005546420.localdomain sudo[62989]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahdwevpxhzchrwayenuqfcvloxgurdie ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:07 np0005546420.localdomain sudo[62989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:07 np0005546420.localdomain python3[62991]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:17:07 np0005546420.localdomain sudo[62989]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:07 np0005546420.localdomain sudo[63005]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmkyizzqroapmlascqkkcgopcpesvshx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:07 np0005546420.localdomain sudo[63005]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:08 np0005546420.localdomain python3[63007]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:17:08 np0005546420.localdomain sudo[63005]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:08 np0005546420.localdomain sudo[63022]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mhfrcfasfaandeyvbodazshagltbbhtr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:08 np0005546420.localdomain sudo[63022]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:08 np0005546420.localdomain python3[63024]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:17:08 np0005546420.localdomain sudo[63022]: pam_unix(sudo:session): session closed for user root
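
The matching run of ansible-stat tasks probes each tripleo_*_healthcheck.timer unit, returning existence plus a sha1 checksum when the file is present (get_checksum=True, checksum_algorithm=sha1), so a later task can decide whether the timer needs to be rewritten. A minimal sketch of the same probe:

    import hashlib
    from pathlib import Path

    def stat_healthcheck_timer(name: str) -> dict:
        # Mirrors the logged ansible-stat options: follow=False, sha1 checksum.
        p = Path(f"/etc/systemd/system/tripleo_{name}_healthcheck.timer")
        if not p.exists():
            return {"exists": False}
        return {"exists": True,
                "checksum": hashlib.sha1(p.read_bytes()).hexdigest()}

    print(stat_healthcheck_timer("collectd"))
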
Dec 05 08:17:08 np0005546420.localdomain sudo[63083]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxwpyqiosmzsxyfppqdoyesaqbovdwgn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:08 np0005546420.localdomain sudo[63083]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:08 np0005546420.localdomain python3[63085]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.385245-100561-45633719404742/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:08 np0005546420.localdomain sudo[63083]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:09 np0005546420.localdomain sudo[63112]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fnoewijriuufumckwagaryuitcvywgcl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:09 np0005546420.localdomain sudo[63112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:09 np0005546420.localdomain python3[63114]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.385245-100561-45633719404742/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:09 np0005546420.localdomain sudo[63112]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:09 np0005546420.localdomain sudo[63141]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvhlnyhsbnyxxclpprzgawazwrtctuzt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:09 np0005546420.localdomain sudo[63141]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:10 np0005546420.localdomain python3[63143]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.385245-100561-45633719404742/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:10 np0005546420.localdomain sudo[63141]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:10 np0005546420.localdomain sudo[63170]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhrpdtpuzlsjfeszxgphizipajgnywyu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:10 np0005546420.localdomain sudo[63170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:10 np0005546420.localdomain python3[63172]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.385245-100561-45633719404742/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:10 np0005546420.localdomain sudo[63170]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:10 np0005546420.localdomain sudo[63199]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xukeccjgbsqytbdhhqbppxvehwgmbmmc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:10 np0005546420.localdomain sudo[63199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:11 np0005546420.localdomain python3[63201]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.385245-100561-45633719404742/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:11 np0005546420.localdomain sudo[63199]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:11 np0005546420.localdomain sudo[63228]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfjrnmlswxsatyscnnbcgbievwudbnsy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:11 np0005546420.localdomain sudo[63228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:11 np0005546420.localdomain python3[63230]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.385245-100561-45633719404742/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:11 np0005546420.localdomain sudo[63228]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:12 np0005546420.localdomain sudo[63257]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbizakkzbzwynzgcwcutpglgtwygygit ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:12 np0005546420.localdomain sudo[63257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:12 np0005546420.localdomain python3[63259]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.385245-100561-45633719404742/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:12 np0005546420.localdomain sudo[63257]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:12 np0005546420.localdomain sudo[63286]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmttetymjhfmwmpaiwmoadachwczsnrm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:12 np0005546420.localdomain sudo[63286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:12 np0005546420.localdomain python3[63288]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.385245-100561-45633719404742/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:12 np0005546420.localdomain sudo[63286]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:13 np0005546420.localdomain sudo[63315]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmzmjejswxcslqdoykmnhwtwsvkldpgt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:13 np0005546420.localdomain sudo[63315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:13 np0005546420.localdomain python3[63317]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.385245-100561-45633719404742/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:13 np0005546420.localdomain sudo[63315]: pam_unix(sudo:session): session closed for user root
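
Each ansible-copy task then installs a regenerated tripleo_*.service unit from the ansible-tmp staging file with mode 0644 and root ownership, overwriting whatever is already in place (force=True). A hedged sketch of that install step; the staging path below is illustrative, standing in for the ansible-tmp "source" path in the log:

    import os
    import shutil

    src = "/home/tripleo-admin/.ansible/tmp/example-staging/source"  # hypothetical staging file
    dest = "/etc/systemd/system/tripleo_collectd.service"

    shutil.copyfile(src, dest)   # force=True: replace the existing unit file
    os.chmod(dest, 0o644)        # mode=0644 from the log
    os.chown(dest, 0, 0)         # owner=root, group=root
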
Dec 05 08:17:13 np0005546420.localdomain sudo[63331]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uabvatvlshxddpaaofbniurkdiffnngg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:13 np0005546420.localdomain sudo[63331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:13 np0005546420.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Activating special unit Exit the Session...
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Stopped target Main User Target.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Stopped target Basic System.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Stopped target Paths.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Stopped target Sockets.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Stopped target Timers.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Closed D-Bus User Message Bus Socket.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Stopped Create User's Volatile Files and Directories.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Removed slice User Application Slice.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Reached target Shutdown.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Finished Exit the Session.
Dec 05 08:17:13 np0005546420.localdomain systemd[61830]: Reached target Exit the Session.
Dec 05 08:17:13 np0005546420.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 05 08:17:13 np0005546420.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 05 08:17:15 np0005546420.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 05 08:17:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:17:15 np0005546420.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 05 08:17:15 np0005546420.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 05 08:17:15 np0005546420.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 05 08:17:15 np0005546420.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 05 08:17:15 np0005546420.localdomain podman[63335]: 2025-12-05 08:17:15.156237246 +0000 UTC m=+0.092441329 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12)
Dec 05 08:17:15 np0005546420.localdomain python3[63333]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 08:17:15 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:15 np0005546420.localdomain systemd-rc-local-generator[63384]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:15 np0005546420.localdomain systemd-sysv-generator[63391]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
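Both generator messages repeat at every "Reloading" below. systemd-rc-local-generator skips /etc/rc.d/rc.local only because the file lacks the executable bit, and systemd-sysv-generator keeps synthesising a compatibility unit for the legacy 'network' init script. A sketch of the rc.local fix, assuming the script is actually meant to run at boot:

    #!/usr/bin/env python3
    # Add the executable bit so systemd-rc-local-generator stops skipping
    # /etc/rc.d/rc.local and rc-local.service runs it at boot.
    import os, stat

    RC_LOCAL = "/etc/rc.d/rc.local"
    mode = os.stat(RC_LOCAL).st_mode
    os.chmod(RC_LOCAL, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)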
Dec 05 08:17:15 np0005546420.localdomain podman[63335]: 2025-12-05 08:17:15.356485729 +0000 UTC m=+0.292689812 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, release=1761123044)
Dec 05 08:17:15 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
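This warning flags the deprecated cgroup-v1 directive in the insights-client-boot unit; MemoryMax= is the cgroup-v2 replacement. A hedged sketch of a drop-in override using the standard /etc/systemd/system/<unit>.d/ convention; the 512M value is illustrative, not read from the unit file:

    #!/usr/bin/env python3
    # Override the deprecated MemoryLimit= with MemoryMax= via a drop-in,
    # then reload unit files so systemd picks it up.
    import pathlib, subprocess

    dropin_dir = pathlib.Path("/etc/systemd/system/insights-client-boot.service.d")
    dropin_dir.mkdir(parents=True, exist_ok=True)
    (dropin_dir / "10-memory-max.conf").write_text(
        "[Service]\n"
        "MemoryLimit=\n"    # empty assignment clears the deprecated directive
        "MemoryMax=512M\n"  # cgroup-v2 equivalent; the value is an assumption
    )
    subprocess.run(["systemctl", "daemon-reload"], check=True)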
Dec 05 08:17:15 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:17:15 np0005546420.localdomain sudo[63331]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:15 np0005546420.localdomain sudo[63414]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fmbypspachvbnaturxmwaawutmxextzl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:15 np0005546420.localdomain sudo[63414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:16 np0005546420.localdomain python3[63416]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:17:16 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:16 np0005546420.localdomain systemd-sysv-generator[63448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:16 np0005546420.localdomain systemd-rc-local-generator[63442]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:16 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:16 np0005546420.localdomain systemd[1]: Starting collectd container...
Dec 05 08:17:16 np0005546420.localdomain systemd[1]: Started collectd container.
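The pattern from here to the end of the section repeats once per service: an ansible-systemd task with state=restarted and enabled=True, a daemon reload, then "Starting ... container" and "Started ... container". In plain systemctl terms the module's effect is roughly the following, using the unit names from the log:

    #!/usr/bin/env python3
    # Approximate effect of ansible-systemd (state=restarted, enabled=True)
    # for the units restarted in this section of the log.
    import subprocess

    UNITS = [
        "tripleo_collectd.service",
        "tripleo_iscsid.service",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_rsyslog.service",
    ]
    for unit in UNITS:
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)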
Dec 05 08:17:16 np0005546420.localdomain sudo[63414]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:16 np0005546420.localdomain sudo[63480]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qeskoltxsewjneosgtilzlyhkhuheaoc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:16 np0005546420.localdomain sudo[63480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:17 np0005546420.localdomain python3[63482]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:17:18 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:18 np0005546420.localdomain systemd-rc-local-generator[63509]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:18 np0005546420.localdomain systemd-sysv-generator[63514]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:18 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:18 np0005546420.localdomain systemd[1]: Starting iscsid container...
Dec 05 08:17:18 np0005546420.localdomain systemd[1]: Started iscsid container.
Dec 05 08:17:18 np0005546420.localdomain sudo[63480]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:18 np0005546420.localdomain sudo[63548]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-miffseydtpcuxwynkjkuqmlixdzdcnzc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:18 np0005546420.localdomain sudo[63548]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:19 np0005546420.localdomain python3[63550]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:17:19 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:19 np0005546420.localdomain systemd-rc-local-generator[63578]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:19 np0005546420.localdomain systemd-sysv-generator[63583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:19 np0005546420.localdomain systemd[1]: Starting nova_virtlogd_wrapper container...
Dec 05 08:17:19 np0005546420.localdomain systemd[1]: Started nova_virtlogd_wrapper container.
Dec 05 08:17:19 np0005546420.localdomain sudo[63548]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:20 np0005546420.localdomain sudo[63616]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzqgbimzptpothsaslcdievvsfqgjmqm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:20 np0005546420.localdomain sudo[63616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:20 np0005546420.localdomain python3[63618]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:17:20 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:20 np0005546420.localdomain systemd-rc-local-generator[63646]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:20 np0005546420.localdomain systemd-sysv-generator[63649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:20 np0005546420.localdomain systemd[1]: Starting nova_virtnodedevd container...
Dec 05 08:17:20 np0005546420.localdomain tripleo-start-podman-container[63658]: Creating additional drop-in dependency for "nova_virtnodedevd" (845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986)
Dec 05 08:17:20 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:21 np0005546420.localdomain systemd-rc-local-generator[63717]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:21 np0005546420.localdomain systemd-sysv-generator[63720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:21 np0005546420.localdomain systemd[1]: Started nova_virtnodedevd container.
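For the virt* services, tripleo-start-podman-container reports "Creating additional drop-in dependency" and triggers the extra "Reloading" seen between "Starting" and "Started": the wrapper writes a systemd drop-in tying the unit to the freshly created container ID. The drop-in content is TripleO-internal, but it can be inspected together with the unit, for example:

    #!/usr/bin/env python3
    # Show the nova_virtnodedevd unit together with any drop-ins that
    # tripleo-start-podman-container generated for it.
    import subprocess

    subprocess.run(["systemctl", "cat", "tripleo_nova_virtnodedevd.service"], check=True)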
Dec 05 08:17:21 np0005546420.localdomain sudo[63616]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:21 np0005546420.localdomain sudo[63739]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gupzctlhyyiysrscrlpqvvlubskjbjls ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:21 np0005546420.localdomain sudo[63739]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:21 np0005546420.localdomain python3[63741]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:17:21 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:22 np0005546420.localdomain systemd-sysv-generator[63768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:22 np0005546420.localdomain systemd-rc-local-generator[63764]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:22 np0005546420.localdomain systemd[1]: Starting nova_virtproxyd container...
Dec 05 08:17:22 np0005546420.localdomain tripleo-start-podman-container[63781]: Creating additional drop-in dependency for "nova_virtproxyd" (ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed)
Dec 05 08:17:22 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:22 np0005546420.localdomain systemd-rc-local-generator[63834]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:22 np0005546420.localdomain systemd-sysv-generator[63838]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:22 np0005546420.localdomain systemd[1]: Started nova_virtproxyd container.
Dec 05 08:17:22 np0005546420.localdomain sudo[63739]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:23 np0005546420.localdomain sudo[63864]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwmgrdnkumklbsphpwtwqiqzfjwkccql ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:23 np0005546420.localdomain sudo[63864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:23 np0005546420.localdomain python3[63866]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:17:24 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:24 np0005546420.localdomain systemd-rc-local-generator[63893]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:24 np0005546420.localdomain systemd-sysv-generator[63897]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:24 np0005546420.localdomain systemd[1]: Starting nova_virtqemud container...
Dec 05 08:17:24 np0005546420.localdomain tripleo-start-podman-container[63906]: Creating additional drop-in dependency for "nova_virtqemud" (7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c)
Dec 05 08:17:24 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:24 np0005546420.localdomain systemd-rc-local-generator[63961]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:24 np0005546420.localdomain systemd-sysv-generator[63967]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:25 np0005546420.localdomain systemd[1]: Started nova_virtqemud container.
Dec 05 08:17:25 np0005546420.localdomain sudo[63864]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:25 np0005546420.localdomain sudo[63989]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fayzvdpsszouosihnlshwpyiyxztpsjz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:25 np0005546420.localdomain sudo[63989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:25 np0005546420.localdomain python3[63991]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:17:25 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:25 np0005546420.localdomain systemd-sysv-generator[64020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:25 np0005546420.localdomain systemd-rc-local-generator[64016]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:26 np0005546420.localdomain systemd[1]: Starting nova_virtsecretd container...
Dec 05 08:17:26 np0005546420.localdomain tripleo-start-podman-container[64031]: Creating additional drop-in dependency for "nova_virtsecretd" (2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130)
Dec 05 08:17:26 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:26 np0005546420.localdomain systemd-sysv-generator[64094]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:26 np0005546420.localdomain systemd-rc-local-generator[64089]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:26 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:26 np0005546420.localdomain systemd[1]: Started nova_virtsecretd container.
Dec 05 08:17:26 np0005546420.localdomain sudo[63989]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:26 np0005546420.localdomain sudo[64114]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-siqvltyflsxmvkobmfftbzdaopynvyil ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:26 np0005546420.localdomain sudo[64114]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:27 np0005546420.localdomain python3[64116]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:17:28 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:28 np0005546420.localdomain systemd-sysv-generator[64149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:28 np0005546420.localdomain systemd-rc-local-generator[64146]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:28 np0005546420.localdomain systemd[1]: Starting nova_virtstoraged container...
Dec 05 08:17:28 np0005546420.localdomain tripleo-start-podman-container[64156]: Creating additional drop-in dependency for "nova_virtstoraged" (3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781)
Dec 05 08:17:28 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:28 np0005546420.localdomain systemd-rc-local-generator[64214]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:28 np0005546420.localdomain systemd-sysv-generator[64218]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:29 np0005546420.localdomain systemd[1]: Started nova_virtstoraged container.
Dec 05 08:17:29 np0005546420.localdomain sudo[64114]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:29 np0005546420.localdomain sudo[64238]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efxtnwwrrhycohbajutamdklhevlbbwb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:17:29 np0005546420.localdomain sudo[64238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:29 np0005546420.localdomain python3[64240]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:17:29 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:17:29 np0005546420.localdomain systemd-rc-local-generator[64264]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:17:29 np0005546420.localdomain systemd-sysv-generator[64269]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: Starting rsyslog container...
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:30 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:30 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:30 np0005546420.localdomain podman[64280]: 2025-12-05 08:17:30.331284292 +0000 UTC m=+0.135714143 container init d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public)
Dec 05 08:17:30 np0005546420.localdomain podman[64280]: 2025-12-05 08:17:30.341259221 +0000 UTC m=+0.145689072 container start d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, build-date=2025-11-18T22:49:49Z)
Dec 05 08:17:30 np0005546420.localdomain podman[64280]: rsyslog
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: Started rsyslog container.
Dec 05 08:17:30 np0005546420.localdomain sudo[64298]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:30 np0005546420.localdomain sudo[64298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:30 np0005546420.localdomain sudo[64238]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:30 np0005546420.localdomain sudo[64298]: pam_unix(sudo:session): session closed for user root
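The two sudo[64298] lines are the kolla entrypoint inside the rsyslog container running kolla_set_configs as root. With KOLLA_CONFIG_STRATEGY=COPY_ALWAYS it re-copies the files described by /var/lib/kolla/config_files/config.json (bind-mounted from rsyslog.json, per the volumes list above) on every start. A sketch of reading that manifest, assuming the usual kolla schema with 'command' and 'config_files' keys:

    #!/usr/bin/env python3
    # Inspect the kolla config manifest that kolla_set_configs consumes.
    # The 'command'/'config_files' keys follow the usual kolla schema;
    # treat them as an assumption, not a guarantee.
    import json

    with open("/var/lib/kolla/config_files/config.json") as fh:
        cfg = json.load(fh)
    print("command:", cfg.get("command"))
    for item in cfg.get("config_files", []):
        print(item.get("source"), "->", item.get("dest"))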
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: libpod-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a.scope: Deactivated successfully.
Dec 05 08:17:30 np0005546420.localdomain podman[64309]: 2025-12-05 08:17:30.519211873 +0000 UTC m=+0.060140938 container died d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:17:30 np0005546420.localdomain podman[64309]: 2025-12-05 08:17:30.548386378 +0000 UTC m=+0.089315413 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp17/openstack-rsyslog)
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:17:30 np0005546420.localdomain sudo[64355]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqvvjibaxzdjijowguhrbwbpbzevuqzm ; /usr/bin/python3
Dec 05 08:17:30 np0005546420.localdomain sudo[64355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:30 np0005546420.localdomain podman[64328]: 2025-12-05 08:17:30.669101913 +0000 UTC m=+0.109227190 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, container_name=collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:17:30 np0005546420.localdomain podman[64329]: 2025-12-05 08:17:30.692784879 +0000 UTC m=+0.121780751 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 05 08:17:30 np0005546420.localdomain podman[64329]: rsyslog
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 05 08:17:30 np0005546420.localdomain podman[64328]: 2025-12-05 08:17:30.734848494 +0000 UTC m=+0.174973811 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, release=1761123044, io.openshift.expose-services=)
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:17:30 np0005546420.localdomain python3[64365]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
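Amid the rsyslog restarts, the deploy also runs an ansible-file task with state=absent to clean up the step-3 container-puppet task file. Its effect reduces to:

    #!/usr/bin/env python3
    # Python equivalent of ansible-file with state=absent: remove the
    # file if present, ignore it if already gone.
    import contextlib, os

    with contextlib.suppress(FileNotFoundError):
        os.remove("/var/lib/container-puppet/container-puppet-tasks3.json")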
Dec 05 08:17:30 np0005546420.localdomain sudo[64355]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1.
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: Stopped rsyslog container.
Dec 05 08:17:30 np0005546420.localdomain systemd[1]: Starting rsyslog container...
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:31 np0005546420.localdomain podman[64373]: 2025-12-05 08:17:31.046192284 +0000 UTC m=+0.117896569 container init d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:49:49Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, batch=17.1_20251118.1)
Dec 05 08:17:31 np0005546420.localdomain podman[64373]: 2025-12-05 08:17:31.054999087 +0000 UTC m=+0.126703372 container start d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Dec 05 08:17:31 np0005546420.localdomain podman[64373]: rsyslog
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: Started rsyslog container.
Dec 05 08:17:31 np0005546420.localdomain sudo[64393]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:31 np0005546420.localdomain sudo[64393]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:31 np0005546420.localdomain sudo[64393]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: libpod-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a.scope: Deactivated successfully.
Dec 05 08:17:31 np0005546420.localdomain podman[64418]: 2025-12-05 08:17:31.231627838 +0000 UTC m=+0.058639211 container died d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16-merged.mount: Deactivated successfully.
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a-userdata-shm.mount: Deactivated successfully.
Dec 05 08:17:31 np0005546420.localdomain podman[64418]: 2025-12-05 08:17:31.26198623 +0000 UTC m=+0.088997563 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, container_name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64)
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:17:31 np0005546420.localdomain sudo[64455]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iwldmouivnjwyfngfbmiuwdlcurxdaxt ; /usr/bin/python3
Dec 05 08:17:31 np0005546420.localdomain sudo[64455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:31 np0005546420.localdomain podman[64457]: 2025-12-05 08:17:31.350606589 +0000 UTC m=+0.058167485 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 05 08:17:31 np0005546420.localdomain podman[64457]: rsyslog
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 05 08:17:31 np0005546420.localdomain sudo[64455]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2.
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: Stopped rsyslog container.
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: Starting rsyslog container...
Dec 05 08:17:31 np0005546420.localdomain sudo[64509]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbbpzbmjxjbmbchqcqaxoznyjhkkbxeh ; /usr/bin/python3
Dec 05 08:17:31 np0005546420.localdomain sudo[64509]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: tmp-crun.FCRDHW.mount: Deactivated successfully.
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:31 np0005546420.localdomain podman[64511]: 2025-12-05 08:17:31.804454312 +0000 UTC m=+0.124589957 container init d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1761123044)
Dec 05 08:17:31 np0005546420.localdomain podman[64511]: 2025-12-05 08:17:31.811388057 +0000 UTC m=+0.131523712 container start d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-rsyslog, tcib_managed=true, vcs-type=git, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T22:49:49Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 05 08:17:31 np0005546420.localdomain podman[64511]: rsyslog
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: Started rsyslog container.
Dec 05 08:17:31 np0005546420.localdomain sudo[64531]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:31 np0005546420.localdomain sudo[64531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:31 np0005546420.localdomain sudo[64509]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:31 np0005546420.localdomain sudo[64531]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: libpod-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a.scope: Deactivated successfully.
Dec 05 08:17:31 np0005546420.localdomain podman[64536]: 2025-12-05 08:17:31.965574172 +0000 UTC m=+0.053932275 container died d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, release=1761123044, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 05 08:17:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:17:31 np0005546420.localdomain podman[64536]: 2025-12-05 08:17:31.994006514 +0000 UTC m=+0.082364577 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, container_name=rsyslog, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:17:32 np0005546420.localdomain podman[64561]: 2025-12-05 08:17:32.083893243 +0000 UTC m=+0.083221793 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vendor=Red Hat, Inc.)
Dec 05 08:17:32 np0005546420.localdomain podman[64561]: 2025-12-05 08:17:32.098298869 +0000 UTC m=+0.097627429 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, release=1761123044)
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:17:32 np0005546420.localdomain sudo[64604]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhvjlotyqovzafgkigikykkdeuhdortj ; /usr/bin/python3
Dec 05 08:17:32 np0005546420.localdomain podman[64567]: 2025-12-05 08:17:32.130709306 +0000 UTC m=+0.117487047 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z)
Dec 05 08:17:32 np0005546420.localdomain podman[64567]: rsyslog
Dec 05 08:17:32 np0005546420.localdomain sudo[64604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16-merged.mount: Deactivated successfully.
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a-userdata-shm.mount: Deactivated successfully.
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3.
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: Stopped rsyslog container.
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: Starting rsyslog container...
Dec 05 08:17:32 np0005546420.localdomain python3[64609]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005546420 step=3 update_config_hash_only=False
Dec 05 08:17:32 np0005546420.localdomain sudo[64604]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:32 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:32 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:32 np0005546420.localdomain podman[64610]: 2025-12-05 08:17:32.372838478 +0000 UTC m=+0.093817321 container init d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-rsyslog, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=rsyslog, config_id=tripleo_step3, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, architecture=x86_64, build-date=2025-11-18T22:49:49Z, tcib_managed=true)
Dec 05 08:17:32 np0005546420.localdomain podman[64610]: 2025-12-05 08:17:32.385346467 +0000 UTC m=+0.106325310 container start d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=rsyslog, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:17:32 np0005546420.localdomain podman[64610]: rsyslog
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: Started rsyslog container.
Dec 05 08:17:32 np0005546420.localdomain sudo[64629]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:32 np0005546420.localdomain sudo[64629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:32 np0005546420.localdomain sudo[64629]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: libpod-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a.scope: Deactivated successfully.
Dec 05 08:17:32 np0005546420.localdomain podman[64633]: 2025-12-05 08:17:32.546246529 +0000 UTC m=+0.052998516 container died d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, name=rhosp17/openstack-rsyslog, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64)
Dec 05 08:17:32 np0005546420.localdomain podman[64633]: 2025-12-05 08:17:32.569714378 +0000 UTC m=+0.076466315 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T22:49:49Z, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container)
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:17:32 np0005546420.localdomain sudo[64665]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtqpbetaempewzypcvfjkngwrkqncbdg ; /usr/bin/python3
Dec 05 08:17:32 np0005546420.localdomain sudo[64665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:32 np0005546420.localdomain podman[64646]: 2025-12-05 08:17:32.651819835 +0000 UTC m=+0.055228845 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, config_id=tripleo_step3, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:17:32 np0005546420.localdomain podman[64646]: rsyslog
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 05 08:17:32 np0005546420.localdomain python3[64673]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:17:32 np0005546420.localdomain sudo[64665]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4.
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: Stopped rsyslog container.
Dec 05 08:17:32 np0005546420.localdomain systemd[1]: Starting rsyslog container...
Dec 05 08:17:32 np0005546420.localdomain sudo[64698]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcufjymmchulnadvawunemeshbkqwdfh ; /usr/bin/python3
Dec 05 08:17:33 np0005546420.localdomain sudo[64698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:17:33 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:33 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 05 08:17:33 np0005546420.localdomain podman[64674]: 2025-12-05 08:17:33.057573155 +0000 UTC m=+0.137101645 container init d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3)
Dec 05 08:17:33 np0005546420.localdomain podman[64674]: 2025-12-05 08:17:33.067523743 +0000 UTC m=+0.147052223 container start d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-rsyslog-container, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 05 08:17:33 np0005546420.localdomain podman[64674]: rsyslog
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: Started rsyslog container.
Dec 05 08:17:33 np0005546420.localdomain sudo[64709]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:17:33 np0005546420.localdomain sudo[64709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:17:33 np0005546420.localdomain sudo[64709]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: libpod-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a.scope: Deactivated successfully.
Dec 05 08:17:33 np0005546420.localdomain python3[64702]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 05 08:17:33 np0005546420.localdomain sudo[64698]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:33 np0005546420.localdomain podman[64712]: 2025-12-05 08:17:33.230747819 +0000 UTC m=+0.054539834 container died d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, container_name=rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1761123044)
Dec 05 08:17:33 np0005546420.localdomain podman[64712]: 2025-12-05 08:17:33.25109874 +0000 UTC m=+0.074890735 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, container_name=rsyslog, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, url=https://www.redhat.com)
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16-merged.mount: Deactivated successfully.
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a-userdata-shm.mount: Deactivated successfully.
Dec 05 08:17:33 np0005546420.localdomain podman[64725]: 2025-12-05 08:17:33.320309828 +0000 UTC m=+0.046008069 container cleanup d03951315e9c1d3fe5e9191b56c35033136a8b30e85d18a90e1fca5bb34c692a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a03f6602210fb500978d9137df7e914f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 05 08:17:33 np0005546420.localdomain podman[64725]: rsyslog
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5.
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: Stopped rsyslog container.
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly.
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 05 08:17:33 np0005546420.localdomain systemd[1]: Failed to start rsyslog container.
Dec 05 08:17:46 np0005546420.localdomain sudo[64736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:17:46 np0005546420.localdomain sudo[64736]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:17:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:17:46 np0005546420.localdomain sudo[64736]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:46 np0005546420.localdomain sudo[64757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:17:46 np0005546420.localdomain sudo[64757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:17:46 np0005546420.localdomain systemd[1]: tmp-crun.KEQEPI.mount: Deactivated successfully.
Dec 05 08:17:46 np0005546420.localdomain podman[64750]: 2025-12-05 08:17:46.414815585 +0000 UTC m=+0.088230899 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:17:46 np0005546420.localdomain podman[64750]: 2025-12-05 08:17:46.60447171 +0000 UTC m=+0.277886984 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step1, distribution-scope=public, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:17:46 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:17:47 np0005546420.localdomain sudo[64757]: pam_unix(sudo:session): session closed for user root
Dec 05 08:17:47 np0005546420.localdomain sudo[64825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:17:47 np0005546420.localdomain sudo[64825]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:17:47 np0005546420.localdomain sudo[64825]: pam_unix(sudo:session): session closed for user root
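[editor's note] The three sudo entries above (which python3, the copied cephadm binary with gather-facts, then ls /etc/sysctl.d) are one pass of cephadm's periodic host inspection; the same trio recurs about a minute later at 08:18:47. A minimal sketch for pulling these audit records out of journal text, assuming only the line shape shown above (the regex and function name are mine, not part of any tool in this log):

import re

# Matches the sudo command-audit lines above, e.g.
#   "... sudo[64736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3"
SUDO_RE = re.compile(
    r"sudo\[(?P<pid>\d+)\]: (?P<caller>\S+) : PWD=(?P<pwd>\S+) ; "
    r"USER=(?P<target>\S+) ; COMMAND=(?P<command>.+)$"
)

def sudo_commands(lines):
    """Yield (caller, target_user, command) for each sudo audit line."""
    for line in lines:
        m = SUDO_RE.search(line)
        if m:
            yield m.group("caller"), m.group("target"), m.group("command")

if __name__ == "__main__":
    sample = (
        "Dec 05 08:17:46 np0005546420.localdomain sudo[64736]: ceph-admin : "
        "PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3"
    )
    for caller, target, cmd in sudo_commands([sample]):
        print(f"{caller} -> {target}: {cmd}")

Run against journalctl output, this would list every privileged command the ceph-admin orchestration user executed, which is how the gather-facts cadence above was spotted.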
Dec 05 08:18:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:18:01 np0005546420.localdomain podman[64840]: 2025-12-05 08:18:01.469976193 +0000 UTC m=+0.056199055 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 05 08:18:01 np0005546420.localdomain podman[64840]: 2025-12-05 08:18:01.507423065 +0000 UTC m=+0.093645957 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, url=https://www.redhat.com, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container)
Dec 05 08:18:01 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:18:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:18:02 np0005546420.localdomain systemd[1]: tmp-crun.FJEtBr.mount: Deactivated successfully.
Dec 05 08:18:02 np0005546420.localdomain podman[64861]: 2025-12-05 08:18:02.499784928 +0000 UTC m=+0.079243911 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team)
Dec 05 08:18:02 np0005546420.localdomain podman[64861]: 2025-12-05 08:18:02.512367148 +0000 UTC m=+0.091826131 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, container_name=iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12)
Dec 05 08:18:02 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
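[editor's note] Each healthcheck follows the same four-line cycle: systemd starts a transient "/usr/bin/podman healthcheck run" unit, podman logs a health_status event carrying the container's full label set, an exec_died event follows, and systemd reports the unit "Deactivated successfully". A sketch that tallies health results per container name from lines like these, assuming the name= and health_status= labels always follow image= inside the parentheses, as they do in every event above (HEALTH_RE and tally_health are my names):

import re
from collections import Counter

# Matches podman health_status events like the ones above, e.g.
#   "... container health_status <id> (image=..., name=collectd, health_status=healthy, ...)"
HEALTH_RE = re.compile(
    r"container health_status \S+ \(image=[^,]+, "
    r"name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
)

def tally_health(lines):
    """Count (container name, health status) pairs across the journal."""
    counts = Counter()
    for line in lines:
        m = HEALTH_RE.search(line)
        if m:
            counts[(m.group("name"), m.group("status"))] += 1
    return counts

In this section every probe for metrics_qdr, collectd, and iscsid reports health_status=healthy, so such a tally is mainly useful for spotting the first unhealthy result in a longer capture.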
Dec 05 08:18:14 np0005546420.localdomain sshd[64879]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:18:17 np0005546420.localdomain sshd[64879]: Connection reset by authenticating user root 91.202.233.33 port 34610 [preauth]
Dec 05 08:18:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:18:17 np0005546420.localdomain podman[64881]: 2025-12-05 08:18:17.29354687 +0000 UTC m=+0.086130002 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, version=17.1.12, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, release=1761123044, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4)
Dec 05 08:18:17 np0005546420.localdomain sshd[64910]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:18:17 np0005546420.localdomain podman[64881]: 2025-12-05 08:18:17.469890612 +0000 UTC m=+0.262473734 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container)
Dec 05 08:18:17 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:18:19 np0005546420.localdomain sshd[64910]: Invalid user user from 91.202.233.33 port 34620
Dec 05 08:18:19 np0005546420.localdomain sshd[64910]: Connection reset by invalid user user 91.202.233.33 port 34620 [preauth]
Dec 05 08:18:20 np0005546420.localdomain sshd[64912]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:18:21 np0005546420.localdomain sshd[64912]: Connection reset by authenticating user root 91.202.233.33 port 34634 [preauth]
Dec 05 08:18:22 np0005546420.localdomain sshd[64914]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:18:24 np0005546420.localdomain sshd[64914]: Connection reset by authenticating user root 91.202.233.33 port 23636 [preauth]
Dec 05 08:18:24 np0005546420.localdomain sshd[64916]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:18:26 np0005546420.localdomain sshd[64916]: Connection reset by authenticating user root 91.202.233.33 port 23652 [preauth]
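[editor's note] The sshd lines interleaved above are unrelated to the deployment: a single source, 91.202.233.33, repeatedly connects, fails preauth as root (and once as the invalid user "user"), and resets the connection, which is the usual signature of an SSH password-guessing scan. A sketch that counts these preauth resets per source address and attempted account, assuming the exact message formats shown (regex and helper names are mine):

import re
from collections import Counter

# Matches the sshd preauth failures above, e.g.
#   "Connection reset by authenticating user root 91.202.233.33 port 34610 [preauth]"
#   "Connection reset by invalid user user 91.202.233.33 port 34620 [preauth]"
PREAUTH_RE = re.compile(
    r"Connection reset by (?:authenticating|invalid) user (?P<user>\S+) "
    r"(?P<ip>\d{1,3}(?:\.\d{1,3}){3}) port \d+ \[preauth\]"
)

def preauth_sources(lines):
    """Count preauth connection resets per (source IP, attempted user)."""
    counts = Counter()
    for line in lines:
        m = PREAUTH_RE.search(line)
        if m:
            counts[(m.group("ip"), m.group("user"))] += 1
    return counts

Fed the lines above, this yields four resets for root and one for the invalid user "user", all from 91.202.233.33.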
Dec 05 08:18:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:18:32 np0005546420.localdomain systemd[1]: tmp-crun.FPYtPB.mount: Deactivated successfully.
Dec 05 08:18:32 np0005546420.localdomain podman[64918]: 2025-12-05 08:18:32.505303967 +0000 UTC m=+0.079401456 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-collectd, container_name=collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:18:32 np0005546420.localdomain podman[64918]: 2025-12-05 08:18:32.517273918 +0000 UTC m=+0.091371437 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:18:32 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:18:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:18:32 np0005546420.localdomain podman[64937]: 2025-12-05 08:18:32.614176724 +0000 UTC m=+0.062991995 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z)
Dec 05 08:18:32 np0005546420.localdomain podman[64937]: 2025-12-05 08:18:32.625192166 +0000 UTC m=+0.074007417 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 05 08:18:32 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:18:47 np0005546420.localdomain sudo[64957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:18:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:18:47 np0005546420.localdomain sudo[64957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:18:47 np0005546420.localdomain sudo[64957]: pam_unix(sudo:session): session closed for user root
Dec 05 08:18:47 np0005546420.localdomain podman[64971]: 2025-12-05 08:18:47.937363716 +0000 UTC m=+0.081962584 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=)
Dec 05 08:18:47 np0005546420.localdomain sudo[64978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:18:47 np0005546420.localdomain sudo[64978]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:18:48 np0005546420.localdomain podman[64971]: 2025-12-05 08:18:48.127540957 +0000 UTC m=+0.272139845 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:18:48 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:18:48 np0005546420.localdomain sudo[64978]: pam_unix(sudo:session): session closed for user root
Dec 05 08:18:49 np0005546420.localdomain sudo[65047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:18:49 np0005546420.localdomain sudo[65047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:18:49 np0005546420.localdomain sudo[65047]: pam_unix(sudo:session): session closed for user root
Dec 05 08:19:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:19:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:19:03 np0005546420.localdomain systemd[1]: tmp-crun.Vr7LYr.mount: Deactivated successfully.
Dec 05 08:19:03 np0005546420.localdomain podman[65063]: 2025-12-05 08:19:03.535993125 +0000 UTC m=+0.109700984 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, release=1761123044, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_id=tripleo_step3, vendor=Red Hat, Inc., container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1)
Dec 05 08:19:03 np0005546420.localdomain podman[65063]: 2025-12-05 08:19:03.572460667 +0000 UTC m=+0.146168506 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, container_name=iscsid, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12)
Dec 05 08:19:03 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:19:03 np0005546420.localdomain podman[65062]: 2025-12-05 08:19:03.626247217 +0000 UTC m=+0.201923977 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step3, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:19:03 np0005546420.localdomain podman[65062]: 2025-12-05 08:19:03.638347262 +0000 UTC m=+0.214024002 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:19:03 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:19:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:19:18 np0005546420.localdomain podman[65100]: 2025-12-05 08:19:18.505478254 +0000 UTC m=+0.081985985 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=metrics_qdr, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 05 08:19:18 np0005546420.localdomain podman[65100]: 2025-12-05 08:19:18.730499116 +0000 UTC m=+0.307006837 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:19:18 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
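[editor's note] The Started timestamps make the probe cadence visible: metrics_qdr is checked at 08:17:46, 08:18:17, 08:18:47, and 08:19:18, i.e. roughly every 30 seconds, and collectd and iscsid follow the same rhythm on their own schedule. A sketch that computes those gaps from the systemd Started lines; the syslog timestamp carries no year, so the 2025 default below is taken from the podman ISO timestamps in the same log (regex and helper are my own):

import re
from datetime import datetime

# Matches the transient healthcheck units above, e.g.
#   "Dec 05 08:19:18 ... systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6...e84."
START_RE = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}) \S+ systemd\[1\]: "
    r"Started /usr/bin/podman healthcheck run (?P<cid>[0-9a-f]{64})"
)

def healthcheck_gaps(lines, year=2025):
    """Return seconds between consecutive healthcheck starts, keyed by container id."""
    last, gaps = {}, {}
    for line in lines:
        m = START_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m.group('ts')}", "%Y %b %d %H:%M:%S")
        cid = m.group("cid")
        if cid in last:
            gaps.setdefault(cid, []).append((ts - last[cid]).total_seconds())
        last[cid] = ts
    return gaps

Against this section it would report 30-31 second gaps for container 89e5d6c0... (metrics_qdr), consistent with a healthcheck interval of about 30 seconds.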
Dec 05 08:19:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:19:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:19:34 np0005546420.localdomain podman[65129]: 2025-12-05 08:19:34.524434386 +0000 UTC m=+0.099859919 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, version=17.1.12, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:19:34 np0005546420.localdomain podman[65129]: 2025-12-05 08:19:34.562105495 +0000 UTC m=+0.137531088 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, version=17.1.12)
Dec 05 08:19:34 np0005546420.localdomain systemd[1]: tmp-crun.ZlEGQQ.mount: Deactivated successfully.
Dec 05 08:19:34 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:19:34 np0005546420.localdomain podman[65130]: 2025-12-05 08:19:34.567462901 +0000 UTC m=+0.139538461 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, architecture=x86_64, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_id=tripleo_step3, container_name=iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4)
Dec 05 08:19:34 np0005546420.localdomain podman[65130]: 2025-12-05 08:19:34.651487469 +0000 UTC m=+0.223563029 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:19:34 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:19:49 np0005546420.localdomain sudo[65168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:19:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:19:49 np0005546420.localdomain sudo[65168]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:19:49 np0005546420.localdomain sudo[65168]: pam_unix(sudo:session): session closed for user root
Dec 05 08:19:49 np0005546420.localdomain sudo[65189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:19:49 np0005546420.localdomain sudo[65189]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:19:49 np0005546420.localdomain podman[65182]: 2025-12-05 08:19:49.421051474 +0000 UTC m=+0.084789363 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1)
Dec 05 08:19:49 np0005546420.localdomain podman[65182]: 2025-12-05 08:19:49.609445239 +0000 UTC m=+0.273183058 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:19:49 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:19:49 np0005546420.localdomain sudo[65189]: pam_unix(sudo:session): session closed for user root
Dec 05 08:19:51 np0005546420.localdomain sudo[65258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:19:51 np0005546420.localdomain sudo[65258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:19:51 np0005546420.localdomain sudo[65258]: pam_unix(sudo:session): session closed for user root
Dec 05 08:20:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:20:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:20:05 np0005546420.localdomain podman[65274]: 2025-12-05 08:20:05.496341592 +0000 UTC m=+0.071727347 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:20:05 np0005546420.localdomain podman[65274]: 2025-12-05 08:20:05.530375308 +0000 UTC m=+0.105761033 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step3, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 05 08:20:05 np0005546420.localdomain podman[65273]: 2025-12-05 08:20:05.479756947 +0000 UTC m=+0.060622541 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, release=1761123044)
Dec 05 08:20:05 np0005546420.localdomain podman[65273]: 2025-12-05 08:20:05.565480047 +0000 UTC m=+0.146345671 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, architecture=x86_64, tcib_managed=true)
Dec 05 08:20:05 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:20:05 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:20:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:20:20 np0005546420.localdomain podman[65312]: 2025-12-05 08:20:20.502369893 +0000 UTC m=+0.082536233 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team)
Dec 05 08:20:20 np0005546420.localdomain podman[65312]: 2025-12-05 08:20:20.673274276 +0000 UTC m=+0.253440496 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1)
Dec 05 08:20:20 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:20:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:20:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:20:36 np0005546420.localdomain systemd[1]: tmp-crun.E8B2I7.mount: Deactivated successfully.
Dec 05 08:20:36 np0005546420.localdomain podman[65339]: 2025-12-05 08:20:36.511484809 +0000 UTC m=+0.088307571 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=collectd, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, version=17.1.12)
Dec 05 08:20:36 np0005546420.localdomain podman[65339]: 2025-12-05 08:20:36.547352062 +0000 UTC m=+0.124174884 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 05 08:20:36 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:20:36 np0005546420.localdomain podman[65340]: 2025-12-05 08:20:36.566065853 +0000 UTC m=+0.140449919 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible)
Dec 05 08:20:36 np0005546420.localdomain podman[65340]: 2025-12-05 08:20:36.574207326 +0000 UTC m=+0.148591332 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, architecture=x86_64, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 05 08:20:36 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:20:51 np0005546420.localdomain sudo[65381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:20:51 np0005546420.localdomain sudo[65381]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:20:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:20:51 np0005546420.localdomain sudo[65381]: pam_unix(sudo:session): session closed for user root
Dec 05 08:20:51 np0005546420.localdomain sudo[65405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:20:51 np0005546420.localdomain systemd[1]: tmp-crun.NoSVxL.mount: Deactivated successfully.
Dec 05 08:20:51 np0005546420.localdomain sudo[65405]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:20:51 np0005546420.localdomain podman[65396]: 2025-12-05 08:20:51.40893221 +0000 UTC m=+0.084638081 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, container_name=metrics_qdr, release=1761123044, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:20:51 np0005546420.localdomain podman[65396]: 2025-12-05 08:20:51.582907569 +0000 UTC m=+0.258613400 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., container_name=metrics_qdr, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1)
Dec 05 08:20:51 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:20:52 np0005546420.localdomain sudo[65405]: pam_unix(sudo:session): session closed for user root
Dec 05 08:20:52 np0005546420.localdomain sudo[65470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:20:52 np0005546420.localdomain sudo[65470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:20:52 np0005546420.localdomain sudo[65470]: pam_unix(sudo:session): session closed for user root
Dec 05 08:20:52 np0005546420.localdomain sudo[65485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 08:20:52 np0005546420.localdomain sudo[65485]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:20:52 np0005546420.localdomain sudo[65485]: pam_unix(sudo:session): session closed for user root
Dec 05 08:20:57 np0005546420.localdomain sudo[65519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:20:57 np0005546420.localdomain sudo[65519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:20:57 np0005546420.localdomain sudo[65519]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:21:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:21:07 np0005546420.localdomain systemd[1]: tmp-crun.Mw80dw.mount: Deactivated successfully.
Dec 05 08:21:07 np0005546420.localdomain podman[65535]: 2025-12-05 08:21:07.509138861 +0000 UTC m=+0.085912401 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com)
Dec 05 08:21:07 np0005546420.localdomain podman[65535]: 2025-12-05 08:21:07.517860115 +0000 UTC m=+0.094633685 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, tcib_managed=true, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4)
Dec 05 08:21:07 np0005546420.localdomain podman[65536]: 2025-12-05 08:21:07.558845438 +0000 UTC m=+0.133337166 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, release=1761123044, io.openshift.expose-services=)
Dec 05 08:21:07 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:21:07 np0005546420.localdomain podman[65536]: 2025-12-05 08:21:07.595368842 +0000 UTC m=+0.169860580 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:21:07 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
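The two transient units above are podman healthcheck runners: systemd starts /usr/bin/podman healthcheck run <container-id>, podman execs the container's configured test command (/openstack/healthcheck, per the config_data in the events), records health_status=healthy, and the unit deactivates until the next interval. A minimal sketch of reproducing one check by hand, assuming the collectd container ID from the log is still live (the inspect format string is an assumption about podman's Docker-compatible metadata layout):

    # Run the configured healthcheck once; exit status 0 means healthy.
    podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 && echo healthy
    # Read back the test command the check executes (here /openstack/healthcheck).
    podman inspect --format '{{.Config.Healthcheck.Test}}' collectd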
Dec 05 08:21:18 np0005546420.localdomain sudo[65617]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mbuxqbxqrkkuljeizwvtwlfiowkexnjt ; /usr/bin/python3
Dec 05 08:21:18 np0005546420.localdomain sudo[65617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:19 np0005546420.localdomain python3[65619]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:21:19 np0005546420.localdomain sudo[65617]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:19 np0005546420.localdomain sudo[65662]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bxrjzyfrrwuegvstvoibesqpnzqsrnnp ; /usr/bin/python3
Dec 05 08:21:19 np0005546420.localdomain sudo[65662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:19 np0005546420.localdomain python3[65664]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922878.7255363-107427-228408436538647/source _original_basename=tmpn63f0d2p follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:21:19 np0005546420.localdomain sudo[65662]: pam_unix(sudo:session): session closed for user root
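The stat/copy pair above is Ansible's standard two-step file deployment: ansible.legacy.stat first checksums the destination, and ansible.legacy.copy rewrites it only when the sha1 differs (here ee48fb03...). The idempotence check can be replayed by hand with the path and checksum taken from the log:

    # Compare the deployed file against the checksum Ansible recorded.
    echo "ee48fb03297eb703b1954c8852d0f67fab51dac1  /etc/puppet/hieradata/config_step.json" | sha1sum -c -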
Dec 05 08:21:20 np0005546420.localdomain sudo[65724]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jnposoikwspaoatmqeotiitpwifkadju ; /usr/bin/python3
Dec 05 08:21:20 np0005546420.localdomain sudo[65724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:20 np0005546420.localdomain python3[65726]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:21:20 np0005546420.localdomain sudo[65724]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:20 np0005546420.localdomain sudo[65767]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjzvpwesymafqqwnkgpjxykplxejfbkk ; /usr/bin/python3
Dec 05 08:21:20 np0005546420.localdomain sudo[65767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:20 np0005546420.localdomain python3[65769]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922880.2654843-107587-125274933768922/source _original_basename=tmp6i7j2ifp follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:21:21 np0005546420.localdomain sudo[65767]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:21 np0005546420.localdomain sudo[65829]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-szrxthgikfpzzsaplderkpkgsodhfbur ; /usr/bin/python3
Dec 05 08:21:21 np0005546420.localdomain sudo[65829]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:21 np0005546420.localdomain python3[65831]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:21:21 np0005546420.localdomain sudo[65829]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:21 np0005546420.localdomain sudo[65872]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmtfpaglcommshpnisanbnrbkllyqmdl ; /usr/bin/python3
Dec 05 08:21:21 np0005546420.localdomain sudo[65872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:21:21 np0005546420.localdomain podman[65875]: 2025-12-05 08:21:21.85765237 +0000 UTC m=+0.082638818 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=metrics_qdr, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:21:21 np0005546420.localdomain python3[65874]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922881.225586-107641-55267393119581/source _original_basename=tmpas8ebjqx follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:21:21 np0005546420.localdomain sudo[65872]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:22 np0005546420.localdomain podman[65875]: 2025-12-05 08:21:22.081015636 +0000 UTC m=+0.306002064 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, release=1761123044, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr)
Dec 05 08:21:22 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:21:22 np0005546420.localdomain sudo[65963]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rgviwmgasrvhxkhpryndshnvpjujytgs ; /usr/bin/python3
Dec 05 08:21:22 np0005546420.localdomain sudo[65963]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:22 np0005546420.localdomain python3[65965]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:21:22 np0005546420.localdomain sudo[65963]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:22 np0005546420.localdomain sudo[66006]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tgsptknvunpukbyanhdfofkwtsmoyegf ; /usr/bin/python3
Dec 05 08:21:22 np0005546420.localdomain sudo[66006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:22 np0005546420.localdomain python3[66008]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922882.1475544-107701-86869829401858/source _original_basename=tmp1lj6kow9 follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:21:22 np0005546420.localdomain sudo[66006]: pam_unix(sudo:session): session closed for user root
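The copies at 08:21:20-08:21:22 deploy a recovery script plus a systemd service/timer pair for virtqemud. Only the unit names, the script path /usr/libexec/recover_tripleo_nova_virtqemud.sh, the 0755/0644 modes, and the 10-minute cadence (visible in the timer description started below) appear in the log; the unit bodies themselves are not logged, so the following is only a rough sketch of the usual shape of such a pair, not the actual file contents hidden behind the checksums:

    # Hypothetical reconstruction -- the real contents are not in this log.
    cat > /etc/systemd/system/tripleo_nova_virtqemud_recover.service <<'EOF'
    [Unit]
    Description=Check and recover tripleo_nova_virtqemud

    [Service]
    Type=oneshot
    ExecStart=/usr/libexec/recover_tripleo_nova_virtqemud.sh
    EOF
    cat > /etc/systemd/system/tripleo_nova_virtqemud_recover.timer <<'EOF'
    [Unit]
    Description=Check and recover tripleo_nova_virtqemud every 10m

    [Timer]
    OnActiveSec=10min
    OnUnitActiveSec=10min

    [Install]
    WantedBy=timers.target
    EOF
    systemctl daemon-reload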
Dec 05 08:21:23 np0005546420.localdomain sudo[66036]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ddlnwcvukhddmlfslbnytvgwtdunuhff ; /usr/bin/python3
Dec 05 08:21:23 np0005546420.localdomain sudo[66036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:23 np0005546420.localdomain python3[66038]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 08:21:23 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:23 np0005546420.localdomain systemd-rc-local-generator[66062]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:23 np0005546420.localdomain systemd-sysv-generator[66067]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:23 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
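The three messages above recur after every "Reloading." in this section; they come from unit generators that run on each daemon-reload and are unrelated to the units being installed here: rc.local is skipped because it lacks the executable bit, the SysV network initscript gets an auto-generated compatibility unit, and insights-client-boot.service still uses the deprecated MemoryLimit= directive (renamed MemoryMax= in current systemd). The flagged directive can be confirmed directly:

    # Show the line systemd is warning about (line 24 of the unit file).
    grep -n 'MemoryLimit' /usr/lib/systemd/system/insights-client-boot.service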
Dec 05 08:21:24 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:24 np0005546420.localdomain systemd-rc-local-generator[66105]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:24 np0005546420.localdomain systemd-sysv-generator[66109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:21:24 np0005546420.localdomain sudo[66036]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:24 np0005546420.localdomain sudo[66127]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wslqtsduwszcvfdhvcahpeudwpperfgr ; /usr/bin/python3
Dec 05 08:21:24 np0005546420.localdomain sudo[66127]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:24 np0005546420.localdomain python3[66129]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:21:24 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:24 np0005546420.localdomain systemd-rc-local-generator[66151]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:24 np0005546420.localdomain systemd-sysv-generator[66155]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:21:25 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:25 np0005546420.localdomain systemd-sysv-generator[66199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:25 np0005546420.localdomain systemd-rc-local-generator[66195]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:21:25 np0005546420.localdomain systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m.
Dec 05 08:21:25 np0005546420.localdomain sudo[66127]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:25 np0005546420.localdomain sudo[66218]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hnfdgwchvlymnihddxprvkfufndhfejp ; /usr/bin/python3
Dec 05 08:21:25 np0005546420.localdomain sudo[66218]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:25 np0005546420.localdomain python3[66220]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:21:25 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:25 np0005546420.localdomain systemd-rc-local-generator[66245]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:25 np0005546420.localdomain systemd-sysv-generator[66250]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:21:26 np0005546420.localdomain sudo[66218]: pam_unix(sudo:session): session closed for user root
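The raw systemctl enable --now at 08:21:25 is effectively redundant with the ansible-systemd task just before it (enabled=True, state=restarted already enabled and started the timer); it re-runs the enablement defensively. Verifying the resulting timer afterwards would look like:

    # Confirm the recovery timer is loaded, enabled, and scheduled.
    systemctl status tripleo_nova_virtqemud_recover.timer --no-pager
    systemctl list-timers 'tripleo_nova_virtqemud_recover*' --no-pager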
Dec 05 08:21:26 np0005546420.localdomain sudo[66302]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vyqntdibwqphxzgykojiwyndpcubxcqs ; /usr/bin/python3
Dec 05 08:21:26 np0005546420.localdomain sudo[66302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:26 np0005546420.localdomain python3[66304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:21:26 np0005546420.localdomain sudo[66302]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:26 np0005546420.localdomain sudo[66345]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kemlrqtsyfewrpjeerurumwxybhnbqrs ; /usr/bin/python3
Dec 05 08:21:26 np0005546420.localdomain sudo[66345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:26 np0005546420.localdomain python3[66347]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922886.2030914-107803-99716237451117/source _original_basename=tmpn_gfg21y follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:21:26 np0005546420.localdomain sudo[66345]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:27 np0005546420.localdomain sudo[66375]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-evdspjsdmccdgyckreqwuylqkkttstzs ; /usr/bin/python3
Dec 05 08:21:27 np0005546420.localdomain sudo[66375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:27 np0005546420.localdomain python3[66377]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:21:27 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:27 np0005546420.localdomain systemd-rc-local-generator[66401]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:27 np0005546420.localdomain systemd-sysv-generator[66405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:21:27 np0005546420.localdomain systemd[1]: Reached target tripleo_nova_libvirt.target.
Dec 05 08:21:27 np0005546420.localdomain sudo[66375]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:27 np0005546420.localdomain sudo[66430]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ndsepsundlzrtintgfjzkknpzfzzlsdg ; /usr/bin/python3
Dec 05 08:21:27 np0005546420.localdomain sudo[66430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:28 np0005546420.localdomain python3[66432]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:21:28 np0005546420.localdomain sudo[66430]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:28 np0005546420.localdomain sudo[66480]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlzhpaujbopwdufbrmerzvrvuzyefsrx ; /usr/bin/python3
Dec 05 08:21:28 np0005546420.localdomain sudo[66480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:28 np0005546420.localdomain sudo[66480]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:28 np0005546420.localdomain sudo[66498]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ogshthsyozwnnjqvmpsekipglbvpscii ; /usr/bin/python3
Dec 05 08:21:28 np0005546420.localdomain sudo[66498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:29 np0005546420.localdomain sudo[66498]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:29 np0005546420.localdomain sudo[66602]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zyqdnawvyodxlpmxlbuykvwqgkcpvbiq ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922889.1940062-107900-271244976348059/async_wrapper.py 510648958381 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922889.1940062-107900-271244976348059/AnsiballZ_command.py _
Dec 05 08:21:29 np0005546420.localdomain sudo[66602]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 05 08:21:29 np0005546420.localdomain ansible-async_wrapper.py[66604]: Invoked with 510648958381 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922889.1940062-107900-271244976348059/AnsiballZ_command.py _
Dec 05 08:21:29 np0005546420.localdomain ansible-async_wrapper.py[66607]: Starting module and watcher
Dec 05 08:21:29 np0005546420.localdomain ansible-async_wrapper.py[66607]: Start watching 66608 (3600)
Dec 05 08:21:29 np0005546420.localdomain ansible-async_wrapper.py[66608]: Start module (66608)
Dec 05 08:21:29 np0005546420.localdomain ansible-async_wrapper.py[66604]: Return async_wrapper task started.
Dec 05 08:21:29 np0005546420.localdomain sudo[66602]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:29 np0005546420.localdomain sudo[66623]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhtglcjzepzdslobzkbhwpqgjguarixw ; /usr/bin/python3
Dec 05 08:21:29 np0005546420.localdomain sudo[66623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:29 np0005546420.localdomain python3[66625]: ansible-ansible.legacy.async_status Invoked with jid=510648958381.66604 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:21:29 np0005546420.localdomain sudo[66623]: pam_unix(sudo:session): session closed for user root
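The async_wrapper/async_status exchange above is Ansible's fire-and-forget pattern: the task is launched with a 3600-second async budget, the wrapper prints a job id (510648958381.66604, i.e. jid.pid of the wrapper at pid 66604) and forks a watcher that logs "still running" ticks, and the controller polls async_status until the module completes. The ad-hoc equivalent, sketched with an illustrative host and command that are not taken from this log:

    # -B sets the async timeout in seconds, -P 0 means fire-and-forget.
    ansible localhost -m command -a 'sleep 30' -B 3600 -P 0
    # Poll the returned job id until finished=1.
    ansible localhost -m async_status -a 'jid=<JID_FROM_PREVIOUS_OUTPUT>'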
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:    (file & line not available)
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:    (file & line not available)
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:                     with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:                     with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:                     with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:                     with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:                     with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 05 08:21:33 np0005546420.localdomain puppet-user[66628]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.20 seconds
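The puppet-user warnings above (hiera.yaml v3 deprecation, undefined ::deploy_config_name and ::deployment_type, and the stdlib validate_* deprecations raised by the snmp module) are all non-fatal; the catalog still compiles in 0.20 seconds. This is the containerized puppet run applying the step manifest named in the warnings; a hedged approximation of the invocation, since the exact flags are not logged:

    # Approximate form of the apply run (actual flags not shown in this log).
    puppet apply --modulepath /etc/puppet/modules /var/lib/tripleo-config/puppet_step_config.pp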
Dec 05 08:21:34 np0005546420.localdomain ansible-async_wrapper.py[66607]: 66608 still running (3600)
Dec 05 08:21:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:21:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:21:38 np0005546420.localdomain systemd[1]: tmp-crun.tI4zxL.mount: Deactivated successfully.
Dec 05 08:21:38 np0005546420.localdomain podman[66749]: 2025-12-05 08:21:38.517405315 +0000 UTC m=+0.090962139 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, container_name=collectd)
Dec 05 08:21:38 np0005546420.localdomain podman[66749]: 2025-12-05 08:21:38.554525538 +0000 UTC m=+0.128082422 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, config_id=tripleo_step3, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:21:38 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:21:38 np0005546420.localdomain podman[66750]: 2025-12-05 08:21:38.577206458 +0000 UTC m=+0.147631334 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 05 08:21:38 np0005546420.localdomain podman[66750]: 2025-12-05 08:21:38.612359119 +0000 UTC m=+0.182783975 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:21:38 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:21:39 np0005546420.localdomain ansible-async_wrapper.py[66607]: 66608 still running (3595)
Dec 05 08:21:40 np0005546420.localdomain sudo[66858]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-losrcmxwhozwsmipmjochonoegkilabn ; /usr/bin/python3
Dec 05 08:21:40 np0005546420.localdomain sudo[66858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:40 np0005546420.localdomain python3[66863]: ansible-ansible.legacy.async_status Invoked with jid=510648958381.66604 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:21:40 np0005546420.localdomain sudo[66858]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:42 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 08:21:42 np0005546420.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 05 08:21:42 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:42 np0005546420.localdomain systemd-sysv-generator[66981]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:42 np0005546420.localdomain systemd-rc-local-generator[66977]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:42 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:21:42 np0005546420.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 08:21:42 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 08:21:42 np0005546420.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 05 08:21:42 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.005s CPU time.
Dec 05 08:21:42 np0005546420.localdomain systemd[1]: run-r11541edf975a4b838a731e7733689f58.service: Deactivated successfully.
Dec 05 08:21:43 np0005546420.localdomain puppet-user[66628]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created
Dec 05 08:21:43 np0005546420.localdomain puppet-user[66628]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}c9e108a71773dbe98d6c0a6302d6d222ce3bb184ea5c89b4bfc4d96b778d00f7'
Dec 05 08:21:43 np0005546420.localdomain puppet-user[66628]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd'
Dec 05 08:21:43 np0005546420.localdomain puppet-user[66628]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea'
Dec 05 08:21:43 np0005546420.localdomain puppet-user[66628]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97'
Dec 05 08:21:43 np0005546420.localdomain puppet-user[66628]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events
Dec 05 08:21:44 np0005546420.localdomain ansible-async_wrapper.py[66607]: 66608 still running (3590)
Dec 05 08:21:48 np0005546420.localdomain puppet-user[66628]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully
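The Exec[create-snmpv3-user-ro_snmp_user] resource above creates the SNMPv3 read-only user. The standard net-snmp helper for this is net-snmp-create-v3-user, which must run while snmpd is stopped (consistent with snmpd only being started afterwards at 08:21:49); the passwords below are placeholders and the exact options TripleO passes are an assumption:

    systemctl stop snmpd                    # the tool requires snmpd to be down
    net-snmp-create-v3-user -ro -a SHA -A '<auth-pass>' -x AES -X '<priv-pass>' ro_snmp_user
    systemctl start snmpd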
Dec 05 08:21:48 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:48 np0005546420.localdomain systemd-sysv-generator[68001]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:48 np0005546420.localdomain systemd-rc-local-generator[67997]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:21:49 np0005546420.localdomain systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Dec 05 08:21:49 np0005546420.localdomain snmpd[68010]: Can't find directory of RPM packages
Dec 05 08:21:49 np0005546420.localdomain snmpd[68010]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB
Dec 05 08:21:49 np0005546420.localdomain systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
Dec 05 08:21:49 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:49 np0005546420.localdomain systemd-rc-local-generator[68033]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:49 np0005546420.localdomain systemd-sysv-generator[68039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:21:49 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:49 np0005546420.localdomain ansible-async_wrapper.py[66607]: 66608 still running (3585)
Dec 05 08:21:49 np0005546420.localdomain systemd-sysv-generator[68075]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:49 np0005546420.localdomain systemd-rc-local-generator[68071]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]: Notice: Applied catalog in 16.41 seconds
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]: Application:
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:    Initial environment: production
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:    Converged environment: production
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:          Run mode: user
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]: Changes:
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:             Total: 8
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]: Events:
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:           Success: 8
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:             Total: 8
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]: Resources:
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:         Restarted: 1
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:           Changed: 8
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:       Out of sync: 8
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:             Total: 19
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]: Time:
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:        Filebucket: 0.00
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:          Schedule: 0.00
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:            Augeas: 0.01
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:              File: 0.08
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:    Config retrieval: 0.26
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:           Service: 1.24
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:    Transaction evaluation: 16.40
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:    Catalog application: 16.41
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:          Last run: 1764922909
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:              Exec: 5.07
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:           Package: 9.86
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:             Total: 16.41
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]: Version:
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:            Config: 1764922893
Dec 05 08:21:49 np0005546420.localdomain puppet-user[66628]:            Puppet: 7.10.0
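The block above is puppet's post-run summary: 8 of 19 resources changed, 16.41 s of catalog application, one service restarted. The same figures persist on disk and can be re-read after the fact; the path is resolved through puppet itself rather than assumed:

    statedir="$(puppet config print statedir)"
    cat "$statedir/last_run_summary.yaml"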
Dec 05 08:21:50 np0005546420.localdomain ansible-async_wrapper.py[66608]: Module complete (66608)
Dec 05 08:21:50 np0005546420.localdomain sudo[68098]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjobhafdywcnsamzpnglcbmydywyawie ; /usr/bin/python3
Dec 05 08:21:50 np0005546420.localdomain sudo[68098]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:50 np0005546420.localdomain python3[68100]: ansible-ansible.legacy.async_status Invoked with jid=510648958381.66604 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:21:50 np0005546420.localdomain sudo[68098]: pam_unix(sudo:session): session closed for user root
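These lines are the polling half of Ansible's async pattern: the play fired the long-running task earlier under a job id, async_wrapper reports "Module complete", and async_status reads the result file under _async_dir. The same flow ad hoc (host pattern and jid are placeholders):

    # -B: allow up to 3600 s in the background; -P 0: return a jid immediately instead of waiting
    ansible np0005546420 -b -m ansible.builtin.command -a 'sleep 300' -B 3600 -P 0
    ansible np0005546420 -b -m ansible.builtin.async_status -a 'jid=<jid-from-the-call-above>'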
Dec 05 08:21:50 np0005546420.localdomain sudo[68114]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbwjrpoehzhoywuhnynsvkhtttshnfxm ; /usr/bin/python3
Dec 05 08:21:50 np0005546420.localdomain sudo[68114]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:51 np0005546420.localdomain python3[68116]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:21:51 np0005546420.localdomain sudo[68114]: pam_unix(sudo:session): session closed for user root
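The ansible-file task above stamps the svirt_sandbox_file_t context on the tree so containers may bind-mount and read it. A rough shell equivalent; chcon is transient, and a persistent mapping would go through semanage fcontext, which this log does not show:

    mkdir -p /var/lib/container-puppet/puppetlabs
    chcon -R -t svirt_sandbox_file_t -l s0 /var/lib/container-puppet/puppetlabs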
Dec 05 08:21:51 np0005546420.localdomain sudo[68130]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vqawipfnggrbvjjfsbcxdyvlbfdfvvjr ; /usr/bin/python3
Dec 05 08:21:51 np0005546420.localdomain sudo[68130]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:51 np0005546420.localdomain python3[68132]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:21:51 np0005546420.localdomain sudo[68130]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:51 np0005546420.localdomain sudo[68180]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdjolgutubznppuswhpyboxmoihfdxsi ; /usr/bin/python3
Dec 05 08:21:51 np0005546420.localdomain sudo[68180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:52 np0005546420.localdomain python3[68182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:21:52 np0005546420.localdomain sudo[68180]: pam_unix(sudo:session): session closed for user root
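The stat invocations here are the first half of Ansible's copy machinery: checksum the destination so an unchanged file is never rewritten. The check reduces to:

    sha1sum /var/lib/container-puppet/puppetlabs/facter.conf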
Dec 05 08:21:52 np0005546420.localdomain sudo[68198]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfvmrscqczttjglfobvwmmpybscmvwyj ; /usr/bin/python3
Dec 05 08:21:52 np0005546420.localdomain sudo[68198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:21:52 np0005546420.localdomain systemd[1]: tmp-crun.5tK3Y8.mount: Deactivated successfully.
Dec 05 08:21:52 np0005546420.localdomain podman[68201]: 2025-12-05 08:21:52.295875419 +0000 UTC m=+0.085298553 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team)
Dec 05 08:21:52 np0005546420.localdomain python3[68200]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp0i7it021 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:21:52 np0005546420.localdomain sudo[68198]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:52 np0005546420.localdomain podman[68201]: 2025-12-05 08:21:52.4839751 +0000 UTC m=+0.273398224 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible)
Dec 05 08:21:52 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
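The metrics_qdr lines bracket one transient healthcheck unit: systemd launches "podman healthcheck run <id>", podman logs health_status=healthy, the exec dies, and the unit deactivates. The same probe by hand (inspect format path as in podman 4.x):

    podman healthcheck run metrics_qdr && echo healthy
    podman inspect --format '{{.State.Health.Status}}' metrics_qdr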
Dec 05 08:21:52 np0005546420.localdomain sudo[68258]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gmheipychwhgswizuxtzryeqaplyvemf ; /usr/bin/python3
Dec 05 08:21:52 np0005546420.localdomain sudo[68258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:52 np0005546420.localdomain python3[68260]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:21:52 np0005546420.localdomain sudo[68258]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:52 np0005546420.localdomain sudo[68274]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxlxianzghtisvjgoaowbvexqwpfolhx ; /usr/bin/python3
Dec 05 08:21:52 np0005546420.localdomain sudo[68274]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:53 np0005546420.localdomain sudo[68274]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:53 np0005546420.localdomain sudo[68361]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jdlrxudjprkqwrwoxvylbzivnkklxyrb ; /usr/bin/python3
Dec 05 08:21:53 np0005546420.localdomain sudo[68361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:53 np0005546420.localdomain python3[68363]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 05 08:21:53 np0005546420.localdomain sudo[68361]: pam_unix(sudo:session): session closed for user root
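ansible.posix.synchronize is a thin wrapper over rsync; with archive=True and compress=True the push above reduces roughly to the following (trailing slashes matter: contents are copied, not the directory itself):

    rsync -az /opt/puppetlabs/ /var/lib/container-puppet/puppetlabs/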
Dec 05 08:21:54 np0005546420.localdomain sudo[68380]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-azmunjdrqovaqcnuwhbnaucaxsjrbjqa ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:54 np0005546420.localdomain sudo[68380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:54 np0005546420.localdomain python3[68382]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:21:54 np0005546420.localdomain sudo[68380]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:54 np0005546420.localdomain ansible-async_wrapper.py[66607]: Done in kid B.
Dec 05 08:21:54 np0005546420.localdomain sudo[68396]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzqjzrynucwxadxlmwnoqkxidlrtvqco ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:54 np0005546420.localdomain sudo[68396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:54 np0005546420.localdomain sudo[68396]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:55 np0005546420.localdomain sudo[68412]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omorhysahrmpklgqhvcewigptqsgmszi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:55 np0005546420.localdomain sudo[68412]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:55 np0005546420.localdomain python3[68414]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:21:55 np0005546420.localdomain sudo[68412]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:55 np0005546420.localdomain sudo[68462]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utyuxhaawwenrblxdfvsxdslglgkdntg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:55 np0005546420.localdomain sudo[68462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:55 np0005546420.localdomain python3[68464]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:21:55 np0005546420.localdomain sudo[68462]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:56 np0005546420.localdomain sudo[68480]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmnnsjkmlapeisvpjwgrqpyvqeehozua ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:56 np0005546420.localdomain sudo[68480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:56 np0005546420.localdomain python3[68482]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:21:56 np0005546420.localdomain sudo[68480]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:56 np0005546420.localdomain sudo[68542]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwqvtfckywexfxjjxruyzorccykkcdsz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:56 np0005546420.localdomain sudo[68542]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:56 np0005546420.localdomain python3[68544]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:21:56 np0005546420.localdomain sudo[68542]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:56 np0005546420.localdomain sudo[68560]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zzfkkdvcrrllqdvdgkeairkbncanxzer ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:56 np0005546420.localdomain sudo[68560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:56 np0005546420.localdomain python3[68562]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
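And these file invocations are the second half of the copy machinery: once content matches, enforce ownership and mode. For the task above that is simply:

    chown root:root /usr/libexec/tripleo-start-podman-container
    chmod 0700 /usr/libexec/tripleo-start-podman-container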
Dec 05 08:21:56 np0005546420.localdomain sudo[68560]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:57 np0005546420.localdomain sudo[68622]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xkqkikjzhbhjaazkaepckyncsuymsbhk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:57 np0005546420.localdomain sudo[68622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:57 np0005546420.localdomain python3[68624]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:21:57 np0005546420.localdomain sudo[68625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:21:57 np0005546420.localdomain sudo[68625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:21:57 np0005546420.localdomain sudo[68625]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:57 np0005546420.localdomain sudo[68622]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:57 np0005546420.localdomain sudo[68642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:21:57 np0005546420.localdomain sudo[68642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
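cephadm copies itself to /var/lib/ceph/<fsid>/cephadm.<sha256> and executes that checksummed snapshot; gather-facts emits host inventory as JSON. The packaged binary answers the same query, pretty-printed here for inspection:

    sudo cephadm gather-facts | python3 -m json.tool | head -n 20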
Dec 05 08:21:57 np0005546420.localdomain sudo[68670]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqqsxwdavmzrfieaxczmievkftdkmxzp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:57 np0005546420.localdomain sudo[68670]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:57 np0005546420.localdomain python3[68672]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:21:57 np0005546420.localdomain sudo[68670]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:58 np0005546420.localdomain sudo[68758]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qrharnafavmqamvlrknnczjwremmgvya ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:58 np0005546420.localdomain sudo[68758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:58 np0005546420.localdomain sudo[68642]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:58 np0005546420.localdomain python3[68766]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:21:58 np0005546420.localdomain sudo[68758]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:58 np0005546420.localdomain sudo[68782]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rqmpofsbwwgyhyoibvydostgxmvkshbd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:58 np0005546420.localdomain sudo[68782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:58 np0005546420.localdomain python3[68784]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
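A preset file declares the default enable/disable state that "systemctl preset" applies to a unit; the 91-tripleo-*.preset files installed here presumably each carry a single "enable" line (their content is assumed, as the log does not show it):

    cat /usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset
    systemctl preset tripleo-container-shutdown.service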
Dec 05 08:21:58 np0005546420.localdomain sudo[68782]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:58 np0005546420.localdomain sudo[68812]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-batkibuheptqwhmggrpmmwsvradccbqq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:58 np0005546420.localdomain sudo[68812]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:21:59 np0005546420.localdomain python3[68814]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
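The ansible-systemd task maps onto two systemctl calls; daemon_reload=True accounts for the "Reloading." that follows immediately:

    systemctl daemon-reload
    systemctl enable --now tripleo-container-shutdown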
Dec 05 08:21:59 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:21:59 np0005546420.localdomain systemd-rc-local-generator[68841]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:21:59 np0005546420.localdomain systemd-sysv-generator[68845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:21:59 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:21:59 np0005546420.localdomain sudo[68812]: pam_unix(sudo:session): session closed for user root
Dec 05 08:21:59 np0005546420.localdomain sudo[68898]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwxcpxtvgrmwigkxaaojombnirbvwekm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:21:59 np0005546420.localdomain sudo[68898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:00 np0005546420.localdomain python3[68900]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:22:00 np0005546420.localdomain sudo[68898]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:00 np0005546420.localdomain sudo[68916]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hqdgqexgkxrkklrnxrsuyqtvdwyhzfqs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:00 np0005546420.localdomain sudo[68916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:00 np0005546420.localdomain python3[68918]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:00 np0005546420.localdomain sudo[68916]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:00 np0005546420.localdomain sudo[68978]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xocrztdmujphuwnrmnlricjlznoabodm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:00 np0005546420.localdomain sudo[68978]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:00 np0005546420.localdomain python3[68980]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:22:00 np0005546420.localdomain sudo[68978]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:00 np0005546420.localdomain sudo[68996]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubflarvtuxelyhoxepybpownychahqxv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:00 np0005546420.localdomain sudo[68996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:01 np0005546420.localdomain python3[68998]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:01 np0005546420.localdomain sudo[68996]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:01 np0005546420.localdomain sudo[69013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:22:01 np0005546420.localdomain sudo[69013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:22:01 np0005546420.localdomain sudo[69013]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:01 np0005546420.localdomain sudo[69041]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xjbpewyykdvqpgpwacfplbupcsflgshs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:01 np0005546420.localdomain sudo[69041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:01 np0005546420.localdomain python3[69043]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:22:01 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:22:01 np0005546420.localdomain systemd-rc-local-generator[69066]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:22:01 np0005546420.localdomain systemd-sysv-generator[69073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:22:01 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:22:01 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 08:22:01 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 08:22:01 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 08:22:01 np0005546420.localdomain systemd[1]: Finished Create netns directory.
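"Finished" followed by "Deactivated successfully" is the signature of a oneshot unit: it runs its ExecStart (creating the netns directory), exits, and leaves nothing resident. Two read-only checks that would confirm this reading of the unit:

    systemctl cat netns-placeholder.service
    systemctl is-enabled netns-placeholder.service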
Dec 05 08:22:01 np0005546420.localdomain sudo[69041]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:02 np0005546420.localdomain sudo[69097]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uaercxvsfwkkdvvoelssrlrawduhoznh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:02 np0005546420.localdomain sudo[69097]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:02 np0005546420.localdomain python3[69099]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
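container_puppet_config with update_config_hash_only=True recomputes the TRIPLEO_CONFIG_HASH values under /var/lib/config-data; a changed hash is what later forces a container restart with fresh config. One way to read the hash a running container was started with:

    podman inspect --format '{{range .Config.Env}}{{println .}}{{end}}' metrics_qdr | grep TRIPLEO_CONFIG_HASH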
Dec 05 08:22:02 np0005546420.localdomain sudo[69097]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:02 np0005546420.localdomain sudo[69113]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdmwyanoolrdqjvfhttmwfakaywqlovv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:02 np0005546420.localdomain sudo[69113]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:03 np0005546420.localdomain sudo[69113]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:03 np0005546420.localdomain sudo[69155]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nkokgsznwfxfmrncuehdgcehkxxebiwr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:03 np0005546420.localdomain sudo[69155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:03 np0005546420.localdomain python3[69157]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
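tripleo_container_manage converges podman against one JSON definition per container taken from the step directory, five at a time per concurrency=5. The inputs are plain files; the file name below is a placeholder:

    ls /var/lib/tripleo-config/container-startup-config/step_4/
    python3 -m json.tool /var/lib/tripleo-config/container-startup-config/step_4/<container>.json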
Dec 05 08:22:04 np0005546420.localdomain podman[69314]: 2025-12-05 08:22:04.220251457 +0000 UTC m=+0.069600631 container create fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1)
Dec 05 08:22:04 np0005546420.localdomain podman[69321]: 2025-12-05 08:22:04.243231787 +0000 UTC m=+0.087504202 container create e65b0232b73bce3f06085ad4d896588a20c05f840104fc680eb85ec5dd892d3f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, batch=17.1_20251118.1, container_name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4)
Dec 05 08:22:04 np0005546420.localdomain podman[69326]: 2025-12-05 08:22:04.265298637 +0000 UTC m=+0.104529224 container create 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started libpod-conmon-fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.scope.
Dec 05 08:22:04 np0005546420.localdomain podman[69359]: 2025-12-05 08:22:04.278275984 +0000 UTC m=+0.085056714 container create 162a75551ba739cd4c6e1f915806d262fab80c7bd9d85c181e3d13b48d9fe544 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_libvirt_init_secret, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started libpod-conmon-e65b0232b73bce3f06085ad4d896588a20c05f840104fc680eb85ec5dd892d3f.scope.
Dec 05 08:22:04 np0005546420.localdomain podman[69314]: 2025-12-05 08:22:04.184972102 +0000 UTC m=+0.034321286 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:22:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05f78bfd8a8e2cb0bd70f6d604b3d8f88b9a205bde766603d2ab894593606d9/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:04 np0005546420.localdomain podman[69321]: 2025-12-05 08:22:04.192009412 +0000 UTC m=+0.036281857 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 05 08:22:04 np0005546420.localdomain podman[69326]: 2025-12-05 08:22:04.195329506 +0000 UTC m=+0.034560093 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started libpod-conmon-1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.scope.
Dec 05 08:22:04 np0005546420.localdomain podman[69333]: 2025-12-05 08:22:04.29920907 +0000 UTC m=+0.132551153 container create 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, batch=17.1_20251118.1, name=rhosp17/openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:22:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67c0121ab2e02c08e681d8a85898c08bf802edfec3fbfb45ad79be05f6aa5dc4/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:04 np0005546420.localdomain podman[69333]: 2025-12-05 08:22:04.208767117 +0000 UTC m=+0.042109220 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started libpod-conmon-162a75551ba739cd4c6e1f915806d262fab80c7bd9d85c181e3d13b48d9fe544.scope.
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started libpod-conmon-11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.scope.
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:22:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626528a751d21d59e66c79e0e8f19b9b9ae5356c5571af7f106b1aee9d855ee/merged/etc/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626528a751d21d59e66c79e0e8f19b9b9ae5356c5571af7f106b1aee9d855ee/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7626528a751d21d59e66c79e0e8f19b9b9ae5356c5571af7f106b1aee9d855ee/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:22:04 np0005546420.localdomain podman[69359]: 2025-12-05 08:22:04.333932027 +0000 UTC m=+0.140712767 container init 162a75551ba739cd4c6e1f915806d262fab80c7bd9d85c181e3d13b48d9fe544 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_libvirt_init_secret, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 05 08:22:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522dca5b0897edc142dfc46111f3114c06dbf23dda84b5305bf810fad13843cc/merged/var/log/containers supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:04 np0005546420.localdomain podman[69359]: 2025-12-05 08:22:04.233223393 +0000 UTC m=+0.040004143 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:22:04 np0005546420.localdomain podman[69326]: 2025-12-05 08:22:04.341699011 +0000 UTC m=+0.180929598 container init 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 08:22:04 np0005546420.localdomain sudo[69417]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:22:04 np0005546420.localdomain sudo[69417]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:22:04 np0005546420.localdomain podman[69333]: 2025-12-05 08:22:04.364281858 +0000 UTC m=+0.197623961 container init 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:22:04 np0005546420.localdomain podman[69326]: 2025-12-05 08:22:04.376611554 +0000 UTC m=+0.215842141 container start 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:22:04 np0005546420.localdomain python3[69157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2a14d146ce921397a1b78b68c853c045 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 05 08:22:04 np0005546420.localdomain sudo[69424]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:22:04 np0005546420.localdomain podman[69314]: 2025-12-05 08:22:04.382676374 +0000 UTC m=+0.232025568 container init fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:22:04 np0005546420.localdomain sudo[69424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:22:04 np0005546420.localdomain podman[69359]: 2025-12-05 08:22:04.39980418 +0000 UTC m=+0.206584910 container start 162a75551ba739cd4c6e1f915806d262fab80c7bd9d85c181e3d13b48d9fe544 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_libvirt_init_secret, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:22:04 np0005546420.localdomain podman[69359]: 2025-12-05 08:22:04.400003526 +0000 UTC m=+0.206784266 container attach 162a75551ba739cd4c6e1f915806d262fab80c7bd9d85c181e3d13b48d9fe544 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, container_name=nova_libvirt_init_secret)
Dec 05 08:22:04 np0005546420.localdomain sudo[69443]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:22:04 np0005546420.localdomain sudo[69443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:22:04 np0005546420.localdomain podman[69333]: 2025-12-05 08:22:04.422033546 +0000 UTC m=+0.255375619 container start 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044)
Dec 05 08:22:04 np0005546420.localdomain sudo[69417]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:04 np0005546420.localdomain python3[69157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:22:04 np0005546420.localdomain sudo[69424]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:04 np0005546420.localdomain crond[69423]: (CRON) STARTUP (1.5.7)
Dec 05 08:22:04 np0005546420.localdomain crond[69423]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 30% if used.)
Dec 05 08:22:04 np0005546420.localdomain crond[69423]: (CRON) INFO (running with inotify support)
Dec 05 08:22:04 np0005546420.localdomain sudo[69443]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:04 np0005546420.localdomain podman[69314]: 2025-12-05 08:22:04.457633051 +0000 UTC m=+0.306982225 container start fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Dec 05 08:22:04 np0005546420.localdomain python3[69157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2a14d146ce921397a1b78b68c853c045 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: libpod-162a75551ba739cd4c6e1f915806d262fab80c7bd9d85c181e3d13b48d9fe544.scope: Deactivated successfully.
Dec 05 08:22:04 np0005546420.localdomain podman[69450]: 2025-12-05 08:22:04.496744936 +0000 UTC m=+0.080991707 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron)
Dec 05 08:22:04 np0005546420.localdomain podman[69321]: 2025-12-05 08:22:04.5035682 +0000 UTC m=+0.347840615 container init e65b0232b73bce3f06085ad4d896588a20c05f840104fc680eb85ec5dd892d3f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, release=1761123044, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=configure_cms_options, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, version=17.1.12, url=https://www.redhat.com)
Dec 05 08:22:04 np0005546420.localdomain podman[69450]: 2025-12-05 08:22:04.506271884 +0000 UTC m=+0.090518655 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, container_name=logrotate_crond, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:22:04 np0005546420.localdomain podman[69425]: 2025-12-05 08:22:04.54889033 +0000 UTC m=+0.167650262 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public)
Dec 05 08:22:04 np0005546420.localdomain podman[69321]: 2025-12-05 08:22:04.559859664 +0000 UTC m=+0.404132079 container start e65b0232b73bce3f06085ad4d896588a20c05f840104fc680eb85ec5dd892d3f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller)
Dec 05 08:22:04 np0005546420.localdomain podman[69321]: 2025-12-05 08:22:04.56008173 +0000 UTC m=+0.404354165 container attach e65b0232b73bce3f06085ad4d896588a20c05f840104fc680eb85ec5dd892d3f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., container_name=configure_cms_options, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Dec 05 08:22:04 np0005546420.localdomain podman[69425]: 2025-12-05 08:22:04.561855005 +0000 UTC m=+0.180614917 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team)
Dec 05 08:22:04 np0005546420.localdomain podman[69425]: unhealthy
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Failed with result 'exit-code'.
Dec 05 08:22:04 np0005546420.localdomain ovs-vsctl[69542]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options
Dec 05 08:22:04 np0005546420.localdomain podman[69359]: 2025-12-05 08:22:04.600436894 +0000 UTC m=+0.407217634 container died 162a75551ba739cd4c6e1f915806d262fab80c7bd9d85c181e3d13b48d9fe544 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, container_name=nova_libvirt_init_secret, config_id=tripleo_step4, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: libpod-e65b0232b73bce3f06085ad4d896588a20c05f840104fc680eb85ec5dd892d3f.scope: Deactivated successfully.
Dec 05 08:22:04 np0005546420.localdomain podman[69321]: 2025-12-05 08:22:04.60574412 +0000 UTC m=+0.450016535 container died e65b0232b73bce3f06085ad4d896588a20c05f840104fc680eb85ec5dd892d3f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, container_name=configure_cms_options, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.)
Dec 05 08:22:04 np0005546420.localdomain podman[69467]: 2025-12-05 08:22:04.608999252 +0000 UTC m=+0.166702261 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, version=17.1.12, config_id=tripleo_step4, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64)
Dec 05 08:22:04 np0005546420.localdomain podman[69508]: 2025-12-05 08:22:04.668856466 +0000 UTC m=+0.159294739 container cleanup 162a75551ba739cd4c6e1f915806d262fab80c7bd9d85c181e3d13b48d9fe544 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=nova_libvirt_init_secret, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, release=1761123044, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:22:04 np0005546420.localdomain podman[69467]: 2025-12-05 08:22:04.671589132 +0000 UTC m=+0.229292161 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, container_name=ceilometer_agent_compute, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: libpod-conmon-162a75551ba739cd4c6e1f915806d262fab80c7bd9d85c181e3d13b48d9fe544.scope: Deactivated successfully.
Dec 05 08:22:04 np0005546420.localdomain podman[69467]: unhealthy
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Failed with result 'exit-code'.
Dec 05 08:22:04 np0005546420.localdomain podman[69548]: 2025-12-05 08:22:04.732617893 +0000 UTC m=+0.117011165 container cleanup e65b0232b73bce3f06085ad4d896588a20c05f840104fc680eb85ec5dd892d3f (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:22:04 np0005546420.localdomain systemd[1]: libpod-conmon-e65b0232b73bce3f06085ad4d896588a20c05f840104fc680eb85ec5dd892d3f.scope: Deactivated successfully.
Dec 05 08:22:04 np0005546420.localdomain python3[69157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi
Dec 05 08:22:04 np0005546420.localdomain python3[69157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=ac0f5be6f71e6f8c16cd05155c4b5429 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack
Dec 05 08:22:04 np0005546420.localdomain podman[69700]: 2025-12-05 08:22:04.995031742 +0000 UTC m=+0.062523019 container create a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public)
Dec 05 08:22:05 np0005546420.localdomain systemd[1]: Started libpod-conmon-a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.scope.
Dec 05 08:22:05 np0005546420.localdomain podman[69701]: 2025-12-05 08:22:05.036761309 +0000 UTC m=+0.101834121 container create ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=setup_ovs_manager, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']})
Dec 05 08:22:05 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:22:05 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81af632ac7b1bb30b73d3b843d9ead4231843a2eced4d0ef746349ae454b4194/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:05 np0005546420.localdomain podman[69700]: 2025-12-05 08:22:04.963511125 +0000 UTC m=+0.031002422 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 05 08:22:05 np0005546420.localdomain systemd[1]: Started libpod-conmon-ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e.scope.
Dec 05 08:22:05 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:22:05 np0005546420.localdomain podman[69701]: 2025-12-05 08:22:04.989706095 +0000 UTC m=+0.054778937 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 05 08:22:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:22:05 np0005546420.localdomain podman[69700]: 2025-12-05 08:22:05.091379999 +0000 UTC m=+0.158871306 container init a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-compute, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:22:05 np0005546420.localdomain podman[69701]: 2025-12-05 08:22:05.107781953 +0000 UTC m=+0.172854765 container init ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:14:25Z, version=17.1.12, container_name=setup_ovs_manager)
Dec 05 08:22:05 np0005546420.localdomain sudo[69737]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:22:05 np0005546420.localdomain sudo[69737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:22:05 np0005546420.localdomain podman[69701]: 2025-12-05 08:22:05.11788523 +0000 UTC m=+0.182958042 container start ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=setup_ovs_manager, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 05 08:22:05 np0005546420.localdomain podman[69701]: 2025-12-05 08:22:05.118139928 +0000 UTC m=+0.183212750 container attach ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, batch=17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc.)
Dec 05 08:22:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:22:05 np0005546420.localdomain podman[69700]: 2025-12-05 08:22:05.132355453 +0000 UTC m=+0.199846760 container start a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Dec 05 08:22:05 np0005546420.localdomain python3[69157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ac0f5be6f71e6f8c16cd05155c4b5429 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 05 08:22:05 np0005546420.localdomain sudo[69737]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:05 np0005546420.localdomain sshd[69768]: Server listening on 0.0.0.0 port 2022.
Dec 05 08:22:05 np0005546420.localdomain sshd[69768]: Server listening on :: port 2022.
Dec 05 08:22:05 np0005546420.localdomain podman[69742]: 2025-12-05 08:22:05.223098995 +0000 UTC m=+0.084350173 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target)
Dec 05 08:22:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f3c115c686a4e871b821c782f5b4cfb35a8dcd215e49958df3fb5148fa2c1e76-merged.mount: Deactivated successfully.
Dec 05 08:22:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e65b0232b73bce3f06085ad4d896588a20c05f840104fc680eb85ec5dd892d3f-userdata-shm.mount: Deactivated successfully.
Dec 05 08:22:05 np0005546420.localdomain sudo[69786]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp32xieads/privsep.sock
Dec 05 08:22:05 np0005546420.localdomain sudo[69786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 08:22:05 np0005546420.localdomain podman[69742]: 2025-12-05 08:22:05.553510133 +0000 UTC m=+0.414761311 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z)
Dec 05 08:22:05 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:22:05 np0005546420.localdomain kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 05 08:22:05 np0005546420.localdomain sudo[69786]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:22:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1800.1 total, 600.0 interval
                                                          Cumulative writes: 5161 writes, 23K keys, 5161 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5161 writes, 538 syncs, 9.59 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 64 writes, 108 keys, 64 commit groups, 1.0 writes per commit group, ingest: 0.03 MB, 0.00 MB/s
                                                          Interval WAL: 64 writes, 32 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 08:22:07 np0005546420.localdomain ovs-vsctl[69917]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: libpod-ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e.scope: Deactivated successfully.
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: libpod-ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e.scope: Consumed 2.937s CPU time.
Dec 05 08:22:08 np0005546420.localdomain podman[69918]: 2025-12-05 08:22:08.237536321 +0000 UTC m=+0.049550242 container died ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e-userdata-shm.mount: Deactivated successfully.
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-aff96401da9550f267dc9e7f47ea63cc3ba29a151559ef0a7447672bdf20407f-merged.mount: Deactivated successfully.
Dec 05 08:22:08 np0005546420.localdomain podman[69918]: 2025-12-05 08:22:08.278663209 +0000 UTC m=+0.090677100 container cleanup ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: libpod-conmon-ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e.scope: Deactivated successfully.
Dec 05 08:22:08 np0005546420.localdomain python3[69157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata
Dec 05 08:22:08 np0005546420.localdomain podman[70023]: 2025-12-05 08:22:08.683911341 +0000 UTC m=+0.078285593 container create dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4)
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Started libpod-conmon-dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.scope.
Dec 05 08:22:08 np0005546420.localdomain podman[70032]: 2025-12-05 08:22:08.726084782 +0000 UTC m=+0.106226198 container create 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:22:08 np0005546420.localdomain podman[70023]: 2025-12-05 08:22:08.649871755 +0000 UTC m=+0.044246057 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 05 08:22:08 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d567292acb3ce2216020d33f5af2ad32fea36c49bc00cd4399244553285869/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:08 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d567292acb3ce2216020d33f5af2ad32fea36c49bc00cd4399244553285869/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:08 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d567292acb3ce2216020d33f5af2ad32fea36c49bc00cd4399244553285869/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Started libpod-conmon-1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.scope.
Dec 05 08:22:08 np0005546420.localdomain podman[70032]: 2025-12-05 08:22:08.667038643 +0000 UTC m=+0.047180069 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:22:08 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/407e666a727972fae5871c994186b9ead4079502f92535d717006e20e7650b6a/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:08 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/407e666a727972fae5871c994186b9ead4079502f92535d717006e20e7650b6a/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:08 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/407e666a727972fae5871c994186b9ead4079502f92535d717006e20e7650b6a/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff)
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:22:08 np0005546420.localdomain podman[70023]: 2025-12-05 08:22:08.787918018 +0000 UTC m=+0.182292290 container init dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:22:08 np0005546420.localdomain podman[70032]: 2025-12-05 08:22:08.804518188 +0000 UTC m=+0.184659614 container init 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Dec 05 08:22:08 np0005546420.localdomain sudo[70083]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:22:08 np0005546420.localdomain sudo[70083]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:22:08 np0005546420.localdomain systemd-logind[762]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 05 08:22:08 np0005546420.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 05 08:22:08 np0005546420.localdomain podman[70051]: 2025-12-05 08:22:08.865030823 +0000 UTC m=+0.146294883 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64)
Dec 05 08:22:08 np0005546420.localdomain systemd[70110]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:22:08 np0005546420.localdomain podman[70023]: 2025-12-05 08:22:08.878740652 +0000 UTC m=+0.273114904 container start dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_metadata_agent, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 08:22:08 np0005546420.localdomain python3[69157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d6812e1160bfb2e956bcab4e760845cf --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 05 08:22:08 np0005546420.localdomain podman[70032]: 2025-12-05 08:22:08.891665638 +0000 UTC m=+0.271807034 container start 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, architecture=x86_64)
Dec 05 08:22:08 np0005546420.localdomain python3[69157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 05 08:22:08 np0005546420.localdomain sudo[70083]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:08 np0005546420.localdomain podman[70092]: 2025-12-05 08:22:08.957140888 +0000 UTC m=+0.123201010 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, tcib_managed=true)
Dec 05 08:22:08 np0005546420.localdomain podman[70092]: 2025-12-05 08:22:08.993740394 +0000 UTC m=+0.159800476 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 05 08:22:09 np0005546420.localdomain podman[70092]: unhealthy
Dec 05 08:22:09 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:22:09 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Queued start job for default target Main User Target.
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Created slice User Application Slice.
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Reached target Paths.
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Reached target Timers.
Dec 05 08:22:09 np0005546420.localdomain podman[70098]: 2025-12-05 08:22:08.923912097 +0000 UTC m=+0.079904243 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible)
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Starting D-Bus User Message Bus Socket...
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Starting Create User's Volatile Files and Directories...
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Listening on D-Bus User Message Bus Socket.
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Reached target Sockets.
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Finished Create User's Volatile Files and Directories.
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Reached target Basic System.
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Reached target Main User Target.
Dec 05 08:22:09 np0005546420.localdomain systemd[70110]: Startup finished in 160ms.
Dec 05 08:22:09 np0005546420.localdomain systemd[1]: Started User Manager for UID 0.
Dec 05 08:22:09 np0005546420.localdomain podman[70051]: 2025-12-05 08:22:09.050905845 +0000 UTC m=+0.332169845 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:22:09 np0005546420.localdomain systemd[1]: Started Session c9 of User root.
Dec 05 08:22:09 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:22:09 np0005546420.localdomain podman[70098]: 2025-12-05 08:22:09.102747578 +0000 UTC m=+0.258739734 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4)
Dec 05 08:22:09 np0005546420.localdomain podman[70098]: unhealthy
Dec 05 08:22:09 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:22:09 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:22:09 np0005546420.localdomain systemd[1]: session-c9.scope: Deactivated successfully.
Dec 05 08:22:09 np0005546420.localdomain sudo[69155]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:09 np0005546420.localdomain podman[70050]: 2025-12-05 08:22:09.004295295 +0000 UTC m=+0.285776421 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:22:09 np0005546420.localdomain kernel: device br-int entered promiscuous mode
Dec 05 08:22:09 np0005546420.localdomain NetworkManager[5963]: <info>  [1764922929.1892] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11)
Dec 05 08:22:09 np0005546420.localdomain systemd-udevd[70212]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 08:22:09 np0005546420.localdomain podman[70050]: 2025-12-05 08:22:09.200158209 +0000 UTC m=+0.481639325 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-collectd, tcib_managed=true, architecture=x86_64, release=1761123044, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4)
Dec 05 08:22:09 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:22:09 np0005546420.localdomain sudo[70230]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xnovjsirkncqhtojrusmywhgiuxqzzaq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:09 np0005546420.localdomain sudo[70230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:09 np0005546420.localdomain python3[70232]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:09 np0005546420.localdomain sudo[70230]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:09 np0005546420.localdomain sudo[70246]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xolndimoavahdqusbuirogvdoxfqmyrq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:09 np0005546420.localdomain sudo[70246]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:09 np0005546420.localdomain python3[70248]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:09 np0005546420.localdomain sudo[70246]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:09 np0005546420.localdomain sudo[70262]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvowekjkpxzodikfguesouggjxqqgfpj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:09 np0005546420.localdomain sudo[70262]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:10 np0005546420.localdomain python3[70264]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:10 np0005546420.localdomain sudo[70262]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:10 np0005546420.localdomain sudo[70278]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-shuqydrcemrwtztongekhexrlscvljzo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:10 np0005546420.localdomain sudo[70278]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:10 np0005546420.localdomain NetworkManager[5963]: <info>  [1764922930.2481] device (genev_sys_6081): carrier: link connected
Dec 05 08:22:10 np0005546420.localdomain kernel: device genev_sys_6081 entered promiscuous mode
Dec 05 08:22:10 np0005546420.localdomain NetworkManager[5963]: <info>  [1764922930.2511] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12)
Dec 05 08:22:10 np0005546420.localdomain python3[70280]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:10 np0005546420.localdomain sudo[70278]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:10 np0005546420.localdomain sudo[70297]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fmalwqramtjvbebbxghyzpvbhsnfcvty ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:10 np0005546420.localdomain sudo[70297]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:10 np0005546420.localdomain sudo[70301]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmp5g0et3ay/privsep.sock
Dec 05 08:22:10 np0005546420.localdomain sudo[70301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Dec 05 08:22:10 np0005546420.localdomain python3[70299]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:10 np0005546420.localdomain sudo[70297]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:10 np0005546420.localdomain sudo[70317]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wvvcqylmwntroxdqwbwtrtniftfoqxlg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:10 np0005546420.localdomain sudo[70317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:22:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1800.1 total, 600.0 interval
                                                          Cumulative writes: 4290 writes, 19K keys, 4290 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4290 writes, 440 syncs, 9.75 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 66 writes, 115 keys, 66 commit groups, 1.0 writes per commit group, ingest: 0.03 MB, 0.00 MB/s
                                                          Interval WAL: 66 writes, 33 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 08:22:10 np0005546420.localdomain python3[70319]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:10 np0005546420.localdomain sudo[70317]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:10 np0005546420.localdomain sudo[70333]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vgeuxryyeubzbowcfezcvmycgktropyu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:10 np0005546420.localdomain sudo[70333]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:11 np0005546420.localdomain python3[70335]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:22:11 np0005546420.localdomain sudo[70333]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:11 np0005546420.localdomain sudo[70350]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lizkiuemvkudzddzdiahuzushqtbxtkd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:11 np0005546420.localdomain sudo[70350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:11 np0005546420.localdomain sudo[70301]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:11 np0005546420.localdomain python3[70352]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:22:11 np0005546420.localdomain sudo[70350]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:11 np0005546420.localdomain sudo[70367]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vuxsfjmfnevvqlhctygyvwsupzzupkaz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:11 np0005546420.localdomain sudo[70367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:11 np0005546420.localdomain python3[70369]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:22:11 np0005546420.localdomain sudo[70367]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:11 np0005546420.localdomain sudo[70385]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxmjthmofpbhcpgjkgaxvsijkbrpocrc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:11 np0005546420.localdomain sudo[70385]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:11 np0005546420.localdomain python3[70387]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:22:11 np0005546420.localdomain sudo[70385]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:11 np0005546420.localdomain sudo[70401]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hetdqehpfiwltlksoocgjdzedszyddxx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:11 np0005546420.localdomain sudo[70401]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:12 np0005546420.localdomain python3[70403]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:22:12 np0005546420.localdomain sudo[70401]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:12 np0005546420.localdomain sudo[70417]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nrushklwiehgfbsytxfgbvxejiyzqcej ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:12 np0005546420.localdomain sudo[70417]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:12 np0005546420.localdomain python3[70419]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:22:12 np0005546420.localdomain sudo[70417]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:12 np0005546420.localdomain sudo[70478]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihtnwcrluqvoyvggrtdjdvcfbfqgywqx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:12 np0005546420.localdomain sudo[70478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:13 np0005546420.localdomain python3[70480]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.467974-109420-72333599994720/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:13 np0005546420.localdomain sudo[70478]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:13 np0005546420.localdomain sudo[70507]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dszqyqqedaknezeyfqwqwaagfoffgokh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:13 np0005546420.localdomain sudo[70507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:13 np0005546420.localdomain python3[70509]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.467974-109420-72333599994720/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:13 np0005546420.localdomain sudo[70507]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:13 np0005546420.localdomain sudo[70536]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pcuqvqryxiwngwbzhbbxpbnibtmsrzjs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:13 np0005546420.localdomain sudo[70536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:14 np0005546420.localdomain python3[70538]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.467974-109420-72333599994720/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:14 np0005546420.localdomain sudo[70536]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:14 np0005546420.localdomain sudo[70565]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cullkcyywitkbsikmodyjfssgdqstajo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:14 np0005546420.localdomain sudo[70565]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:14 np0005546420.localdomain python3[70567]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.467974-109420-72333599994720/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:14 np0005546420.localdomain sudo[70565]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:14 np0005546420.localdomain sudo[70594]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zsbbitsgwydxpccvxuhgpatvrmcsloww ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:14 np0005546420.localdomain sudo[70594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:15 np0005546420.localdomain python3[70596]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.467974-109420-72333599994720/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:15 np0005546420.localdomain sudo[70594]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:15 np0005546420.localdomain sudo[70623]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utsmopifwtrcptzaujhureyjqddimtap ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:15 np0005546420.localdomain sudo[70623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:15 np0005546420.localdomain python3[70625]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.467974-109420-72333599994720/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:15 np0005546420.localdomain sudo[70623]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:15 np0005546420.localdomain sudo[70639]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfovnboomptqgknqkikhcpfychqkdxpg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:15 np0005546420.localdomain sudo[70639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:16 np0005546420.localdomain python3[70641]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 08:22:16 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:22:16 np0005546420.localdomain systemd-sysv-generator[70669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:22:16 np0005546420.localdomain systemd-rc-local-generator[70666]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:22:16 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:22:16 np0005546420.localdomain sudo[70639]: pam_unix(sudo:session): session closed for user root
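The ansible-systemd call with daemon_reload=True corresponds to systemctl daemon-reload, which re-reads unit files and re-runs the generators; that is why the systemd-sysv-generator and rc-local-generator messages recur after every "Reloading." entry. The same reload from Python, assuming systemctl is on PATH and the caller is root:

    import subprocess

    # Equivalent to ansible.builtin.systemd with daemon_reload=True:
    # re-read unit files and re-run all generators.
    subprocess.run(["systemctl", "daemon-reload"], check=True)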
Dec 05 08:22:16 np0005546420.localdomain sudo[70691]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qyefrafxruqjihqdmtvqeokblpshxfev ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:16 np0005546420.localdomain sudo[70691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:17 np0005546420.localdomain python3[70693]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:22:17 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:22:17 np0005546420.localdomain systemd-sysv-generator[70721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:22:17 np0005546420.localdomain systemd-rc-local-generator[70717]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:22:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:22:17 np0005546420.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Dec 05 08:22:17 np0005546420.localdomain tripleo-start-podman-container[70733]: Creating additional drop-in dependency for "ceilometer_agent_compute" (fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe)
Dec 05 08:22:17 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:22:17 np0005546420.localdomain systemd-sysv-generator[70794]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:22:17 np0005546420.localdomain systemd-rc-local-generator[70790]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:22:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:22:18 np0005546420.localdomain systemd[1]: Started ceilometer_agent_compute container.
Dec 05 08:22:18 np0005546420.localdomain sudo[70691]: pam_unix(sudo:session): session closed for user root
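Each tripleo_*.service restart in this step has the same shape: ansible-systemd with state=restarted and enabled=True, a daemon reload, then the unit's podman-backed start (tripleo-start-podman-container additionally creates a drop-in dependency for the container, as logged above). A hedged sketch of the enable-and-restart part, with the unit name taken from the log and the helper name being mine:

    import subprocess

    def enable_and_restart(unit: str) -> None:
        # Mirrors ansible.builtin.systemd with enabled=True, state=restarted:
        # enable for boot, then (re)start regardless of current state.
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)

    enable_and_restart("tripleo_ceilometer_agent_compute.service")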
Dec 05 08:22:18 np0005546420.localdomain sudo[70815]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjhhgkvdzivattvatjlckblyoeckzndd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:18 np0005546420.localdomain sudo[70815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:18 np0005546420.localdomain python3[70817]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:22:19 np0005546420.localdomain systemd-sysv-generator[70848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:22:19 np0005546420.localdomain systemd-rc-local-generator[70844]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Activating special unit Exit the Session...
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Stopped target Main User Target.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Stopped target Basic System.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Stopped target Paths.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Stopped target Sockets.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Stopped target Timers.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Closed D-Bus User Message Bus Socket.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Stopped Create User's Volatile Files and Directories.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Removed slice User Application Slice.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Reached target Shutdown.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Finished Exit the Session.
Dec 05 08:22:19 np0005546420.localdomain systemd[70110]: Reached target Exit the Session.
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 05 08:22:19 np0005546420.localdomain systemd[1]: Started ceilometer_agent_ipmi container.
Dec 05 08:22:19 np0005546420.localdomain sudo[70815]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:19 np0005546420.localdomain sudo[70885]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkxarsvsoteuhoirwhtlflvsvcqdaxqy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:19 np0005546420.localdomain sudo[70885]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:20 np0005546420.localdomain python3[70887]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:22:20 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:22:20 np0005546420.localdomain systemd-sysv-generator[70914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:22:20 np0005546420.localdomain systemd-rc-local-generator[70908]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:22:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:22:20 np0005546420.localdomain systemd[1]: Starting logrotate_crond container...
Dec 05 08:22:20 np0005546420.localdomain systemd[1]: Started logrotate_crond container.
Dec 05 08:22:20 np0005546420.localdomain sudo[70885]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:20 np0005546420.localdomain sudo[70952]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqklbwcsyhqljijwgmsnugoepazzdfmr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:20 np0005546420.localdomain sudo[70952]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:21 np0005546420.localdomain python3[70954]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:22:21 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:22:21 np0005546420.localdomain systemd-rc-local-generator[70982]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:22:21 np0005546420.localdomain systemd-sysv-generator[70987]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:22:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:22:21 np0005546420.localdomain systemd[1]: Starting nova_migration_target container...
Dec 05 08:22:21 np0005546420.localdomain systemd[1]: Started nova_migration_target container.
Dec 05 08:22:21 np0005546420.localdomain sudo[70952]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:22 np0005546420.localdomain sudo[71019]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qctidlgnpunmmjaxyvfdctdqpdzntoqc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:22 np0005546420.localdomain sudo[71019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:22 np0005546420.localdomain python3[71021]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:22:22 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:22:22 np0005546420.localdomain systemd-sysv-generator[71052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:22:22 np0005546420.localdomain systemd-rc-local-generator[71046]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:22:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:22:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:22:22 np0005546420.localdomain systemd[1]: Starting ovn_controller container...
Dec 05 08:22:22 np0005546420.localdomain podman[71060]: 2025-12-05 08:22:22.906359671 +0000 UTC m=+0.108779918 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step1, architecture=x86_64, tcib_managed=true, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:22:23 np0005546420.localdomain tripleo-start-podman-container[71066]: Creating additional drop-in dependency for "ovn_controller" (1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb)
Dec 05 08:22:23 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:22:23 np0005546420.localdomain podman[71060]: 2025-12-05 08:22:23.105280021 +0000 UTC m=+0.307700238 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:22:23 np0005546420.localdomain systemd-sysv-generator[71153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:22:23 np0005546420.localdomain systemd-rc-local-generator[71149]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:22:23 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:22:23 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:22:23 np0005546420.localdomain systemd[1]: Started ovn_controller container.
Dec 05 08:22:23 np0005546420.localdomain sudo[71019]: pam_unix(sudo:session): session closed for user root
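The "Started /usr/bin/podman healthcheck run <id>" entries are timer-driven transient systemd units that podman sets up for containers defining a healthcheck; each run execs the configured test (here /openstack/healthcheck) inside the container, emits the health_status/exec_died journal events seen above, and the transient unit then deactivates. The same check can be driven by hand; "metrics_qdr" is the container name reported in the log:

    import subprocess

    # One-shot health check; exit status 0 means the test passed.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "metrics_qdr"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")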
Dec 05 08:22:23 np0005546420.localdomain sudo[71172]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bcbkviscfnsfdbkztaazzsgmcuoqihmz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:22:23 np0005546420.localdomain sudo[71172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:24 np0005546420.localdomain python3[71174]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:22:24 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:22:24 np0005546420.localdomain systemd-rc-local-generator[71201]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:22:24 np0005546420.localdomain systemd-sysv-generator[71205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:22:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:22:24 np0005546420.localdomain systemd[1]: Starting ovn_metadata_agent container...
Dec 05 08:22:24 np0005546420.localdomain systemd[1]: Started ovn_metadata_agent container.
Dec 05 08:22:24 np0005546420.localdomain sudo[71172]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:25 np0005546420.localdomain sudo[71255]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ycbzmqbmbuajhzbxzmgffqlnnraotpkp ; /usr/bin/python3
Dec 05 08:22:25 np0005546420.localdomain sudo[71255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:25 np0005546420.localdomain python3[71257]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:25 np0005546420.localdomain sudo[71255]: pam_unix(sudo:session): session closed for user root
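The ansible-file call with state=absent removes the per-step container-puppet task file; the module is idempotent, so a path that is already gone simply reports ok. A minimal Python equivalent, with the path taken from the log:

    import contextlib
    import os

    # state=absent semantics: remove if present, succeed either way.
    with contextlib.suppress(FileNotFoundError):
        os.remove("/var/lib/container-puppet/container-puppet-tasks4.json")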
Dec 05 08:22:25 np0005546420.localdomain sudo[71303]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wllrqkfnvdekzkiqerbozebairzadfmd ; /usr/bin/python3
Dec 05 08:22:25 np0005546420.localdomain sudo[71303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:25 np0005546420.localdomain sudo[71303]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:26 np0005546420.localdomain sudo[71346]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wneafcrjfupweqeemqecnwiomeponmaw ; /usr/bin/python3
Dec 05 08:22:26 np0005546420.localdomain sudo[71346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:26 np0005546420.localdomain sudo[71346]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:26 np0005546420.localdomain sudo[71376]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odfusqqxvuwsgrivaajrfjgosaxccers ; /usr/bin/python3
Dec 05 08:22:26 np0005546420.localdomain sudo[71376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:26 np0005546420.localdomain python3[71378]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005546420 step=4 update_config_hash_only=False
Dec 05 08:22:26 np0005546420.localdomain sudo[71376]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:27 np0005546420.localdomain sudo[71392]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xiupplinrdpwgljiwyzrxdjafizpxgyv ; /usr/bin/python3
Dec 05 08:22:27 np0005546420.localdomain sudo[71392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:27 np0005546420.localdomain python3[71394]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:22:27 np0005546420.localdomain sudo[71392]: pam_unix(sudo:session): session closed for user root
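The companion ansible-file call with state=directory ensures /var/log/containers/stdouts exists with root ownership before container stdout logs are collected there; like state=absent it is idempotent. A sketch under the same assumptions (root caller):

    import os

    # state=directory semantics: create if missing, leave alone if present.
    os.makedirs("/var/log/containers/stdouts", exist_ok=True)
    os.chown("/var/log/containers/stdouts", 0, 0)  # owner=root group=root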
Dec 05 08:22:27 np0005546420.localdomain sudo[71408]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdjtbsocafnwzllcqwzyynefrokguewo ; /usr/bin/python3
Dec 05 08:22:27 np0005546420.localdomain sudo[71408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:22:27 np0005546420.localdomain python3[71410]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 05 08:22:27 np0005546420.localdomain sudo[71408]: pam_unix(sudo:session): session closed for user root
Dec 05 08:22:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:22:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:22:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:22:35 np0005546420.localdomain systemd[1]: tmp-crun.ZbE3Pj.mount: Deactivated successfully.
Dec 05 08:22:35 np0005546420.localdomain podman[71415]: 2025-12-05 08:22:35.519664273 +0000 UTC m=+0.096081940 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 05 08:22:35 np0005546420.localdomain podman[71415]: 2025-12-05 08:22:35.573114097 +0000 UTC m=+0.149531724 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:22:35 np0005546420.localdomain systemd[1]: tmp-crun.HzRj0Q.mount: Deactivated successfully.
Dec 05 08:22:35 np0005546420.localdomain podman[71413]: 2025-12-05 08:22:35.608768914 +0000 UTC m=+0.185253773 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, release=1761123044, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:22:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:22:35 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:22:35 np0005546420.localdomain podman[71414]: 2025-12-05 08:22:35.713206765 +0000 UTC m=+0.289784977 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:22:35 np0005546420.localdomain podman[71413]: 2025-12-05 08:22:35.719401618 +0000 UTC m=+0.295886467 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-cron, io.buildah.version=1.41.4)
Dec 05 08:22:35 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:22:35 np0005546420.localdomain podman[71470]: 2025-12-05 08:22:35.783798665 +0000 UTC m=+0.081884765 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, version=17.1.12, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team)
Dec 05 08:22:35 np0005546420.localdomain podman[71414]: 2025-12-05 08:22:35.799029713 +0000 UTC m=+0.375607915 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:22:35 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:22:36 np0005546420.localdomain podman[71470]: 2025-12-05 08:22:36.168640829 +0000 UTC m=+0.466726949 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:22:36 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:22:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:22:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:22:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:22:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:22:39 np0005546420.localdomain systemd[1]: tmp-crun.ssi4oH.mount: Deactivated successfully.
Dec 05 08:22:39 np0005546420.localdomain podman[71512]: 2025-12-05 08:22:39.533799681 +0000 UTC m=+0.109061227 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container)
Dec 05 08:22:39 np0005546420.localdomain podman[71515]: 2025-12-05 08:22:39.581467594 +0000 UTC m=+0.146582661 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public)
Dec 05 08:22:39 np0005546420.localdomain podman[71512]: 2025-12-05 08:22:39.605759185 +0000 UTC m=+0.181020791 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=)
Dec 05 08:22:39 np0005546420.localdomain podman[71514]: 2025-12-05 08:22:39.61678267 +0000 UTC m=+0.187167903 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=iscsid, managed_by=tripleo_ansible)
Dec 05 08:22:39 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:22:39 np0005546420.localdomain podman[71515]: 2025-12-05 08:22:39.625580986 +0000 UTC m=+0.190696103 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, container_name=ovn_metadata_agent, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:22:39 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:22:39 np0005546420.localdomain podman[71514]: 2025-12-05 08:22:39.658581839 +0000 UTC m=+0.228967072 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044)
Dec 05 08:22:39 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:22:39 np0005546420.localdomain podman[71513]: 2025-12-05 08:22:39.67715459 +0000 UTC m=+0.248741941 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, architecture=x86_64, container_name=collectd, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:22:39 np0005546420.localdomain podman[71513]: 2025-12-05 08:22:39.711496176 +0000 UTC m=+0.283083517 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., container_name=collectd, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team)
Dec 05 08:22:39 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:22:49 np0005546420.localdomain snmpd[68010]: empty variable list in _query
Dec 05 08:22:49 np0005546420.localdomain snmpd[68010]: empty variable list in _query
Dec 05 08:22:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:22:53 np0005546420.localdomain podman[71598]: 2025-12-05 08:22:53.484329866 +0000 UTC m=+0.062390355 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64)
Dec 05 08:22:53 np0005546420.localdomain podman[71598]: 2025-12-05 08:22:53.722484555 +0000 UTC m=+0.300545064 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Dec 05 08:22:53 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:23:01 np0005546420.localdomain sudo[71628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:23:01 np0005546420.localdomain sudo[71628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:23:01 np0005546420.localdomain sudo[71628]: pam_unix(sudo:session): session closed for user root
Dec 05 08:23:01 np0005546420.localdomain sudo[71643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 08:23:01 np0005546420.localdomain sudo[71643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:23:01 np0005546420.localdomain sudo[71643]: pam_unix(sudo:session): session closed for user root
Dec 05 08:23:02 np0005546420.localdomain sudo[71679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:23:02 np0005546420.localdomain sudo[71679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:23:02 np0005546420.localdomain sudo[71679]: pam_unix(sudo:session): session closed for user root
Dec 05 08:23:02 np0005546420.localdomain sudo[71694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:23:02 np0005546420.localdomain sudo[71694]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:23:03 np0005546420.localdomain sudo[71694]: pam_unix(sudo:session): session closed for user root
Dec 05 08:23:03 np0005546420.localdomain sudo[71741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:23:03 np0005546420.localdomain sudo[71741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:23:03 np0005546420.localdomain sudo[71741]: pam_unix(sudo:session): session closed for user root
Dec 05 08:23:03 np0005546420.localdomain sudo[71756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- inventory --format=json-pretty --filter-for-batch
Dec 05 08:23:03 np0005546420.localdomain sudo[71756]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:23:04 np0005546420.localdomain podman[71811]: 2025-12-05 08:23:04.117609109 +0000 UTC m=+0.064417648 container create 65e704441a1599fdd317b6c290ee1f612921c5901a0c1b831ad580c3dc2f303f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_boyd, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, RELEASE=main, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 08:23:04 np0005546420.localdomain systemd[1]: Started libpod-conmon-65e704441a1599fdd317b6c290ee1f612921c5901a0c1b831ad580c3dc2f303f.scope.
Dec 05 08:23:04 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:23:04 np0005546420.localdomain podman[71811]: 2025-12-05 08:23:04.085454572 +0000 UTC m=+0.032263171 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 08:23:04 np0005546420.localdomain podman[71811]: 2025-12-05 08:23:04.197896823 +0000 UTC m=+0.144705382 container init 65e704441a1599fdd317b6c290ee1f612921c5901a0c1b831ad580c3dc2f303f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_boyd, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, ceph=True, release=1763362218, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, version=7, name=rhceph)
Dec 05 08:23:04 np0005546420.localdomain podman[71811]: 2025-12-05 08:23:04.212822121 +0000 UTC m=+0.159630710 container start 65e704441a1599fdd317b6c290ee1f612921c5901a0c1b831ad580c3dc2f303f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_boyd, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, RELEASE=main, ceph=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container)
Dec 05 08:23:04 np0005546420.localdomain podman[71811]: 2025-12-05 08:23:04.213200993 +0000 UTC m=+0.160009602 container attach 65e704441a1599fdd317b6c290ee1f612921c5901a0c1b831ad580c3dc2f303f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_boyd, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 08:23:04 np0005546420.localdomain sharp_boyd[71826]: 167 167
Dec 05 08:23:04 np0005546420.localdomain systemd[1]: libpod-65e704441a1599fdd317b6c290ee1f612921c5901a0c1b831ad580c3dc2f303f.scope: Deactivated successfully.
Dec 05 08:23:04 np0005546420.localdomain podman[71811]: 2025-12-05 08:23:04.222137823 +0000 UTC m=+0.168946382 container died 65e704441a1599fdd317b6c290ee1f612921c5901a0c1b831ad580c3dc2f303f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_boyd, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main, io.buildah.version=1.41.4, version=7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container)
Dec 05 08:23:04 np0005546420.localdomain podman[71831]: 2025-12-05 08:23:04.327867034 +0000 UTC m=+0.092181068 container remove 65e704441a1599fdd317b6c290ee1f612921c5901a0c1b831ad580c3dc2f303f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_boyd, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, ceph=True)
Dec 05 08:23:04 np0005546420.localdomain systemd[1]: libpod-conmon-65e704441a1599fdd317b6c290ee1f612921c5901a0c1b831ad580c3dc2f303f.scope: Deactivated successfully.
Dec 05 08:23:04 np0005546420.localdomain podman[71854]: 2025-12-05 08:23:04.530109658 +0000 UTC m=+0.071840541 container create b4a864b94b6069e201bf6dcd63cfaf4e96613372d4ac6a45ef421b0590ff3fbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_chaum, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, release=1763362218, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.41.4, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=)
Dec 05 08:23:04 np0005546420.localdomain systemd[1]: Started libpod-conmon-b4a864b94b6069e201bf6dcd63cfaf4e96613372d4ac6a45ef421b0590ff3fbd.scope.
Dec 05 08:23:04 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:23:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e2ff1fc81378995fd9d74aee4c8ec167f99a7f500ede21ec6b08875f6f080/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 08:23:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e2ff1fc81378995fd9d74aee4c8ec167f99a7f500ede21ec6b08875f6f080/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 08:23:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9e2ff1fc81378995fd9d74aee4c8ec167f99a7f500ede21ec6b08875f6f080/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 08:23:04 np0005546420.localdomain podman[71854]: 2025-12-05 08:23:04.498016592 +0000 UTC m=+0.039747505 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 08:23:04 np0005546420.localdomain podman[71854]: 2025-12-05 08:23:04.601472033 +0000 UTC m=+0.143202916 container init b4a864b94b6069e201bf6dcd63cfaf4e96613372d4ac6a45ef421b0590ff3fbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_chaum, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.41.4)
Dec 05 08:23:04 np0005546420.localdomain podman[71854]: 2025-12-05 08:23:04.615086329 +0000 UTC m=+0.156817212 container start b4a864b94b6069e201bf6dcd63cfaf4e96613372d4ac6a45ef421b0590ff3fbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_chaum, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1763362218, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_CLEAN=True, architecture=x86_64, version=7)
Dec 05 08:23:04 np0005546420.localdomain podman[71854]: 2025-12-05 08:23:04.615335887 +0000 UTC m=+0.157066830 container attach b4a864b94b6069e201bf6dcd63cfaf4e96613372d4ac6a45ef421b0590ff3fbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_chaum, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., version=7)
Dec 05 08:23:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c300c18ad897813a0a5c3b8f83c337b43d35b276607829b1918777ee29ef0560-merged.mount: Deactivated successfully.
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]: [
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:     {
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:         "available": false,
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:         "ceph_device": false,
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:         "lsm_data": {},
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:         "lvs": [],
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:         "path": "/dev/sr0",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:         "rejected_reasons": [
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "Has a FileSystem",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "Insufficient space (<5GB)"
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:         ],
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:         "sys_api": {
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "actuators": null,
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "device_nodes": "sr0",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "human_readable_size": "482.00 KB",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "id_bus": "ata",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "model": "QEMU DVD-ROM",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "nr_requests": "2",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "partitions": {},
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "path": "/dev/sr0",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "removable": "1",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "rev": "2.5+",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "ro": "0",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "rotational": "1",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "sas_address": "",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "sas_device_handle": "",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "scheduler_mode": "mq-deadline",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "sectors": 0,
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "sectorsize": "2048",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "size": 493568.0,
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "support_discard": "0",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "type": "disk",
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:             "vendor": "QEMU"
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:         }
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]:     }
Dec 05 08:23:05 np0005546420.localdomain gifted_chaum[71870]: ]
Dec 05 08:23:05 np0005546420.localdomain systemd[1]: libpod-b4a864b94b6069e201bf6dcd63cfaf4e96613372d4ac6a45ef421b0590ff3fbd.scope: Deactivated successfully.
Dec 05 08:23:05 np0005546420.localdomain podman[73713]: 2025-12-05 08:23:05.622396307 +0000 UTC m=+0.054984933 container died b4a864b94b6069e201bf6dcd63cfaf4e96613372d4ac6a45ef421b0590ff3fbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_chaum, release=1763362218, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 08:23:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3b9e2ff1fc81378995fd9d74aee4c8ec167f99a7f500ede21ec6b08875f6f080-merged.mount: Deactivated successfully.
Dec 05 08:23:05 np0005546420.localdomain podman[73713]: 2025-12-05 08:23:05.663339099 +0000 UTC m=+0.095927675 container remove b4a864b94b6069e201bf6dcd63cfaf4e96613372d4ac6a45ef421b0590ff3fbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_chaum, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:23:05 np0005546420.localdomain systemd[1]: libpod-conmon-b4a864b94b6069e201bf6dcd63cfaf4e96613372d4ac6a45ef421b0590ff3fbd.scope: Deactivated successfully.
Dec 05 08:23:05 np0005546420.localdomain sudo[71756]: pam_unix(sudo:session): session closed for user root
Dec 05 08:23:06 np0005546420.localdomain sudo[73728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:23:06 np0005546420.localdomain sudo[73728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:23:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:23:06 np0005546420.localdomain sudo[73728]: pam_unix(sudo:session): session closed for user root
Dec 05 08:23:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:23:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:23:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:23:06 np0005546420.localdomain systemd[1]: tmp-crun.SffrNK.mount: Deactivated successfully.
Dec 05 08:23:06 np0005546420.localdomain podman[73743]: 2025-12-05 08:23:06.277547946 +0000 UTC m=+0.094269343 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, container_name=logrotate_crond, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, release=1761123044)
Dec 05 08:23:06 np0005546420.localdomain podman[73744]: 2025-12-05 08:23:06.278899858 +0000 UTC m=+0.090003520 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Dec 05 08:23:06 np0005546420.localdomain podman[73744]: 2025-12-05 08:23:06.31152776 +0000 UTC m=+0.122631422 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible)
Dec 05 08:23:06 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:23:06 np0005546420.localdomain podman[73745]: 2025-12-05 08:23:06.337042749 +0000 UTC m=+0.143734462 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_id=tripleo_step4)
Dec 05 08:23:06 np0005546420.localdomain podman[73751]: 2025-12-05 08:23:06.390268816 +0000 UTC m=+0.192576182 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_migration_target, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, vcs-type=git, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:23:06 np0005546420.localdomain podman[73745]: 2025-12-05 08:23:06.397397789 +0000 UTC m=+0.204089482 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:23:06 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:23:06 np0005546420.localdomain podman[73743]: 2025-12-05 08:23:06.40920014 +0000 UTC m=+0.225921567 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 05 08:23:06 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:23:06 np0005546420.localdomain podman[73751]: 2025-12-05 08:23:06.734136466 +0000 UTC m=+0.536443772 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:23:06 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:23:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:23:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:23:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:23:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:23:10 np0005546420.localdomain systemd[1]: tmp-crun.YPCGPB.mount: Deactivated successfully.
Dec 05 08:23:10 np0005546420.localdomain podman[73843]: 2025-12-05 08:23:10.521797812 +0000 UTC m=+0.083989861 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 05 08:23:10 np0005546420.localdomain podman[73839]: 2025-12-05 08:23:10.572254052 +0000 UTC m=+0.136252958 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 05 08:23:10 np0005546420.localdomain podman[73839]: 2025-12-05 08:23:10.580060196 +0000 UTC m=+0.144059072 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:23:10 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:23:10 np0005546420.localdomain podman[73843]: 2025-12-05 08:23:10.5996777 +0000 UTC m=+0.161869799 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step4)
Dec 05 08:23:10 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:23:10 np0005546420.localdomain podman[73837]: 2025-12-05 08:23:10.650260696 +0000 UTC m=+0.224567265 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:23:10 np0005546420.localdomain podman[73837]: 2025-12-05 08:23:10.699326092 +0000 UTC m=+0.273632661 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com)
Dec 05 08:23:10 np0005546420.localdomain podman[73838]: 2025-12-05 08:23:10.500079492 +0000 UTC m=+0.068059663 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, config_id=tripleo_step3)
Dec 05 08:23:10 np0005546420.localdomain podman[73838]: 2025-12-05 08:23:10.735241457 +0000 UTC m=+0.303221668 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 05 08:23:10 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:23:10 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:23:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:23:24 np0005546420.localdomain podman[73926]: 2025-12-05 08:23:24.507751126 +0000 UTC m=+0.085993825 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1761123044, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc.)
Dec 05 08:23:24 np0005546420.localdomain podman[73926]: 2025-12-05 08:23:24.686150333 +0000 UTC m=+0.264393012 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step1, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team)
Dec 05 08:23:24 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:23:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:23:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:23:36 np0005546420.localdomain podman[73956]: 2025-12-05 08:23:36.514187014 +0000 UTC m=+0.092135487 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, distribution-scope=public)
Dec 05 08:23:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:23:36 np0005546420.localdomain podman[73957]: 2025-12-05 08:23:36.575601327 +0000 UTC m=+0.150140284 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044)
Dec 05 08:23:36 np0005546420.localdomain podman[73957]: 2025-12-05 08:23:36.60953712 +0000 UTC m=+0.184076127 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4)
Dec 05 08:23:36 np0005546420.localdomain systemd[1]: tmp-crun.qRLok8.mount: Deactivated successfully.
Dec 05 08:23:36 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:23:36 np0005546420.localdomain podman[73987]: 2025-12-05 08:23:36.631523519 +0000 UTC m=+0.096689160 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible)
Dec 05 08:23:36 np0005546420.localdomain podman[73956]: 2025-12-05 08:23:36.646659282 +0000 UTC m=+0.224607775 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 05 08:23:36 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:23:36 np0005546420.localdomain podman[73987]: 2025-12-05 08:23:36.668363432 +0000 UTC m=+0.133529033 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:32Z, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.component=openstack-cron-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team)
Dec 05 08:23:36 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:23:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:23:37 np0005546420.localdomain podman[74027]: 2025-12-05 08:23:37.500229715 +0000 UTC m=+0.079503371 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:23:37 np0005546420.localdomain podman[74027]: 2025-12-05 08:23:37.830385285 +0000 UTC m=+0.409658881 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, release=1761123044, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z)
Dec 05 08:23:37 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:23:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:23:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:23:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:23:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:23:41 np0005546420.localdomain podman[74050]: 2025-12-05 08:23:41.507301771 +0000 UTC m=+0.088349027 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.expose-services=, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, url=https://www.redhat.com)
Dec 05 08:23:41 np0005546420.localdomain systemd[1]: tmp-crun.7P07AF.mount: Deactivated successfully.
Dec 05 08:23:41 np0005546420.localdomain podman[74050]: 2025-12-05 08:23:41.554367676 +0000 UTC m=+0.135414862 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044)
Dec 05 08:23:41 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:23:41 np0005546420.localdomain podman[74057]: 2025-12-05 08:23:41.557320818 +0000 UTC m=+0.129627360 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044)
Dec 05 08:23:41 np0005546420.localdomain podman[74051]: 2025-12-05 08:23:41.613386494 +0000 UTC m=+0.188177104 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z)
Dec 05 08:23:41 np0005546420.localdomain podman[74058]: 2025-12-05 08:23:41.666950171 +0000 UTC m=+0.233435261 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044)
Dec 05 08:23:41 np0005546420.localdomain podman[74057]: 2025-12-05 08:23:41.692806722 +0000 UTC m=+0.265113284 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vcs-type=git, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:23:41 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:23:41 np0005546420.localdomain podman[74058]: 2025-12-05 08:23:41.706559913 +0000 UTC m=+0.273044963 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:23:41 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:23:41 np0005546420.localdomain podman[74051]: 2025-12-05 08:23:41.747245656 +0000 UTC m=+0.322036266 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, vcs-type=git, version=17.1.12, release=1761123044, name=rhosp17/openstack-collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Dec 05 08:23:41 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:23:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:23:55 np0005546420.localdomain systemd[1]: tmp-crun.rQT7gz.mount: Deactivated successfully.
Dec 05 08:23:55 np0005546420.localdomain podman[74132]: 2025-12-05 08:23:55.484668396 +0000 UTC m=+0.066780773 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.41.4, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:23:55 np0005546420.localdomain podman[74132]: 2025-12-05 08:23:55.68274735 +0000 UTC m=+0.264859707 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4)
Dec 05 08:23:55 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:24:06 np0005546420.localdomain sudo[74162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:24:06 np0005546420.localdomain sudo[74162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:24:06 np0005546420.localdomain sudo[74162]: pam_unix(sudo:session): session closed for user root
Dec 05 08:24:06 np0005546420.localdomain sudo[74177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:24:06 np0005546420.localdomain sudo[74177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:24:07 np0005546420.localdomain sudo[74177]: pam_unix(sudo:session): session closed for user root
Dec 05 08:24:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:24:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:24:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:24:07 np0005546420.localdomain systemd[1]: tmp-crun.20kTHH.mount: Deactivated successfully.
Dec 05 08:24:07 np0005546420.localdomain podman[74227]: 2025-12-05 08:24:07.514137484 +0000 UTC m=+0.089642649 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, build-date=2025-11-19T00:11:48Z, architecture=x86_64, config_id=tripleo_step4, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Dec 05 08:24:07 np0005546420.localdomain podman[74227]: 2025-12-05 08:24:07.539918731 +0000 UTC m=+0.115423906 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 05 08:24:07 np0005546420.localdomain podman[74226]: 2025-12-05 08:24:07.552156984 +0000 UTC m=+0.127629868 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:24:07 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:24:07 np0005546420.localdomain podman[74226]: 2025-12-05 08:24:07.589852455 +0000 UTC m=+0.165325349 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:24:07 np0005546420.localdomain podman[74225]: 2025-12-05 08:24:07.601930163 +0000 UTC m=+0.177113278 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:24:07 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:24:07 np0005546420.localdomain podman[74225]: 2025-12-05 08:24:07.612250296 +0000 UTC m=+0.187433431 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, container_name=logrotate_crond, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:24:07 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:24:07 np0005546420.localdomain sudo[74299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:24:07 np0005546420.localdomain sudo[74299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:24:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:24:07 np0005546420.localdomain sudo[74299]: pam_unix(sudo:session): session closed for user root
Dec 05 08:24:08 np0005546420.localdomain podman[74314]: 2025-12-05 08:24:08.012160971 +0000 UTC m=+0.098355881 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:24:08 np0005546420.localdomain podman[74314]: 2025-12-05 08:24:08.388562729 +0000 UTC m=+0.474757679 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container)
Dec 05 08:24:08 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:24:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:24:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:24:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:24:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:24:12 np0005546420.localdomain systemd[1]: tmp-crun.GSrfVh.mount: Deactivated successfully.
Dec 05 08:24:12 np0005546420.localdomain systemd[1]: tmp-crun.S8kbCB.mount: Deactivated successfully.
Dec 05 08:24:12 np0005546420.localdomain podman[74347]: 2025-12-05 08:24:12.589628782 +0000 UTC m=+0.152455316 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 05 08:24:12 np0005546420.localdomain podman[74340]: 2025-12-05 08:24:12.540157042 +0000 UTC m=+0.112973738 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team)
Dec 05 08:24:12 np0005546420.localdomain podman[74341]: 2025-12-05 08:24:12.568209681 +0000 UTC m=+0.137287571 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, release=1761123044, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=iscsid, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12)
Dec 05 08:24:12 np0005546420.localdomain podman[74339]: 2025-12-05 08:24:12.677236406 +0000 UTC m=+0.254599805 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, container_name=ovn_controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 05 08:24:12 np0005546420.localdomain podman[74339]: 2025-12-05 08:24:12.698386418 +0000 UTC m=+0.275749877 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller)
Dec 05 08:24:12 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:24:12 np0005546420.localdomain podman[74340]: 2025-12-05 08:24:12.727395127 +0000 UTC m=+0.300211783 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, architecture=x86_64, release=1761123044)
Dec 05 08:24:12 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:24:12 np0005546420.localdomain podman[74341]: 2025-12-05 08:24:12.750342315 +0000 UTC m=+0.319420195 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4)
Dec 05 08:24:12 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:24:12 np0005546420.localdomain podman[74347]: 2025-12-05 08:24:12.801879719 +0000 UTC m=+0.364706253 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 05 08:24:12 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:24:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:24:26 np0005546420.localdomain podman[74425]: 2025-12-05 08:24:26.493281418 +0000 UTC m=+0.071029557 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com)
Dec 05 08:24:26 np0005546420.localdomain podman[74425]: 2025-12-05 08:24:26.754414116 +0000 UTC m=+0.332162265 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com)
Dec 05 08:24:26 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:24:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:24:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:24:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:24:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:24:38 np0005546420.localdomain systemd[1]: tmp-crun.gS3xIC.mount: Deactivated successfully.
Dec 05 08:24:38 np0005546420.localdomain podman[74454]: 2025-12-05 08:24:38.530899001 +0000 UTC m=+0.095484712 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_migration_target, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team)
Dec 05 08:24:38 np0005546420.localdomain systemd[1]: tmp-crun.SQFRvD.mount: Deactivated successfully.
Dec 05 08:24:38 np0005546420.localdomain podman[74453]: 2025-12-05 08:24:38.585932604 +0000 UTC m=+0.155408238 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:12:45Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com)
Dec 05 08:24:38 np0005546420.localdomain podman[74460]: 2025-12-05 08:24:38.633492453 +0000 UTC m=+0.193650876 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec 05 08:24:38 np0005546420.localdomain podman[74460]: 2025-12-05 08:24:38.669062578 +0000 UTC m=+0.229220981 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc.)
Dec 05 08:24:38 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:24:38 np0005546420.localdomain podman[74452]: 2025-12-05 08:24:38.684308346 +0000 UTC m=+0.254863044 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container)
Dec 05 08:24:38 np0005546420.localdomain podman[74453]: 2025-12-05 08:24:38.695427404 +0000 UTC m=+0.264903048 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:24:38 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:24:38 np0005546420.localdomain podman[74452]: 2025-12-05 08:24:38.720162938 +0000 UTC m=+0.290717646 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:24:38 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:24:38 np0005546420.localdomain podman[74454]: 2025-12-05 08:24:38.874291385 +0000 UTC m=+0.438877106 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Dec 05 08:24:38 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:24:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:24:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:24:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:24:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:24:43 np0005546420.localdomain podman[74548]: 2025-12-05 08:24:43.507824282 +0000 UTC m=+0.084410335 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, vcs-type=git, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, release=1761123044, architecture=x86_64, url=https://www.redhat.com)
Dec 05 08:24:43 np0005546420.localdomain podman[74548]: 2025-12-05 08:24:43.546556985 +0000 UTC m=+0.123143058 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12)
Dec 05 08:24:43 np0005546420.localdomain podman[74549]: 2025-12-05 08:24:43.567318826 +0000 UTC m=+0.140992208 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, tcib_managed=true)
Dec 05 08:24:43 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:24:43 np0005546420.localdomain podman[74549]: 2025-12-05 08:24:43.605516661 +0000 UTC m=+0.179190093 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:24:43 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:24:43 np0005546420.localdomain podman[74547]: 2025-12-05 08:24:43.621543763 +0000 UTC m=+0.197218847 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, version=17.1.12, release=1761123044, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:24:43 np0005546420.localdomain podman[74550]: 2025-12-05 08:24:43.673802491 +0000 UTC m=+0.243088735 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 05 08:24:43 np0005546420.localdomain podman[74550]: 2025-12-05 08:24:43.714419832 +0000 UTC m=+0.283706056 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 05 08:24:43 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:24:43 np0005546420.localdomain podman[74547]: 2025-12-05 08:24:43.728622587 +0000 UTC m=+0.304297631 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, container_name=ovn_controller, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container)
Dec 05 08:24:43 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:24:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:24:57 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:24:57 np0005546420.localdomain recover_tripleo_nova_virtqemud[74643]: 62579
Dec 05 08:24:57 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:24:57 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:24:57 np0005546420.localdomain podman[74636]: 2025-12-05 08:24:57.507175457 +0000 UTC m=+0.089442032 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_id=tripleo_step1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, tcib_managed=true)
Dec 05 08:24:57 np0005546420.localdomain podman[74636]: 2025-12-05 08:24:57.677103548 +0000 UTC m=+0.259370053 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Dec 05 08:24:57 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:25:07 np0005546420.localdomain sudo[74712]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pmhdyfkzfpnxuehivkznzxmoudvhfngr ; /usr/bin/python3
Dec 05 08:25:07 np0005546420.localdomain sudo[74712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:07 np0005546420.localdomain python3[74714]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:25:07 np0005546420.localdomain sudo[74712]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:07 np0005546420.localdomain sudo[74757]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ijeewjvpmfilbdiiolaybjpvmptjdozv ; /usr/bin/python3
Dec 05 08:25:07 np0005546420.localdomain sudo[74757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:08 np0005546420.localdomain python3[74759]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764923107.2837183-113388-217974587044114/source _original_basename=tmpy9yxxppd follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:25:08 np0005546420.localdomain sudo[74757]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:08 np0005546420.localdomain sudo[74760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:25:08 np0005546420.localdomain sudo[74760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:25:08 np0005546420.localdomain sudo[74760]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:08 np0005546420.localdomain sudo[74785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 08:25:08 np0005546420.localdomain sudo[74785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:25:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:25:08 np0005546420.localdomain sudo[74863]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vuvtokhqhmoaumblgxregfqqibguemhu ; /usr/bin/python3
Dec 05 08:25:08 np0005546420.localdomain sudo[74863]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:25:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:25:08 np0005546420.localdomain podman[74859]: 2025-12-05 08:25:08.813163466 +0000 UTC m=+0.091527907 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, release=1761123044, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team)
Dec 05 08:25:08 np0005546420.localdomain systemd[1]: tmp-crun.BDAuZw.mount: Deactivated successfully.
Dec 05 08:25:08 np0005546420.localdomain podman[74886]: 2025-12-05 08:25:08.862921256 +0000 UTC m=+0.094693128 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:25:08 np0005546420.localdomain podman[74886]: 2025-12-05 08:25:08.873450616 +0000 UTC m=+0.105222488 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-cron, url=https://www.redhat.com)
Dec 05 08:25:08 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:25:08 np0005546420.localdomain podman[74888]: 2025-12-05 08:25:08.847131201 +0000 UTC m=+0.073921637 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4)
Dec 05 08:25:08 np0005546420.localdomain podman[74859]: 2025-12-05 08:25:08.924455783 +0000 UTC m=+0.202820264 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible)
Dec 05 08:25:08 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:25:08 np0005546420.localdomain python3[74887]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:25:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:25:08 np0005546420.localdomain podman[74888]: 2025-12-05 08:25:08.976390131 +0000 UTC m=+0.203180587 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, container_name=ceilometer_agent_ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:25:08 np0005546420.localdomain sudo[74863]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:08 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:25:09 np0005546420.localdomain podman[74961]: 2025-12-05 08:25:09.05199489 +0000 UTC m=+0.084956934 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container)
Dec 05 08:25:09 np0005546420.localdomain podman[74959]: 2025-12-05 08:25:09.123756857 +0000 UTC m=+0.160867600 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 05 08:25:09 np0005546420.localdomain podman[74961]: 2025-12-05 08:25:09.158519157 +0000 UTC m=+0.191481271 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218)
Dec 05 08:25:09 np0005546420.localdomain sudo[75082]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vknnjaebaaxazriuikdwpotbrfckivor ; /usr/bin/python3
Dec 05 08:25:09 np0005546420.localdomain sudo[75082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:09 np0005546420.localdomain sudo[74785]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:09 np0005546420.localdomain podman[74959]: 2025-12-05 08:25:09.523421349 +0000 UTC m=+0.560531992 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 05 08:25:09 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:25:09 np0005546420.localdomain sudo[75097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:25:09 np0005546420.localdomain sudo[75097]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:25:09 np0005546420.localdomain sudo[75082]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:09 np0005546420.localdomain sudo[75097]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:09 np0005546420.localdomain sudo[75114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:25:09 np0005546420.localdomain sudo[75114]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:25:09 np0005546420.localdomain sudo[75141]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tofesdymqiydtzfpwtdneouaekawsqra ; /usr/bin/python3
Dec 05 08:25:09 np0005546420.localdomain sudo[75141]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:09 np0005546420.localdomain sudo[75141]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:10 np0005546420.localdomain sudo[75114]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:10 np0005546420.localdomain sudo[75279]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgplkvnopxfmgvqftwvcqnhrvhpoagne ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764923110.0932035-113817-236056494936905/async_wrapper.py 371542241873 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764923110.0932035-113817-236056494936905/AnsiballZ_command.py _
Dec 05 08:25:10 np0005546420.localdomain sudo[75279]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 05 08:25:10 np0005546420.localdomain ansible-async_wrapper.py[75281]: Invoked with 371542241873 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764923110.0932035-113817-236056494936905/AnsiballZ_command.py _
Dec 05 08:25:10 np0005546420.localdomain ansible-async_wrapper.py[75284]: Starting module and watcher
Dec 05 08:25:10 np0005546420.localdomain ansible-async_wrapper.py[75284]: Start watching 75285 (3600)
Dec 05 08:25:10 np0005546420.localdomain ansible-async_wrapper.py[75285]: Start module (75285)
Dec 05 08:25:10 np0005546420.localdomain ansible-async_wrapper.py[75281]: Return async_wrapper task started.
Dec 05 08:25:10 np0005546420.localdomain sudo[75279]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:10 np0005546420.localdomain sudo[75287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:25:10 np0005546420.localdomain sudo[75287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:25:10 np0005546420.localdomain sudo[75287]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:10 np0005546420.localdomain sudo[75318]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ywnojtcrhnadakkakmoedzbazupkadjm ; /usr/bin/python3
Dec 05 08:25:10 np0005546420.localdomain sudo[75318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:11 np0005546420.localdomain python3[75320]: ansible-ansible.legacy.async_status Invoked with jid=371542241873.75281 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:25:11 np0005546420.localdomain sudo[75318]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:25:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:25:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:25:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:25:14 np0005546420.localdomain podman[75437]: 2025-12-05 08:25:14.495734588 +0000 UTC m=+0.070818260 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:25:14 np0005546420.localdomain podman[75434]: 2025-12-05 08:25:14.562601833 +0000 UTC m=+0.135407403 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:25:14 np0005546420.localdomain podman[75435]: 2025-12-05 08:25:14.614406545 +0000 UTC m=+0.188626960 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64)
Dec 05 08:25:14 np0005546420.localdomain podman[75434]: 2025-12-05 08:25:14.618463783 +0000 UTC m=+0.191269333 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 05 08:25:14 np0005546420.localdomain podman[75435]: 2025-12-05 08:25:14.62731163 +0000 UTC m=+0.201532055 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3)
Dec 05 08:25:14 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:25:14 np0005546420.localdomain podman[75437]: 2025-12-05 08:25:14.639249664 +0000 UTC m=+0.214333366 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 05 08:25:14 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:25:14 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:    (file: /etc/puppet/hiera.yaml)
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: Undefined variable '::deploy_config_name';
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:    (file & line not available)
Dec 05 08:25:14 np0005546420.localdomain podman[75436]: 2025-12-05 08:25:14.718093084 +0000 UTC m=+0.292220036 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, tcib_managed=true)
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:    (file & line not available)
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 05 08:25:14 np0005546420.localdomain podman[75436]: 2025-12-05 08:25:14.732278268 +0000 UTC m=+0.306405210 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container)
Dec 05 08:25:14 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:                     with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:                     with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:                     with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:                     with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:                     with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 05 08:25:14 np0005546420.localdomain puppet-user[75313]: Notice: Compiled catalog for np0005546420.localdomain in environment production in 0.21 seconds
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]: Notice: Applied catalog in 0.30 seconds
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]: Application:
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:    Initial environment: production
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:    Converged environment: production
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:          Run mode: user
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]: Changes:
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]: Events:
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]: Resources:
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:             Total: 19
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]: Time:
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:           Package: 0.00
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:          Schedule: 0.00
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:              Exec: 0.01
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:            Augeas: 0.01
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:              File: 0.02
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:           Service: 0.07
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:    Config retrieval: 0.27
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:    Transaction evaluation: 0.29
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:    Catalog application: 0.30
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:          Last run: 1764923115
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:        Filebucket: 0.00
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:             Total: 0.30
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]: Version:
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:            Config: 1764923114
Dec 05 08:25:15 np0005546420.localdomain puppet-user[75313]:            Puppet: 7.10.0
Dec 05 08:25:15 np0005546420.localdomain ansible-async_wrapper.py[75285]: Module complete (75285)
Dec 05 08:25:15 np0005546420.localdomain ansible-async_wrapper.py[75284]: Done in kid B.
Dec 05 08:25:21 np0005546420.localdomain sudo[75541]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdzimpqmnncvowdlexuozhfwvqtfkeqv ; /usr/bin/python3
Dec 05 08:25:21 np0005546420.localdomain sudo[75541]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:21 np0005546420.localdomain python3[75543]: ansible-ansible.legacy.async_status Invoked with jid=371542241873.75281 mode=status _async_dir=/tmp/.ansible_async
Dec 05 08:25:21 np0005546420.localdomain sudo[75541]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:21 np0005546420.localdomain sudo[75557]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqndtjixlgimgbfmdjsqllqhpyrodvug ; /usr/bin/python3
Dec 05 08:25:21 np0005546420.localdomain sudo[75557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:22 np0005546420.localdomain python3[75559]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:25:22 np0005546420.localdomain sudo[75557]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:22 np0005546420.localdomain sudo[75573]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxjyozpmrqurerholyekmdrepanaitro ; /usr/bin/python3
Dec 05 08:25:22 np0005546420.localdomain sudo[75573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:22 np0005546420.localdomain python3[75575]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:25:22 np0005546420.localdomain sudo[75573]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:22 np0005546420.localdomain sudo[75623]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvnjiziojrvybhkeajikkawxurkmspjt ; /usr/bin/python3
Dec 05 08:25:22 np0005546420.localdomain sudo[75623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:23 np0005546420.localdomain python3[75625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:25:23 np0005546420.localdomain sudo[75623]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:23 np0005546420.localdomain sudo[75641]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-usdboxtdtyuklykayhvygzvlxbqrbjnb ; /usr/bin/python3
Dec 05 08:25:23 np0005546420.localdomain sudo[75641]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:23 np0005546420.localdomain python3[75643]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpcq967rnj recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 08:25:23 np0005546420.localdomain sudo[75641]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:23 np0005546420.localdomain sudo[75671]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wzhkgkjmxkgekdbmgivygswmuswtccvy ; /usr/bin/python3
Dec 05 08:25:23 np0005546420.localdomain sudo[75671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:23 np0005546420.localdomain python3[75673]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:25:23 np0005546420.localdomain sudo[75671]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:24 np0005546420.localdomain sudo[75687]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ncefxgdodauktimbudjwmahzyvspqgsw ; /usr/bin/python3
Dec 05 08:25:24 np0005546420.localdomain sudo[75687]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:24 np0005546420.localdomain sudo[75687]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:24 np0005546420.localdomain sudo[75776]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhpgvybqcxdugbhppwrfiheivanpnrpw ; /usr/bin/python3
Dec 05 08:25:24 np0005546420.localdomain sudo[75776]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:25 np0005546420.localdomain python3[75778]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 05 08:25:25 np0005546420.localdomain sudo[75776]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:25 np0005546420.localdomain sudo[75795]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ogfftnlvxjkgemkvrzbjlqijneidekjh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:25 np0005546420.localdomain sudo[75795]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:25 np0005546420.localdomain python3[75797]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:25:25 np0005546420.localdomain sudo[75795]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:26 np0005546420.localdomain sudo[75811]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ncgwfbgqmqnnefwmyeoqmzearucgrgxo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:26 np0005546420.localdomain sudo[75811]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:26 np0005546420.localdomain sudo[75811]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:26 np0005546420.localdomain sudo[75827]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bmyurbkwptaqrxvyskbpxlijqwgrpuwu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:26 np0005546420.localdomain sudo[75827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:26 np0005546420.localdomain python3[75829]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:25:26 np0005546420.localdomain sudo[75827]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:27 np0005546420.localdomain sudo[75877]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frwgoiskrzuwocksamzknsxfffetcucm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:27 np0005546420.localdomain sudo[75877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:27 np0005546420.localdomain python3[75879]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:25:27 np0005546420.localdomain sudo[75877]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:27 np0005546420.localdomain sudo[75895]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nphcjfodsuqlgtwlzxqpzaxpoxavmdvz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:27 np0005546420.localdomain sudo[75895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:27 np0005546420.localdomain python3[75897]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:25:27 np0005546420.localdomain sudo[75895]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:27 np0005546420.localdomain sudo[75957]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzkwjdkpumwqssmdvfikgkxpgxatrqtw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:27 np0005546420.localdomain sudo[75957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:25:27 np0005546420.localdomain podman[75960]: 2025-12-05 08:25:27.918397851 +0000 UTC m=+0.083997093 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:25:27 np0005546420.localdomain python3[75959]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:25:27 np0005546420.localdomain sudo[75957]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:28 np0005546420.localdomain sudo[76002]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cctcssjzymfztuvsjpqdvzczjdwvxvgn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:28 np0005546420.localdomain sudo[76002]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:28 np0005546420.localdomain podman[75960]: 2025-12-05 08:25:28.145794966 +0000 UTC m=+0.311394128 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=metrics_qdr, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:25:28 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:25:28 np0005546420.localdomain python3[76004]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:25:28 np0005546420.localdomain sudo[76002]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:28 np0005546420.localdomain sudo[76064]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezhhywbculchyywoevsevuatsalnyrit ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:28 np0005546420.localdomain sudo[76064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:28 np0005546420.localdomain python3[76066]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:25:28 np0005546420.localdomain sudo[76064]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:28 np0005546420.localdomain sudo[76082]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hcqlmlqukmtnxunojjxkpguflnemyhbf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:28 np0005546420.localdomain sudo[76082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:28 np0005546420.localdomain python3[76084]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:25:28 np0005546420.localdomain sudo[76082]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:29 np0005546420.localdomain sudo[76144]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhsvcrrhynyyiqhanaruutfmdvrmokmk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:29 np0005546420.localdomain sudo[76144]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:29 np0005546420.localdomain python3[76146]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:25:29 np0005546420.localdomain sudo[76144]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:29 np0005546420.localdomain sudo[76162]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xckfkkmhynzlpvysajezxpvooihfbdgk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:29 np0005546420.localdomain sudo[76162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:29 np0005546420.localdomain python3[76164]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:25:29 np0005546420.localdomain sudo[76162]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:30 np0005546420.localdomain sudo[76192]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tikwmdxgmzjltqtklizigjulfruycpxl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:30 np0005546420.localdomain sudo[76192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:30 np0005546420.localdomain python3[76194]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:25:30 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:25:30 np0005546420.localdomain systemd-sysv-generator[76220]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:25:30 np0005546420.localdomain systemd-rc-local-generator[76216]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:25:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:25:30 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:25:30 np0005546420.localdomain recover_tripleo_nova_virtqemud[76232]: 62579
Dec 05 08:25:30 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:25:30 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:25:30 np0005546420.localdomain sudo[76192]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:30 np0005546420.localdomain sudo[76279]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tdpuazluyemshakqwdaubaeqvnmavnat ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:30 np0005546420.localdomain sudo[76279]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:31 np0005546420.localdomain python3[76281]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:25:31 np0005546420.localdomain sudo[76279]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:31 np0005546420.localdomain sudo[76297]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lyarauhmfpnrzmksnxgjtqahwoiitibu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:31 np0005546420.localdomain sudo[76297]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:31 np0005546420.localdomain python3[76299]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:25:31 np0005546420.localdomain sudo[76297]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:31 np0005546420.localdomain sudo[76359]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzqydmthifqcrazxzxmcjqjvhxmczmxg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:31 np0005546420.localdomain sudo[76359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:32 np0005546420.localdomain python3[76361]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 08:25:32 np0005546420.localdomain sudo[76359]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:32 np0005546420.localdomain sudo[76377]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vkirwdxlcvpgknvhtkrbsrqodncxkdbn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:32 np0005546420.localdomain sudo[76377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:32 np0005546420.localdomain python3[76379]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:25:32 np0005546420.localdomain sudo[76377]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:32 np0005546420.localdomain sudo[76407]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzoetywaezmouybvzzartqpzywgmnrem ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:32 np0005546420.localdomain sudo[76407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:32 np0005546420.localdomain python3[76409]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:25:32 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:25:32 np0005546420.localdomain systemd-rc-local-generator[76431]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:25:32 np0005546420.localdomain systemd-sysv-generator[76434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:25:33 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:25:33 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 08:25:33 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 08:25:33 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 08:25:33 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 08:25:33 np0005546420.localdomain sudo[76407]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:33 np0005546420.localdomain sudo[76464]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udjatsbotwmhfiftiddmdqmattybuase ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:33 np0005546420.localdomain sudo[76464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:33 np0005546420.localdomain python3[76466]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 05 08:25:33 np0005546420.localdomain sudo[76464]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:34 np0005546420.localdomain sudo[76480]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvytxdvynijirzttvxovzkoocshknxjy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:34 np0005546420.localdomain sudo[76480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:34 np0005546420.localdomain sudo[76480]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:35 np0005546420.localdomain sudo[76522]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhdszipooevezezezomfeyevogbmzxrj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:25:35 np0005546420.localdomain sudo[76522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:25:35 np0005546420.localdomain python3[76524]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 05 08:25:36 np0005546420.localdomain podman[76563]: 2025-12-05 08:25:36.193349779 +0000 UTC m=+0.094116239 container create ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, release=1761123044, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Started libpod-conmon-ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.scope.
Dec 05 08:25:36 np0005546420.localdomain podman[76563]: 2025-12-05 08:25:36.144007184 +0000 UTC m=+0.044773664 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:25:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16c2ca79882c79a16bfd6ec33c860677688d3a70e4e2506da76095f804b00d2/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 08:25:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16c2ca79882c79a16bfd6ec33c860677688d3a70e4e2506da76095f804b00d2/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:25:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16c2ca79882c79a16bfd6ec33c860677688d3a70e4e2506da76095f804b00d2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 08:25:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16c2ca79882c79a16bfd6ec33c860677688d3a70e4e2506da76095f804b00d2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:25:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16c2ca79882c79a16bfd6ec33c860677688d3a70e4e2506da76095f804b00d2/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:25:36 np0005546420.localdomain podman[76563]: 2025-12-05 08:25:36.298476463 +0000 UTC m=+0.199242953 container init ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute)
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: tmp-crun.Udpn7r.mount: Deactivated successfully.
Dec 05 08:25:36 np0005546420.localdomain sudo[76583]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:25:36 np0005546420.localdomain systemd-logind[762]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 05 08:25:36 np0005546420.localdomain podman[76563]: 2025-12-05 08:25:36.340807709 +0000 UTC m=+0.241574169 container start ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step5, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 05 08:25:36 np0005546420.localdomain python3[76524]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
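The PODMAN-CONTAINER-DEBUG line above shows how ansible-tripleo_container_manage expands the same config_data dict it attaches as a label into a podman run invocation: 'environment' becomes repeated --env flags, 'net' becomes --network, 'ulimit' and 'volumes' become repeated --ulimit/--volume flags, and the image goes last. A minimal sketch of that expansion, assuming a helper named build_podman_args (hypothetical; the real tripleo_ansible module handles many more keys plus host-side options like --conmon-pidfile and --log-opt):

```python
# Hypothetical sketch of the config_data -> `podman run` expansion seen in
# the debug line above; it covers only the keys visible in this log.
def build_podman_args(name: str, cfg: dict) -> list[str]:
    args = ["podman", "run", "--name", name, "--detach=True"]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    if "healthcheck" in cfg:
        args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
    if cfg.get("ipc"):
        args += ["--ipc", cfg["ipc"]]
    if cfg.get("net"):
        args += ["--network", cfg["net"]]       # 'net': 'host' -> --network host
    if cfg.get("privileged"):
        args += ["--privileged=True"]
    for ulimit in cfg.get("ulimit", []):
        args += ["--ulimit", ulimit]
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    args.append(cfg["image"])                   # image is always last
    return args
```

Feeding the nova_compute config_data shown above through such a helper reproduces the flag sequence in the debug line, minus the host-side bookkeeping options.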
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 05 08:25:36 np0005546420.localdomain podman[76584]: 2025-12-05 08:25:36.520935722 +0000 UTC m=+0.172421252 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, version=17.1.12, batch=17.1_20251118.1, container_name=nova_compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Queued start job for default target Main User Target.
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Created slice User Application Slice.
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Reached target Paths.
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Reached target Timers.
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Starting D-Bus User Message Bus Socket...
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Starting Create User's Volatile Files and Directories...
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Listening on D-Bus User Message Bus Socket.
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Reached target Sockets.
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Finished Create User's Volatile Files and Directories.
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Reached target Basic System.
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Reached target Main User Target.
Dec 05 08:25:36 np0005546420.localdomain systemd[76597]: Startup finished in 139ms.
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Started User Manager for UID 0.
Dec 05 08:25:36 np0005546420.localdomain podman[76584]: 2025-12-05 08:25:36.581416197 +0000 UTC m=+0.232901757 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=)
Dec 05 08:25:36 np0005546420.localdomain podman[76584]: unhealthy
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Started Session c10 of User root.
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
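The sequence above is the container's first health probe failing: podman[76584] reports health_status=starting, the probe process dies (exec_died), podman prints "unhealthy", and the transient healthcheck service exits with status 1. The configured test is '/openstack/healthcheck 5672', which appears to check for an established connection to port 5672 (the AMQP/RabbitMQ port), something nova-compute cannot yet have this early in startup. A hedged diagnostic sketch reusing the podman commands already visible in this log (the inspect field name varies across podman versions):

```python
import json
import subprocess

# Re-run the probe that systemd's transient unit just ran; an exit
# status of 1 matches the "unhealthy" result logged above.
probe = subprocess.run(["podman", "healthcheck", "run", "nova_compute"])
print("healthcheck exit status:", probe.returncode)

# Read back the recorded health state. Newer podman exposes it as
# .State.Health; some older releases used .State.Healthcheck (assumption).
raw = subprocess.run(
    ["podman", "inspect", "nova_compute"],
    capture_output=True, text=True, check=True,
).stdout
state = json.loads(raw)[0]["State"]
print(state.get("Health") or state.get("Healthcheck"))
```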
Dec 05 08:25:36 np0005546420.localdomain sudo[76583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Dec 05 08:25:36 np0005546420.localdomain sudo[76583]: pam_unix(sudo:session): session closed for user root
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: session-c10.scope: Deactivated successfully.
Dec 05 08:25:36 np0005546420.localdomain podman[76688]: 2025-12-05 08:25:36.906872443 +0000 UTC m=+0.091118526 container create e5d25541b432389fdb277c313ab05c9723bca2dd1ed2b9f2d57efa9e4e871a15 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_wait_for_compute_service, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public)
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Started libpod-conmon-e5d25541b432389fdb277c313ab05c9723bca2dd1ed2b9f2d57efa9e4e871a15.scope.
Dec 05 08:25:36 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:25:36 np0005546420.localdomain podman[76688]: 2025-12-05 08:25:36.860455219 +0000 UTC m=+0.044701332 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 05 08:25:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28babb087ffe0501289e9c462f881c66803c6126daf8f53cd6e97c97c184b295/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Dec 05 08:25:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28babb087ffe0501289e9c462f881c66803c6126daf8f53cd6e97c97c184b295/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 08:25:36 np0005546420.localdomain podman[76688]: 2025-12-05 08:25:36.975058269 +0000 UTC m=+0.159304372 container init e5d25541b432389fdb277c313ab05c9723bca2dd1ed2b9f2d57efa9e4e871a15 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_id=tripleo_step5, architecture=x86_64, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, container_name=nova_wait_for_compute_service, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, build-date=2025-11-19T00:36:58Z)
Dec 05 08:25:36 np0005546420.localdomain podman[76688]: 2025-12-05 08:25:36.984220337 +0000 UTC m=+0.168466440 container start e5d25541b432389fdb277c313ab05c9723bca2dd1ed2b9f2d57efa9e4e871a15 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, distribution-scope=public, config_id=tripleo_step5, tcib_managed=true, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_wait_for_compute_service, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:25:36 np0005546420.localdomain podman[76688]: 2025-12-05 08:25:36.984589578 +0000 UTC m=+0.168835691 container attach e5d25541b432389fdb277c313ab05c9723bca2dd1ed2b9f2d57efa9e4e871a15 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=nova_wait_for_compute_service, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step5, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, vcs-type=git, distribution-scope=public, io.openshift.expose-services=)
Dec 05 08:25:36 np0005546420.localdomain sudo[76707]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 08:25:37 np0005546420.localdomain sudo[76707]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Dec 05 08:25:37 np0005546420.localdomain sudo[76707]: pam_unix(sudo:session): session closed for user root
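Both nova_compute and nova_wait_for_compute_service begin by sudo-ing from nova to root to run kolla_set_configs, which, under KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, re-reads the bind-mounted /var/lib/kolla/config_files/config.json and copies the puppet-generated configuration from the src mounts into place on every container start. A minimal sketch of that copy step, under an assumed config_files schema of source/dest entries (the real kolla_set_configs also applies owner/perm settings, supports glob sources, and honors 'optional' entries):

```python
# Minimal sketch of the COPY_ALWAYS behaviour of kolla_set_configs: copy
# each config_files entry from the bind-mounted source into the container
# filesystem on every start. Schema and behaviour simplified; see lead-in.
import json
import shutil
from pathlib import Path

config = json.loads(Path("/var/lib/kolla/config_files/config.json").read_text())
for entry in config.get("config_files", []):
    src, dest = Path(entry["source"]), Path(entry["dest"])
    dest.parent.mkdir(parents=True, exist_ok=True)
    if src.is_dir():
        shutil.copytree(src, dest, dirs_exist_ok=True)  # overwrite in place
    else:
        shutil.copy2(src, dest)
```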
Dec 05 08:25:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:25:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:25:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:25:39 np0005546420.localdomain systemd[1]: tmp-crun.7NeHer.mount: Deactivated successfully.
Dec 05 08:25:39 np0005546420.localdomain podman[76713]: 2025-12-05 08:25:39.520373282 +0000 UTC m=+0.093497040 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:25:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:25:39 np0005546420.localdomain podman[76712]: 2025-12-05 08:25:39.569090158 +0000 UTC m=+0.144063284 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible)
Dec 05 08:25:39 np0005546420.localdomain podman[76713]: 2025-12-05 08:25:39.580389132 +0000 UTC m=+0.153512890 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute)
Dec 05 08:25:39 np0005546420.localdomain podman[76712]: 2025-12-05 08:25:39.596299451 +0000 UTC m=+0.171272647 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 05 08:25:39 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:25:39 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:25:39 np0005546420.localdomain podman[76762]: 2025-12-05 08:25:39.66682553 +0000 UTC m=+0.090351041 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, io.openshift.expose-services=)
Dec 05 08:25:39 np0005546420.localdomain podman[76711]: 2025-12-05 08:25:39.720712958 +0000 UTC m=+0.295643253 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team)
Dec 05 08:25:39 np0005546420.localdomain podman[76711]: 2025-12-05 08:25:39.758252125 +0000 UTC m=+0.333182390 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12)
Dec 05 08:25:39 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:25:40 np0005546420.localdomain podman[76762]: 2025-12-05 08:25:40.038316479 +0000 UTC m=+0.461841980 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 05 08:25:40 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:25:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:25:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:25:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:25:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:25:45 np0005546420.localdomain systemd[1]: tmp-crun.7fYgOb.mount: Deactivated successfully.
Dec 05 08:25:45 np0005546420.localdomain systemd[1]: tmp-crun.ZpDAQp.mount: Deactivated successfully.
Dec 05 08:25:45 np0005546420.localdomain podman[76808]: 2025-12-05 08:25:45.583282858 +0000 UTC m=+0.152956953 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:25:45 np0005546420.localdomain podman[76808]: 2025-12-05 08:25:45.622534378 +0000 UTC m=+0.192208553 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, version=17.1.12)
Dec 05 08:25:45 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:25:45 np0005546420.localdomain podman[76809]: 2025-12-05 08:25:45.537136733 +0000 UTC m=+0.103452073 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible)
Dec 05 08:25:45 np0005546420.localdomain podman[76809]: 2025-12-05 08:25:45.672419511 +0000 UTC m=+0.238734881 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4)
Dec 05 08:25:45 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:25:45 np0005546420.localdomain podman[76806]: 2025-12-05 08:25:45.626455011 +0000 UTC m=+0.200538904 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 05 08:25:45 np0005546420.localdomain podman[76806]: 2025-12-05 08:25:45.760419967 +0000 UTC m=+0.334503850 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 05 08:25:45 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:25:45 np0005546420.localdomain podman[76807]: 2025-12-05 08:25:45.678365447 +0000 UTC m=+0.249852069 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, release=1761123044, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:25:45 np0005546420.localdomain podman[76807]: 2025-12-05 08:25:45.808830545 +0000 UTC m=+0.380317127 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=)
Dec 05 08:25:45 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:25:46 np0005546420.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Activating special unit Exit the Session...
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Stopped target Main User Target.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Stopped target Basic System.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Stopped target Paths.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Stopped target Sockets.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Stopped target Timers.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Closed D-Bus User Message Bus Socket.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Stopped Create User's Volatile Files and Directories.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Removed slice User Application Slice.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Reached target Shutdown.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Finished Exit the Session.
Dec 05 08:25:46 np0005546420.localdomain systemd[76597]: Reached target Exit the Session.
Dec 05 08:25:46 np0005546420.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 05 08:25:46 np0005546420.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 05 08:25:46 np0005546420.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 05 08:25:46 np0005546420.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 05 08:25:46 np0005546420.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 05 08:25:46 np0005546420.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 05 08:25:46 np0005546420.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 05 08:25:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:25:58 np0005546420.localdomain podman[76894]: 2025-12-05 08:25:58.512144241 +0000 UTC m=+0.090303861 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T22:49:46Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:25:58 np0005546420.localdomain podman[76894]: 2025-12-05 08:25:58.745464761 +0000 UTC m=+0.323624351 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, vcs-type=git, maintainer=OpenStack TripleO Team)
Dec 05 08:25:58 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:26:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:26:07 np0005546420.localdomain podman[76921]: 2025-12-05 08:26:07.497581988 +0000 UTC m=+0.077067396 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, tcib_managed=true, build-date=2025-11-19T00:36:58Z, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, version=17.1.12, config_id=tripleo_step5, container_name=nova_compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:26:07 np0005546420.localdomain podman[76921]: 2025-12-05 08:26:07.553203581 +0000 UTC m=+0.132688949 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Dec 05 08:26:07 np0005546420.localdomain podman[76921]: unhealthy
Dec 05 08:26:07 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:26:07 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 08:26:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:26:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:26:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:26:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:26:10 np0005546420.localdomain podman[76945]: 2025-12-05 08:26:10.509624772 +0000 UTC m=+0.079637776 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1)
Dec 05 08:26:10 np0005546420.localdomain podman[76943]: 2025-12-05 08:26:10.557002886 +0000 UTC m=+0.132370508 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:26:10 np0005546420.localdomain podman[76944]: 2025-12-05 08:26:10.627324159 +0000 UTC m=+0.199420978 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com)
Dec 05 08:26:10 np0005546420.localdomain podman[76946]: 2025-12-05 08:26:10.641585986 +0000 UTC m=+0.207757470 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, version=17.1.12, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:26:10 np0005546420.localdomain podman[76946]: 2025-12-05 08:26:10.67268532 +0000 UTC m=+0.238856754 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044)
Dec 05 08:26:10 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:26:10 np0005546420.localdomain podman[76943]: 2025-12-05 08:26:10.693482582 +0000 UTC m=+0.268850214 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:26:10 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:26:10 np0005546420.localdomain podman[76944]: 2025-12-05 08:26:10.728695755 +0000 UTC m=+0.300792584 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:26:10 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:26:10 np0005546420.localdomain podman[76945]: 2025-12-05 08:26:10.87632416 +0000 UTC m=+0.446337134 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:26:10 np0005546420.localdomain sudo[77034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:26:10 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:26:10 np0005546420.localdomain sudo[77034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:26:10 np0005546420.localdomain sudo[77034]: pam_unix(sudo:session): session closed for user root
Dec 05 08:26:10 np0005546420.localdomain sudo[77050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:26:10 np0005546420.localdomain sudo[77050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:26:11 np0005546420.localdomain sudo[77050]: pam_unix(sudo:session): session closed for user root
Dec 05 08:26:12 np0005546420.localdomain sudo[77097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:26:12 np0005546420.localdomain sudo[77097]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:26:12 np0005546420.localdomain sudo[77097]: pam_unix(sudo:session): session closed for user root
Dec 05 08:26:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:26:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:26:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:26:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:26:16 np0005546420.localdomain podman[77114]: 2025-12-05 08:26:16.529076357 +0000 UTC m=+0.090456956 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.12, container_name=iscsid)
Dec 05 08:26:16 np0005546420.localdomain podman[77112]: 2025-12-05 08:26:16.512039483 +0000 UTC m=+0.081742873 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:26:16 np0005546420.localdomain systemd[1]: tmp-crun.yEkIV9.mount: Deactivated successfully.
Dec 05 08:26:16 np0005546420.localdomain podman[77120]: 2025-12-05 08:26:16.579795946 +0000 UTC m=+0.140426601 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:26:16 np0005546420.localdomain podman[77112]: 2025-12-05 08:26:16.595391204 +0000 UTC m=+0.165094564 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:34:05Z, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 05 08:26:16 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:26:16 np0005546420.localdomain podman[77120]: 2025-12-05 08:26:16.65400164 +0000 UTC m=+0.214632315 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, container_name=ovn_metadata_agent)
Dec 05 08:26:16 np0005546420.localdomain podman[77114]: 2025-12-05 08:26:16.662579739 +0000 UTC m=+0.223960438 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:26:16 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:26:16 np0005546420.localdomain podman[77113]: 2025-12-05 08:26:16.674226474 +0000 UTC m=+0.240711612 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, version=17.1.12, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044)
Dec 05 08:26:16 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:26:16 np0005546420.localdomain podman[77113]: 2025-12-05 08:26:16.708425866 +0000 UTC m=+0.274911044 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3)
Dec 05 08:26:16 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:26:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:26:29 np0005546420.localdomain podman[77197]: 2025-12-05 08:26:29.488487844 +0000 UTC m=+0.066812575 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:26:29 np0005546420.localdomain podman[77197]: 2025-12-05 08:26:29.724400144 +0000 UTC m=+0.302724915 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 05 08:26:29 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:26:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:26:38 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:26:38 np0005546420.localdomain recover_tripleo_nova_virtqemud[77229]: 62579
Dec 05 08:26:38 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:26:38 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:26:38 np0005546420.localdomain podman[77226]: 2025-12-05 08:26:38.511145437 +0000 UTC m=+0.088981069 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, name=rhosp17/openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:26:38 np0005546420.localdomain podman[77226]: 2025-12-05 08:26:38.59968397 +0000 UTC m=+0.177519652 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=nova_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true)
Dec 05 08:26:38 np0005546420.localdomain podman[77226]: unhealthy
Dec 05 08:26:38 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:26:38 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 08:26:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:26:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:26:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:26:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:26:41 np0005546420.localdomain podman[77250]: 2025-12-05 08:26:41.569659976 +0000 UTC m=+0.137748856 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4)
Dec 05 08:26:41 np0005546420.localdomain podman[77249]: 2025-12-05 08:26:41.53401151 +0000 UTC m=+0.106428765 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:26:41 np0005546420.localdomain podman[77249]: 2025-12-05 08:26:41.61416016 +0000 UTC m=+0.186577375 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, distribution-scope=public)
Dec 05 08:26:41 np0005546420.localdomain podman[77250]: 2025-12-05 08:26:41.622320066 +0000 UTC m=+0.190408976 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible)
Dec 05 08:26:41 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:26:41 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:26:41 np0005546420.localdomain podman[77251]: 2025-12-05 08:26:41.631031379 +0000 UTC m=+0.194849815 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z)
Dec 05 08:26:41 np0005546420.localdomain podman[77257]: 2025-12-05 08:26:41.690271455 +0000 UTC m=+0.247794754 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1761123044, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:26:41 np0005546420.localdomain podman[77257]: 2025-12-05 08:26:41.724479647 +0000 UTC m=+0.282002946 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:26:41 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:26:42 np0005546420.localdomain podman[77251]: 2025-12-05 08:26:42.008513115 +0000 UTC m=+0.572331491 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:26:42 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:26:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:26:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:26:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:26:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:26:47 np0005546420.localdomain systemd[1]: tmp-crun.y7WrY7.mount: Deactivated successfully.
Dec 05 08:26:47 np0005546420.localdomain podman[77344]: 2025-12-05 08:26:47.529494893 +0000 UTC m=+0.097861446 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, config_id=tripleo_step3, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:26:47 np0005546420.localdomain podman[77344]: 2025-12-05 08:26:47.56640637 +0000 UTC m=+0.134772943 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, url=https://www.redhat.com, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Dec 05 08:26:47 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:26:47 np0005546420.localdomain podman[77343]: 2025-12-05 08:26:47.614397673 +0000 UTC m=+0.183737308 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 05 08:26:47 np0005546420.localdomain podman[77345]: 2025-12-05 08:26:47.574259056 +0000 UTC m=+0.136585400 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:26:47 np0005546420.localdomain podman[77343]: 2025-12-05 08:26:47.666513156 +0000 UTC m=+0.235852851 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:26:47 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:26:47 np0005546420.localdomain podman[77346]: 2025-12-05 08:26:47.681587958 +0000 UTC m=+0.240829316 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 05 08:26:47 np0005546420.localdomain podman[77345]: 2025-12-05 08:26:47.708859563 +0000 UTC m=+0.271185907 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container)
Dec 05 08:26:47 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:26:47 np0005546420.localdomain podman[77346]: 2025-12-05 08:26:47.7557121 +0000 UTC m=+0.314953418 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 05 08:26:47 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:27:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:27:00 np0005546420.localdomain systemd[1]: tmp-crun.uGFGLi.mount: Deactivated successfully.
Dec 05 08:27:00 np0005546420.localdomain podman[77431]: 2025-12-05 08:27:00.501414254 +0000 UTC m=+0.080474931 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, container_name=metrics_qdr, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:27:00 np0005546420.localdomain podman[77431]: 2025-12-05 08:27:00.713572631 +0000 UTC m=+0.292633308 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Dec 05 08:27:00 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:27:03 np0005546420.localdomain sshd[36051]: Received disconnect from 192.168.122.100 port 36918:11: disconnected by user
Dec 05 08:27:03 np0005546420.localdomain sshd[36051]: Disconnected from user zuul 192.168.122.100 port 36918
Dec 05 08:27:03 np0005546420.localdomain sshd[36048]: pam_unix(sshd:session): session closed for user zuul
Dec 05 08:27:03 np0005546420.localdomain systemd[1]: session-27.scope: Deactivated successfully.
Dec 05 08:27:03 np0005546420.localdomain systemd[1]: session-27.scope: Consumed 3.077s CPU time.
Dec 05 08:27:03 np0005546420.localdomain systemd-logind[762]: Session 27 logged out. Waiting for processes to exit.
Dec 05 08:27:03 np0005546420.localdomain systemd-logind[762]: Removed session 27.
Dec 05 08:27:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:27:09 np0005546420.localdomain podman[77460]: 2025-12-05 08:27:09.51264625 +0000 UTC m=+0.089863317 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, distribution-scope=public)
Dec 05 08:27:09 np0005546420.localdomain podman[77460]: 2025-12-05 08:27:09.568471549 +0000 UTC m=+0.145688596 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:27:09 np0005546420.localdomain podman[77460]: unhealthy
Dec 05 08:27:09 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:27:09 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 08:27:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:27:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:27:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:27:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:27:12 np0005546420.localdomain podman[77483]: 2025-12-05 08:27:12.51051038 +0000 UTC m=+0.086854282 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.12, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:27:12 np0005546420.localdomain podman[77483]: 2025-12-05 08:27:12.517876781 +0000 UTC m=+0.094220713 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:27:12 np0005546420.localdomain sudo[77528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:27:12 np0005546420.localdomain sudo[77528]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:27:12 np0005546420.localdomain sudo[77528]: pam_unix(sudo:session): session closed for user root
Dec 05 08:27:12 np0005546420.localdomain systemd[1]: tmp-crun.Yr0rUg.mount: Deactivated successfully.
Dec 05 08:27:12 np0005546420.localdomain podman[77484]: 2025-12-05 08:27:12.572581335 +0000 UTC m=+0.146048337 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:27:12 np0005546420.localdomain podman[77484]: 2025-12-05 08:27:12.610347388 +0000 UTC m=+0.183814350 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12)
Dec 05 08:27:12 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:27:12 np0005546420.localdomain sudo[77554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:27:12 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:27:12 np0005546420.localdomain sudo[77554]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:27:12 np0005546420.localdomain podman[77485]: 2025-12-05 08:27:12.612507785 +0000 UTC m=+0.182391075 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 05 08:27:12 np0005546420.localdomain podman[77486]: 2025-12-05 08:27:12.686471513 +0000 UTC m=+0.251196781 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true)
Dec 05 08:27:12 np0005546420.localdomain podman[77486]: 2025-12-05 08:27:12.746557545 +0000 UTC m=+0.311282783 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 05 08:27:13 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:27:13 np0005546420.localdomain podman[77485]: 2025-12-05 08:27:13.1951855 +0000 UTC m=+0.765068840 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container)
Dec 05 08:27:13 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:27:13 np0005546420.localdomain sudo[77554]: pam_unix(sudo:session): session closed for user root
Dec 05 08:27:13 np0005546420.localdomain sudo[77640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:27:13 np0005546420.localdomain sudo[77640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:27:13 np0005546420.localdomain sudo[77640]: pam_unix(sudo:session): session closed for user root
Dec 05 08:27:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:27:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:27:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:27:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:27:18 np0005546420.localdomain podman[77657]: 2025-12-05 08:27:18.524633009 +0000 UTC m=+0.091826318 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, container_name=iscsid, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible)
Dec 05 08:27:18 np0005546420.localdomain podman[77657]: 2025-12-05 08:27:18.564668493 +0000 UTC m=+0.131861772 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, tcib_managed=true, container_name=iscsid, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 05 08:27:18 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:27:18 np0005546420.localdomain podman[77655]: 2025-12-05 08:27:18.568348259 +0000 UTC m=+0.138576773 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:27:18 np0005546420.localdomain podman[77656]: 2025-12-05 08:27:18.628694479 +0000 UTC m=+0.198694896 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, architecture=x86_64)
Dec 05 08:27:18 np0005546420.localdomain podman[77656]: 2025-12-05 08:27:18.641383057 +0000 UTC m=+0.211383514 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, container_name=collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-collectd-container)
Dec 05 08:27:18 np0005546420.localdomain podman[77655]: 2025-12-05 08:27:18.656526651 +0000 UTC m=+0.226755165 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1)
Dec 05 08:27:18 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:27:18 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:27:18 np0005546420.localdomain podman[77658]: 2025-12-05 08:27:18.729073404 +0000 UTC m=+0.290264365 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 08:27:18 np0005546420.localdomain podman[77658]: 2025-12-05 08:27:18.802483844 +0000 UTC m=+0.363674785 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent)
Dec 05 08:27:18 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:27:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:27:31 np0005546420.localdomain podman[77741]: 2025-12-05 08:27:31.514921193 +0000 UTC m=+0.094788270 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, config_id=tripleo_step1, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 05 08:27:31 np0005546420.localdomain podman[77741]: 2025-12-05 08:27:31.7190855 +0000 UTC m=+0.298952567 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git)
Dec 05 08:27:31 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:27:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:27:40 np0005546420.localdomain systemd[1]: tmp-crun.7XciEr.mount: Deactivated successfully.
Dec 05 08:27:40 np0005546420.localdomain podman[77769]: 2025-12-05 08:27:40.512497909 +0000 UTC m=+0.088039679 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git)
Dec 05 08:27:40 np0005546420.localdomain podman[77769]: 2025-12-05 08:27:40.563287071 +0000 UTC m=+0.138828871 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1)
Dec 05 08:27:40 np0005546420.localdomain podman[77769]: unhealthy
Dec 05 08:27:40 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:27:40 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 08:27:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:27:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:27:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:27:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:27:43 np0005546420.localdomain podman[77791]: 2025-12-05 08:27:43.529128458 +0000 UTC m=+0.094580014 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible)
Dec 05 08:27:43 np0005546420.localdomain podman[77789]: 2025-12-05 08:27:43.574783178 +0000 UTC m=+0.145955023 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z)
Dec 05 08:27:43 np0005546420.localdomain podman[77789]: 2025-12-05 08:27:43.611465077 +0000 UTC m=+0.182637002 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=logrotate_crond, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:27:43 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:27:43 np0005546420.localdomain systemd[1]: tmp-crun.nV7jXx.mount: Deactivated successfully.
Dec 05 08:27:43 np0005546420.localdomain podman[77790]: 2025-12-05 08:27:43.662011951 +0000 UTC m=+0.232185005 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:27:43 np0005546420.localdomain podman[77790]: 2025-12-05 08:27:43.715629841 +0000 UTC m=+0.285802925 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:27:43 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:27:43 np0005546420.localdomain podman[77792]: 2025-12-05 08:27:43.733837641 +0000 UTC m=+0.296155299 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:27:43 np0005546420.localdomain podman[77792]: 2025-12-05 08:27:43.789348551 +0000 UTC m=+0.351666129 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:27:43 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:27:43 np0005546420.localdomain podman[77791]: 2025-12-05 08:27:43.89436703 +0000 UTC m=+0.459818596 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64)
Dec 05 08:27:43 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:27:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:27:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:27:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:27:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:27:49 np0005546420.localdomain systemd[1]: tmp-crun.pqETiq.mount: Deactivated successfully.
Dec 05 08:27:49 np0005546420.localdomain podman[77883]: 2025-12-05 08:27:49.528536184 +0000 UTC m=+0.101911653 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:27:49 np0005546420.localdomain podman[77883]: 2025-12-05 08:27:49.570609492 +0000 UTC m=+0.143984931 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, release=1761123044)
Dec 05 08:27:49 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:27:49 np0005546420.localdomain podman[77882]: 2025-12-05 08:27:49.624992537 +0000 UTC m=+0.200660688 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12)
Dec 05 08:27:49 np0005546420.localdomain podman[77884]: 2025-12-05 08:27:49.58009535 +0000 UTC m=+0.149779024 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, version=17.1.12)
Dec 05 08:27:49 np0005546420.localdomain podman[77884]: 2025-12-05 08:27:49.663256645 +0000 UTC m=+0.232940289 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 08:27:49 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:27:49 np0005546420.localdomain podman[77885]: 2025-12-05 08:27:49.681244819 +0000 UTC m=+0.246510504 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=ovn_metadata_agent, architecture=x86_64, release=1761123044, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:27:49 np0005546420.localdomain podman[77882]: 2025-12-05 08:27:49.701785682 +0000 UTC m=+0.277453843 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:27:49 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:27:49 np0005546420.localdomain podman[77885]: 2025-12-05 08:27:49.72594445 +0000 UTC m=+0.291210115 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_metadata_agent)
Dec 05 08:27:49 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:28:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:28:02 np0005546420.localdomain podman[77968]: 2025-12-05 08:28:02.514072094 +0000 UTC m=+0.093149960 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true)
Dec 05 08:28:02 np0005546420.localdomain podman[77968]: 2025-12-05 08:28:02.717421744 +0000 UTC m=+0.296499610 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:28:02 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:28:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:28:11 np0005546420.localdomain systemd[1]: tmp-crun.1xeJPC.mount: Deactivated successfully.
Dec 05 08:28:11 np0005546420.localdomain podman[77997]: 2025-12-05 08:28:11.501736851 +0000 UTC m=+0.080886345 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Dec 05 08:28:11 np0005546420.localdomain podman[77997]: 2025-12-05 08:28:11.56139195 +0000 UTC m=+0.140541504 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc.)
Dec 05 08:28:11 np0005546420.localdomain podman[77997]: unhealthy
Dec 05 08:28:11 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:28:11 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 08:28:14 np0005546420.localdomain sudo[78020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:28:14 np0005546420.localdomain sudo[78020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:28:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:28:14 np0005546420.localdomain sudo[78020]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:28:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:28:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:28:14 np0005546420.localdomain sudo[78056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:28:14 np0005546420.localdomain sudo[78056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:28:14 np0005546420.localdomain podman[78037]: 2025-12-05 08:28:14.275166979 +0000 UTC m=+0.093966575 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=nova_migration_target, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:28:14 np0005546420.localdomain systemd[1]: tmp-crun.JY3gDZ.mount: Deactivated successfully.
Dec 05 08:28:14 np0005546420.localdomain podman[78036]: 2025-12-05 08:28:14.385820016 +0000 UTC m=+0.210428814 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 05 08:28:14 np0005546420.localdomain podman[78038]: 2025-12-05 08:28:14.344078458 +0000 UTC m=+0.158899749 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, tcib_managed=true, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4)
Dec 05 08:28:14 np0005546420.localdomain podman[78038]: 2025-12-05 08:28:14.427493291 +0000 UTC m=+0.242314562 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:28:14 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:28:14 np0005546420.localdomain podman[78035]: 2025-12-05 08:28:14.439623682 +0000 UTC m=+0.264702764 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044)
Dec 05 08:28:14 np0005546420.localdomain podman[78036]: 2025-12-05 08:28:14.450807922 +0000 UTC m=+0.275416700 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:28:14 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:28:14 np0005546420.localdomain podman[78035]: 2025-12-05 08:28:14.474332139 +0000 UTC m=+0.299411181 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.)
Dec 05 08:28:14 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:28:14 np0005546420.localdomain podman[78037]: 2025-12-05 08:28:14.678538507 +0000 UTC m=+0.497338153 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:28:14 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:28:14 np0005546420.localdomain sudo[78056]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:15 np0005546420.localdomain systemd[1]: tmp-crun.C15YOD.mount: Deactivated successfully.
Dec 05 08:28:15 np0005546420.localdomain sudo[78177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:28:15 np0005546420.localdomain sudo[78177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:28:15 np0005546420.localdomain sudo[78177]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:18 np0005546420.localdomain sshd[78192]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:28:18 np0005546420.localdomain sshd[78193]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:28:18 np0005546420.localdomain sshd[78193]: error: kex_exchange_identification: read: Connection reset by peer
Dec 05 08:28:18 np0005546420.localdomain sshd[78193]: Connection reset by 45.140.17.97 port 3197
Dec 05 08:28:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:28:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:28:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:28:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:28:20 np0005546420.localdomain podman[78194]: 2025-12-05 08:28:20.523261098 +0000 UTC m=+0.092369896 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=ovn_controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:28:20 np0005546420.localdomain podman[78196]: 2025-12-05 08:28:20.582490483 +0000 UTC m=+0.143094374 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:28:20 np0005546420.localdomain podman[78196]: 2025-12-05 08:28:20.592460176 +0000 UTC m=+0.153064047 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, config_id=tripleo_step3, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:28:20 np0005546420.localdomain podman[78197]: 2025-12-05 08:28:20.637198978 +0000 UTC m=+0.194028280 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com)
Dec 05 08:28:20 np0005546420.localdomain podman[78197]: 2025-12-05 08:28:20.677782759 +0000 UTC m=+0.234612031 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Dec 05 08:28:20 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:28:20 np0005546420.localdomain podman[78194]: 2025-12-05 08:28:20.702539665 +0000 UTC m=+0.271648483 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1)
Dec 05 08:28:20 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:28:20 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:28:20 np0005546420.localdomain podman[78195]: 2025-12-05 08:28:20.681617889 +0000 UTC m=+0.248667362 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, tcib_managed=true, container_name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=)
Dec 05 08:28:20 np0005546420.localdomain podman[78195]: 2025-12-05 08:28:20.770482973 +0000 UTC m=+0.337532446 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1761123044, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd)
Dec 05 08:28:20 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:28:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:28:33 np0005546420.localdomain podman[78281]: 2025-12-05 08:28:33.510013074 +0000 UTC m=+0.088401701 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, config_id=tripleo_step1, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team)
Dec 05 08:28:33 np0005546420.localdomain podman[78281]: 2025-12-05 08:28:33.70109631 +0000 UTC m=+0.279484927 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:28:33 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:28:39 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:28:39 np0005546420.localdomain recover_tripleo_nova_virtqemud[78337]: 62579
Dec 05 08:28:39 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:28:39 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:28:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:28:42 np0005546420.localdomain podman[78402]: 2025-12-05 08:28:42.496307227 +0000 UTC m=+0.073009158 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, release=1761123044, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, tcib_managed=true)
Dec 05 08:28:42 np0005546420.localdomain podman[78402]: 2025-12-05 08:28:42.522468057 +0000 UTC m=+0.099170008 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, batch=17.1_20251118.1)
Dec 05 08:28:42 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:28:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:28:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:28:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:28:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:28:45 np0005546420.localdomain systemd[1]: tmp-crun.qAp93a.mount: Deactivated successfully.
Dec 05 08:28:45 np0005546420.localdomain podman[78428]: 2025-12-05 08:28:45.520478822 +0000 UTC m=+0.098242179 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 05 08:28:45 np0005546420.localdomain podman[78430]: 2025-12-05 08:28:45.560160476 +0000 UTC m=+0.131304235 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public)
Dec 05 08:28:45 np0005546420.localdomain podman[78428]: 2025-12-05 08:28:45.579589194 +0000 UTC m=+0.157352511 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible)
Dec 05 08:28:45 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:28:45 np0005546420.localdomain podman[78429]: 2025-12-05 08:28:45.630358585 +0000 UTC m=+0.202564878 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4)
Dec 05 08:28:45 np0005546420.localdomain podman[78436]: 2025-12-05 08:28:45.726024532 +0000 UTC m=+0.291373469 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, container_name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z)
Dec 05 08:28:45 np0005546420.localdomain podman[78429]: 2025-12-05 08:28:45.748684792 +0000 UTC m=+0.320891105 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi)
Dec 05 08:28:45 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:28:45 np0005546420.localdomain podman[78436]: 2025-12-05 08:28:45.786561378 +0000 UTC m=+0.351910335 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:28:45 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:28:45 np0005546420.localdomain podman[78430]: 2025-12-05 08:28:45.98737252 +0000 UTC m=+0.558516299 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64)
Dec 05 08:28:45 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:28:49 np0005546420.localdomain systemd[1]: libpod-e5d25541b432389fdb277c313ab05c9723bca2dd1ed2b9f2d57efa9e4e871a15.scope: Deactivated successfully.
Dec 05 08:28:49 np0005546420.localdomain podman[78522]: 2025-12-05 08:28:49.885827185 +0000 UTC m=+0.059535257 container died e5d25541b432389fdb277c313ab05c9723bca2dd1ed2b9f2d57efa9e4e871a15 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_wait_for_compute_service)
Dec 05 08:28:49 np0005546420.localdomain systemd[1]: tmp-crun.leYwBc.mount: Deactivated successfully.
Dec 05 08:28:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5d25541b432389fdb277c313ab05c9723bca2dd1ed2b9f2d57efa9e4e871a15-userdata-shm.mount: Deactivated successfully.
Dec 05 08:28:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-28babb087ffe0501289e9c462f881c66803c6126daf8f53cd6e97c97c184b295-merged.mount: Deactivated successfully.
Dec 05 08:28:49 np0005546420.localdomain podman[78522]: 2025-12-05 08:28:49.925079695 +0000 UTC m=+0.098787717 container cleanup e5d25541b432389fdb277c313ab05c9723bca2dd1ed2b9f2d57efa9e4e871a15 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_wait_for_compute_service, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, architecture=x86_64, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:28:49 np0005546420.localdomain systemd[1]: libpod-conmon-e5d25541b432389fdb277c313ab05c9723bca2dd1ed2b9f2d57efa9e4e871a15.scope: Deactivated successfully.
Dec 05 08:28:49 np0005546420.localdomain python3[76524]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=ac0f5be6f71e6f8c16cd05155c4b5429 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 05 08:28:50 np0005546420.localdomain sudo[76522]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:50 np0005546420.localdomain sudo[78573]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jpfmnodehkmlqxbuobnwipfwncjdyhwd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:28:50 np0005546420.localdomain sudo[78573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:50 np0005546420.localdomain python3[78575]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:28:50 np0005546420.localdomain sudo[78573]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:50 np0005546420.localdomain sudo[78589]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-owjmuezloafptoqialqayjisdpiwowir ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:28:50 np0005546420.localdomain sudo[78589]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:50 np0005546420.localdomain python3[78591]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 08:28:50 np0005546420.localdomain sudo[78589]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:51 np0005546420.localdomain sudo[78650]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djeerattpzqskhhibamczuojyduevpoo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:28:51 np0005546420.localdomain sudo[78650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:28:51 np0005546420.localdomain podman[78656]: 2025-12-05 08:28:51.341476259 +0000 UTC m=+0.078408007 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, version=17.1.12, vendor=Red Hat, Inc.)
Dec 05 08:28:51 np0005546420.localdomain podman[78656]: 2025-12-05 08:28:51.374904207 +0000 UTC m=+0.111835785 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:28:51 np0005546420.localdomain python3[78652]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764923330.8080542-118482-256741692701861/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: tmp-crun.Ek0Dis.mount: Deactivated successfully.
Dec 05 08:28:51 np0005546420.localdomain podman[78654]: 2025-12-05 08:28:51.407852369 +0000 UTC m=+0.150696202 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, name=rhosp17/openstack-collectd, vcs-type=git)
Dec 05 08:28:51 np0005546420.localdomain sudo[78650]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:51 np0005546420.localdomain podman[78655]: 2025-12-05 08:28:51.451126614 +0000 UTC m=+0.188621060 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3)
Dec 05 08:28:51 np0005546420.localdomain podman[78655]: 2025-12-05 08:28:51.489325141 +0000 UTC m=+0.226819577 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=)
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:28:51 np0005546420.localdomain podman[78653]: 2025-12-05 08:28:51.498837689 +0000 UTC m=+0.243567322 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, container_name=ovn_controller, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 05 08:28:51 np0005546420.localdomain sudo[78741]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ferropdcseriigkmtixqlhiupdwtblzb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:28:51 np0005546420.localdomain podman[78653]: 2025-12-05 08:28:51.517165764 +0000 UTC m=+0.261895337 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4)
Dec 05 08:28:51 np0005546420.localdomain sudo[78741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:28:51 np0005546420.localdomain podman[78654]: 2025-12-05 08:28:51.574094317 +0000 UTC m=+0.316938150 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd)
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:28:51 np0005546420.localdomain python3[78754]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:28:51 np0005546420.localdomain systemd-rc-local-generator[78777]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:28:51 np0005546420.localdomain systemd-sysv-generator[78782]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:28:51 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:28:52 np0005546420.localdomain sudo[78741]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:52 np0005546420.localdomain sudo[78805]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jztogikybjchgultoqieqjrapcjqfhzg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 05 08:28:52 np0005546420.localdomain sudo[78805]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:52 np0005546420.localdomain python3[78807]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 08:28:53 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:28:53 np0005546420.localdomain systemd-sysv-generator[78837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:28:53 np0005546420.localdomain systemd-rc-local-generator[78832]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:28:53 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:28:53 np0005546420.localdomain systemd[1]: Starting nova_compute container...
Dec 05 08:28:53 np0005546420.localdomain tripleo-start-podman-container[78846]: Creating additional drop-in dependency for "nova_compute" (ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e)
Dec 05 08:28:53 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 08:28:53 np0005546420.localdomain systemd-rc-local-generator[78901]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 08:28:53 np0005546420.localdomain systemd-sysv-generator[78908]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 08:28:53 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 08:28:53 np0005546420.localdomain systemd[1]: Started nova_compute container.
Dec 05 08:28:53 np0005546420.localdomain sudo[78805]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:54 np0005546420.localdomain sudo[78940]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixrgczaqivbhgvyrumdmigfneeazapbw ; /usr/bin/python3
Dec 05 08:28:54 np0005546420.localdomain sudo[78940]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:54 np0005546420.localdomain python3[78942]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:28:54 np0005546420.localdomain sudo[78940]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:54 np0005546420.localdomain sudo[78988]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wusjbmbdktwxpxyzuoilmijpukejhzga ; /usr/bin/python3
Dec 05 08:28:54 np0005546420.localdomain sudo[78988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:54 np0005546420.localdomain sudo[78988]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:55 np0005546420.localdomain sudo[79031]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bscinamtlehczgavhyiihritfbpvdimw ; /usr/bin/python3
Dec 05 08:28:55 np0005546420.localdomain sudo[79031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:55 np0005546420.localdomain sudo[79031]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:55 np0005546420.localdomain sudo[79061]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cknxtobguemejzwarsyqccrawcnpwqop ; /usr/bin/python3
Dec 05 08:28:55 np0005546420.localdomain sudo[79061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:55 np0005546420.localdomain python3[79063]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005546420 step=5 update_config_hash_only=False
Dec 05 08:28:55 np0005546420.localdomain sudo[79061]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:56 np0005546420.localdomain sudo[79077]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pelynkntliburytcuytuntdzbwzrlgke ; /usr/bin/python3
Dec 05 08:28:56 np0005546420.localdomain sudo[79077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:56 np0005546420.localdomain python3[79079]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 08:28:56 np0005546420.localdomain sudo[79077]: pam_unix(sudo:session): session closed for user root
Dec 05 08:28:56 np0005546420.localdomain sudo[79093]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iofwdbcbmxnzjtmjeehjetpgkkruslpk ; /usr/bin/python3
Dec 05 08:28:56 np0005546420.localdomain sudo[79093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 05 08:28:56 np0005546420.localdomain python3[79095]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 05 08:28:56 np0005546420.localdomain sudo[79093]: pam_unix(sudo:session): session closed for user root
Dec 05 08:29:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:29:04 np0005546420.localdomain podman[79096]: 2025-12-05 08:29:04.526168786 +0000 UTC m=+0.092931142 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:29:04 np0005546420.localdomain podman[79096]: 2025-12-05 08:29:04.753760376 +0000 UTC m=+0.320522702 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=metrics_qdr)
Dec 05 08:29:04 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:29:06 np0005546420.localdomain sshd[79126]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:29:07 np0005546420.localdomain sshd[79126]: Received disconnect from 195.250.72.168 port 42512:11: Bye Bye [preauth]
Dec 05 08:29:07 np0005546420.localdomain sshd[79126]: Disconnected from authenticating user root 195.250.72.168 port 42512 [preauth]
Dec 05 08:29:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:29:13 np0005546420.localdomain podman[79128]: 2025-12-05 08:29:13.525190018 +0000 UTC m=+0.102605166 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=nova_compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044)
Dec 05 08:29:13 np0005546420.localdomain podman[79128]: 2025-12-05 08:29:13.580899384 +0000 UTC m=+0.158314562 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Dec 05 08:29:13 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:29:15 np0005546420.localdomain sudo[79155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:29:15 np0005546420.localdomain sudo[79155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:29:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:29:15 np0005546420.localdomain sudo[79155]: pam_unix(sudo:session): session closed for user root
Dec 05 08:29:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:29:15 np0005546420.localdomain sudo[79182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:29:15 np0005546420.localdomain sudo[79182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:29:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:29:15 np0005546420.localdomain podman[79171]: 2025-12-05 08:29:15.871035471 +0000 UTC m=+0.081438102 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team)
Dec 05 08:29:15 np0005546420.localdomain podman[79171]: 2025-12-05 08:29:15.942926804 +0000 UTC m=+0.153329465 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:29:15 np0005546420.localdomain systemd[1]: tmp-crun.Pg3RpI.mount: Deactivated successfully.
Dec 05 08:29:15 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:29:15 np0005546420.localdomain podman[79213]: 2025-12-05 08:29:15.963417245 +0000 UTC m=+0.086752839 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 05 08:29:15 np0005546420.localdomain podman[79170]: 2025-12-05 08:29:15.923904637 +0000 UTC m=+0.136309411 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, distribution-scope=public, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true)
Dec 05 08:29:16 np0005546420.localdomain podman[79170]: 2025-12-05 08:29:16.003571864 +0000 UTC m=+0.215976588 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=logrotate_crond, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=)
Dec 05 08:29:16 np0005546420.localdomain podman[79213]: 2025-12-05 08:29:16.010932364 +0000 UTC m=+0.134267918 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:29:16 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:29:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:29:16 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:29:16 np0005546420.localdomain podman[79256]: 2025-12-05 08:29:16.085869852 +0000 UTC m=+0.053821247 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, release=1761123044)
Dec 05 08:29:16 np0005546420.localdomain podman[79256]: 2025-12-05 08:29:16.485471571 +0000 UTC m=+0.453423016 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:29:16 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:29:16 np0005546420.localdomain sudo[79182]: pam_unix(sudo:session): session closed for user root
Dec 05 08:29:17 np0005546420.localdomain sudo[79311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:29:17 np0005546420.localdomain sudo[79311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:29:17 np0005546420.localdomain sudo[79311]: pam_unix(sudo:session): session closed for user root
Dec 05 08:29:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:29:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:29:21 np0005546420.localdomain podman[79326]: 2025-12-05 08:29:21.519175395 +0000 UTC m=+0.094746723 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:29:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:29:21 np0005546420.localdomain podman[79326]: 2025-12-05 08:29:21.593988387 +0000 UTC m=+0.169559715 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:29:21 np0005546420.localdomain systemd[1]: tmp-crun.QE5QSb.mount: Deactivated successfully.
Dec 05 08:29:21 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:29:21 np0005546420.localdomain podman[79344]: 2025-12-05 08:29:21.618571161 +0000 UTC m=+0.091046778 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z)
Dec 05 08:29:21 np0005546420.localdomain podman[79344]: 2025-12-05 08:29:21.628166739 +0000 UTC m=+0.100642396 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:29:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:29:21 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:29:21 np0005546420.localdomain podman[79361]: 2025-12-05 08:29:21.694675153 +0000 UTC m=+0.106963761 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ovn_controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:29:21 np0005546420.localdomain podman[79361]: 2025-12-05 08:29:21.717220993 +0000 UTC m=+0.129509611 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ovn_controller, version=17.1.12, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:29:21 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:29:21 np0005546420.localdomain podman[79380]: 2025-12-05 08:29:21.78701416 +0000 UTC m=+0.143674701 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, container_name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z)
Dec 05 08:29:21 np0005546420.localdomain podman[79380]: 2025-12-05 08:29:21.824267478 +0000 UTC m=+0.180927989 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, vcs-type=git, name=rhosp17/openstack-collectd, version=17.1.12, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044)
Dec 05 08:29:21 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:29:22 np0005546420.localdomain sshd[79411]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:29:22 np0005546420.localdomain sshd[79411]: Accepted publickey for zuul from 192.168.122.100 port 44678 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 08:29:22 np0005546420.localdomain systemd-logind[762]: New session 33 of user zuul.
Dec 05 08:29:22 np0005546420.localdomain systemd[1]: Started Session 33 of User zuul.
Dec 05 08:29:22 np0005546420.localdomain sshd[79411]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 08:29:23 np0005546420.localdomain sudo[79518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eivykexsxsidsasqgtbfnnovnpzpqgru ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764923362.5511243-40749-194482691874492/AnsiballZ_setup.py
Dec 05 08:29:23 np0005546420.localdomain sudo[79518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 08:29:23 np0005546420.localdomain python3[79520]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 08:29:25 np0005546420.localdomain sudo[79518]: pam_unix(sudo:session): session closed for user root
Dec 05 08:29:30 np0005546420.localdomain sudo[79781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yiouskuturrqzdajvosrvnghppjrudwr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764923370.3809528-40840-59687605411859/AnsiballZ_dnf.py
Dec 05 08:29:30 np0005546420.localdomain sudo[79781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 08:29:31 np0005546420.localdomain python3[79783]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None
Dec 05 08:29:33 np0005546420.localdomain sudo[79781]: pam_unix(sudo:session): session closed for user root
Dec 05 08:29:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:29:35 np0005546420.localdomain podman[79800]: 2025-12-05 08:29:35.503709495 +0000 UTC m=+0.080127219 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step1)
Dec 05 08:29:35 np0005546420.localdomain podman[79800]: 2025-12-05 08:29:35.714245172 +0000 UTC m=+0.290662906 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, container_name=metrics_qdr, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:29:35 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:29:37 np0005546420.localdomain sudo[79903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gwvqcpauwqaimrtotgqjaqtfgcmxrecg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764923377.560901-40896-3528958553568/AnsiballZ_iptables.py
Dec 05 08:29:37 np0005546420.localdomain sudo[79903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 08:29:38 np0005546420.localdomain python3[79905]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None
Dec 05 08:29:38 np0005546420.localdomain kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 05 08:29:38 np0005546420.localdomain systemd-journald[48245]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation.
Dec 05 08:29:38 np0005546420.localdomain systemd-journald[48245]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 05 08:29:38 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 08:29:38 np0005546420.localdomain sudo[79903]: pam_unix(sudo:session): session closed for user root
Dec 05 08:29:38 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 08:29:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:29:44 np0005546420.localdomain podman[79974]: 2025-12-05 08:29:44.564061809 +0000 UTC m=+0.141307508 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public)
Dec 05 08:29:44 np0005546420.localdomain podman[79974]: 2025-12-05 08:29:44.593474642 +0000 UTC m=+0.170720391 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public)
Dec 05 08:29:44 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:29:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:29:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:29:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:29:46 np0005546420.localdomain podman[80000]: 2025-12-05 08:29:46.501924026 +0000 UTC m=+0.082064990 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=logrotate_crond, architecture=x86_64, build-date=2025-11-18T22:49:32Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., distribution-scope=public)
Dec 05 08:29:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:29:46 np0005546420.localdomain podman[80000]: 2025-12-05 08:29:46.517704345 +0000 UTC m=+0.097845309 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:29:46 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:29:46 np0005546420.localdomain podman[80041]: 2025-12-05 08:29:46.592323642 +0000 UTC m=+0.069239901 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., container_name=nova_migration_target)
Dec 05 08:29:46 np0005546420.localdomain podman[80001]: 2025-12-05 08:29:46.648993081 +0000 UTC m=+0.224771779 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044)
Dec 05 08:29:46 np0005546420.localdomain podman[80002]: 2025-12-05 08:29:46.720158321 +0000 UTC m=+0.292456501 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:29:46 np0005546420.localdomain podman[80001]: 2025-12-05 08:29:46.736419645 +0000 UTC m=+0.312198343 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:29:46 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:29:46 np0005546420.localdomain podman[80002]: 2025-12-05 08:29:46.778665507 +0000 UTC m=+0.350963677 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:29:46 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:29:46 np0005546420.localdomain podman[80041]: 2025-12-05 08:29:46.935042192 +0000 UTC m=+0.411958451 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:29:46 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:29:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:29:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:29:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:29:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:29:52 np0005546420.localdomain podman[80096]: 2025-12-05 08:29:52.508274279 +0000 UTC m=+0.079202041 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:29:52 np0005546420.localdomain podman[80098]: 2025-12-05 08:29:52.532044016 +0000 UTC m=+0.096377003 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 08:29:52 np0005546420.localdomain podman[80096]: 2025-12-05 08:29:52.546360671 +0000 UTC m=+0.117288443 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1761123044, container_name=collectd, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, vcs-type=git, batch=17.1_20251118.1)
Dec 05 08:29:52 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:29:52 np0005546420.localdomain podman[80098]: 2025-12-05 08:29:52.578360015 +0000 UTC m=+0.142693002 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:29:52 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:29:52 np0005546420.localdomain podman[80097]: 2025-12-05 08:29:52.622180345 +0000 UTC m=+0.189603568 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:29:52 np0005546420.localdomain podman[80097]: 2025-12-05 08:29:52.664719136 +0000 UTC m=+0.232142349 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:29:52 np0005546420.localdomain podman[80095]: 2025-12-05 08:29:52.673938232 +0000 UTC m=+0.246435902 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ovn-controller-container)
Dec 05 08:29:52 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:29:52 np0005546420.localdomain podman[80095]: 2025-12-05 08:29:52.725385749 +0000 UTC m=+0.297883399 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:29:52 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:29:53 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:29:53 np0005546420.localdomain recover_tripleo_nova_virtqemud[80180]: 62579
Dec 05 08:29:53 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:29:53 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:30:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:30:06 np0005546420.localdomain systemd[1]: tmp-crun.IIPah3.mount: Deactivated successfully.
Dec 05 08:30:06 np0005546420.localdomain podman[80181]: 2025-12-05 08:30:06.509075601 +0000 UTC m=+0.088681974 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:30:06 np0005546420.localdomain podman[80181]: 2025-12-05 08:30:06.709421862 +0000 UTC m=+0.289028275 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:30:06 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:30:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:30:15 np0005546420.localdomain podman[80211]: 2025-12-05 08:30:15.506069768 +0000 UTC m=+0.085135704 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:30:15 np0005546420.localdomain podman[80211]: 2025-12-05 08:30:15.533669365 +0000 UTC m=+0.112735291 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 08:30:15 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:30:17 np0005546420.localdomain sudo[80238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:30:17 np0005546420.localdomain sudo[80238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:30:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:30:17 np0005546420.localdomain sudo[80238]: pam_unix(sudo:session): session closed for user root
Dec 05 08:30:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:30:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:30:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:30:17 np0005546420.localdomain sudo[80268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:30:17 np0005546420.localdomain sudo[80268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:30:17 np0005546420.localdomain systemd[1]: tmp-crun.aj9Jdm.mount: Deactivated successfully.
Dec 05 08:30:17 np0005546420.localdomain podman[80255]: 2025-12-05 08:30:17.395535222 +0000 UTC m=+0.087886500 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:30:17 np0005546420.localdomain podman[80252]: 2025-12-05 08:30:17.42900077 +0000 UTC m=+0.126856409 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:30:17 np0005546420.localdomain podman[80252]: 2025-12-05 08:30:17.43476118 +0000 UTC m=+0.132616829 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:30:17 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:30:17 np0005546420.localdomain podman[80256]: 2025-12-05 08:30:17.503733751 +0000 UTC m=+0.192548189 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible)
Dec 05 08:30:17 np0005546420.localdomain podman[80254]: 2025-12-05 08:30:17.54655291 +0000 UTC m=+0.240103375 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, version=17.1.12, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 08:30:17 np0005546420.localdomain podman[80256]: 2025-12-05 08:30:17.568618876 +0000 UTC m=+0.257433344 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute)
Dec 05 08:30:17 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:30:17 np0005546420.localdomain podman[80254]: 2025-12-05 08:30:17.606468841 +0000 UTC m=+0.300019336 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1)
Dec 05 08:30:17 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:30:17 np0005546420.localdomain podman[80255]: 2025-12-05 08:30:17.747346945 +0000 UTC m=+0.439698273 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container)
Dec 05 08:30:17 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:30:17 np0005546420.localdomain sudo[80268]: pam_unix(sudo:session): session closed for user root
Dec 05 08:30:18 np0005546420.localdomain sudo[80392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:30:18 np0005546420.localdomain sudo[80392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:30:18 np0005546420.localdomain sudo[80392]: pam_unix(sudo:session): session closed for user root
Dec 05 08:30:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:30:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:30:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:30:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:30:23 np0005546420.localdomain podman[80407]: 2025-12-05 08:30:23.493274814 +0000 UTC m=+0.076437714 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:30:23 np0005546420.localdomain podman[80407]: 2025-12-05 08:30:23.547363054 +0000 UTC m=+0.130525954 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vendor=Red Hat, Inc., container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com)
Dec 05 08:30:23 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:30:23 np0005546420.localdomain podman[80409]: 2025-12-05 08:30:23.5620692 +0000 UTC m=+0.140765281 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, distribution-scope=public, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team)
Dec 05 08:30:23 np0005546420.localdomain podman[80408]: 2025-12-05 08:30:23.598105599 +0000 UTC m=+0.177742450 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:30:23 np0005546420.localdomain podman[80408]: 2025-12-05 08:30:23.608215923 +0000 UTC m=+0.187852784 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, distribution-scope=public, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, managed_by=tripleo_ansible)
Dec 05 08:30:23 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:30:23 np0005546420.localdomain podman[80409]: 2025-12-05 08:30:23.625720716 +0000 UTC m=+0.204416797 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com)
Dec 05 08:30:23 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:30:23 np0005546420.localdomain podman[80410]: 2025-12-05 08:30:23.713629846 +0000 UTC m=+0.290226212 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:30:23 np0005546420.localdomain podman[80410]: 2025-12-05 08:30:23.763407532 +0000 UTC m=+0.340003928 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 05 08:30:23 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:30:37 np0005546420.localdomain sshd[79411]: pam_unix(sshd:session): session closed for user zuul
Dec 05 08:30:37 np0005546420.localdomain systemd[1]: session-33.scope: Deactivated successfully.
Dec 05 08:30:37 np0005546420.localdomain systemd[1]: session-33.scope: Consumed 5.781s CPU time.
Dec 05 08:30:37 np0005546420.localdomain systemd-logind[762]: Session 33 logged out. Waiting for processes to exit.
Dec 05 08:30:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:30:37 np0005546420.localdomain systemd-logind[762]: Removed session 33.
Dec 05 08:30:37 np0005546420.localdomain podman[80491]: 2025-12-05 08:30:37.454711869 +0000 UTC m=+0.090369847 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 05 08:30:37 np0005546420.localdomain podman[80491]: 2025-12-05 08:30:37.645489562 +0000 UTC m=+0.281147480 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true)
Dec 05 08:30:37 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:30:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:30:46 np0005546420.localdomain systemd[1]: tmp-crun.nmq2Xr.mount: Deactivated successfully.
Dec 05 08:30:46 np0005546420.localdomain podman[80565]: 2025-12-05 08:30:46.51404684 +0000 UTC m=+0.087591681 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, batch=17.1_20251118.1)
Dec 05 08:30:46 np0005546420.localdomain podman[80565]: 2025-12-05 08:30:46.567164969 +0000 UTC m=+0.140709840 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, name=rhosp17/openstack-nova-compute)
Dec 05 08:30:46 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:30:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:30:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:30:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:30:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:30:48 np0005546420.localdomain podman[80593]: 2025-12-05 08:30:48.507952117 +0000 UTC m=+0.079772348 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=nova_migration_target, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4)
Dec 05 08:30:48 np0005546420.localdomain podman[80592]: 2025-12-05 08:30:48.567933329 +0000 UTC m=+0.141172914 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=)
Dec 05 08:30:48 np0005546420.localdomain podman[80591]: 2025-12-05 08:30:48.61402994 +0000 UTC m=+0.188724660 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=logrotate_crond, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:30:48 np0005546420.localdomain podman[80592]: 2025-12-05 08:30:48.623418932 +0000 UTC m=+0.196658497 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Dec 05 08:30:48 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:30:48 np0005546420.localdomain podman[80594]: 2025-12-05 08:30:48.68295287 +0000 UTC m=+0.250012093 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z)
Dec 05 08:30:48 np0005546420.localdomain podman[80591]: 2025-12-05 08:30:48.695297183 +0000 UTC m=+0.269991913 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-cron-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z)
Dec 05 08:30:48 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:30:48 np0005546420.localdomain podman[80594]: 2025-12-05 08:30:48.716673757 +0000 UTC m=+0.283732970 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4)
Dec 05 08:30:48 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:30:48 np0005546420.localdomain podman[80593]: 2025-12-05 08:30:48.833271188 +0000 UTC m=+0.405091419 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, release=1761123044, distribution-scope=public, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 05 08:30:48 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:30:53 np0005546420.localdomain sshd[80682]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:30:53 np0005546420.localdomain sshd[80682]: Accepted publickey for zuul from 38.102.83.114 port 54910 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 08:30:53 np0005546420.localdomain systemd-logind[762]: New session 34 of user zuul.
Dec 05 08:30:53 np0005546420.localdomain systemd[1]: Started Session 34 of User zuul.
Dec 05 08:30:53 np0005546420.localdomain sshd[80682]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 08:30:53 np0005546420.localdomain sudo[80699]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ldfffacekytjexgybfsxhjbuocyqbkfb ; /usr/bin/python3
Dec 05 08:30:53 np0005546420.localdomain sudo[80699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 08:30:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:30:53 np0005546420.localdomain podman[80701]: 2025-12-05 08:30:53.697559044 +0000 UTC m=+0.079859730 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, tcib_managed=true)
Dec 05 08:30:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:30:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:30:53 np0005546420.localdomain podman[80701]: 2025-12-05 08:30:53.752321635 +0000 UTC m=+0.134622281 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller)
Dec 05 08:30:53 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:30:53 np0005546420.localdomain podman[80726]: 2025-12-05 08:30:53.812642157 +0000 UTC m=+0.085890677 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-iscsid, release=1761123044, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=iscsid, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:30:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:30:53 np0005546420.localdomain python3[80702]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 08:30:53 np0005546420.localdomain podman[80726]: 2025-12-05 08:30:53.854513127 +0000 UTC m=+0.127761637 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 05 08:30:53 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:30:53 np0005546420.localdomain podman[80724]: 2025-12-05 08:30:53.869608625 +0000 UTC m=+0.147126798 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, vcs-type=git, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:30:53 np0005546420.localdomain podman[80724]: 2025-12-05 08:30:53.88389038 +0000 UTC m=+0.161408543 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, architecture=x86_64, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, version=17.1.12, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true)
Dec 05 08:30:53 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:30:53 np0005546420.localdomain podman[80761]: 2025-12-05 08:30:53.977089003 +0000 UTC m=+0.143466365 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:30:54 np0005546420.localdomain podman[80761]: 2025-12-05 08:30:54.053418112 +0000 UTC m=+0.219795434 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:30:54 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:30:56 np0005546420.localdomain sudo[80699]: pam_unix(sudo:session): session closed for user root
Dec 05 08:31:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:31:08 np0005546420.localdomain podman[80791]: 2025-12-05 08:31:08.530817934 +0000 UTC m=+0.102299437 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com)
Dec 05 08:31:08 np0005546420.localdomain podman[80791]: 2025-12-05 08:31:08.783431717 +0000 UTC m=+0.354913170 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step1, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd)
Dec 05 08:31:08 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:31:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:31:17 np0005546420.localdomain podman[80820]: 2025-12-05 08:31:17.500495233 +0000 UTC m=+0.075783353 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 05 08:31:17 np0005546420.localdomain podman[80820]: 2025-12-05 08:31:17.533500339 +0000 UTC m=+0.108788479 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1)
Dec 05 08:31:17 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:31:18 np0005546420.localdomain sudo[80848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:31:18 np0005546420.localdomain sudo[80848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:31:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:31:18 np0005546420.localdomain sudo[80848]: pam_unix(sudo:session): session closed for user root
Dec 05 08:31:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:31:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:31:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:31:18 np0005546420.localdomain sudo[80872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:31:18 np0005546420.localdomain sudo[80872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:31:18 np0005546420.localdomain systemd[1]: tmp-crun.O6oK3L.mount: Deactivated successfully.
Dec 05 08:31:18 np0005546420.localdomain podman[80864]: 2025-12-05 08:31:18.968157232 +0000 UTC m=+0.104559298 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi)
Dec 05 08:31:18 np0005546420.localdomain podman[80862]: 2025-12-05 08:31:18.944823187 +0000 UTC m=+0.088095586 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.openshift.expose-services=, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, release=1761123044, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:31:19 np0005546420.localdomain podman[80865]: 2025-12-05 08:31:19.017155252 +0000 UTC m=+0.151867255 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public)
Dec 05 08:31:19 np0005546420.localdomain podman[80862]: 2025-12-05 08:31:19.028467804 +0000 UTC m=+0.171740273 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, architecture=x86_64, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc.)
Dec 05 08:31:19 np0005546420.localdomain podman[80864]: 2025-12-05 08:31:19.037459034 +0000 UTC m=+0.173861090 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, release=1761123044, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4)
Dec 05 08:31:19 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:31:19 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:31:19 np0005546420.localdomain podman[80871]: 2025-12-05 08:31:19.112037118 +0000 UTC m=+0.240953232 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team)
Dec 05 08:31:19 np0005546420.localdomain podman[80865]: 2025-12-05 08:31:19.13365232 +0000 UTC m=+0.268364383 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:31:19 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:31:19 np0005546420.localdomain podman[80871]: 2025-12-05 08:31:19.47667985 +0000 UTC m=+0.605596014 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4)
Dec 05 08:31:19 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:31:19 np0005546420.localdomain sudo[80872]: pam_unix(sudo:session): session closed for user root
Dec 05 08:31:19 np0005546420.localdomain systemd[1]: tmp-crun.sc98GH.mount: Deactivated successfully.
Dec 05 08:31:21 np0005546420.localdomain sudo[81009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdoyaxrzbahtmpswerkjmqzozeuiulei ; /usr/bin/python3
Dec 05 08:31:21 np0005546420.localdomain sudo[81009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 08:31:21 np0005546420.localdomain python3[81011]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 08:31:22 np0005546420.localdomain sudo[81013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:31:22 np0005546420.localdomain sudo[81013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:31:22 np0005546420.localdomain sudo[81013]: pam_unix(sudo:session): session closed for user root
Dec 05 08:31:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:31:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:31:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:31:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:31:24 np0005546420.localdomain systemd[1]: tmp-crun.Y5Fd4w.mount: Deactivated successfully.
Dec 05 08:31:24 np0005546420.localdomain podman[81029]: 2025-12-05 08:31:24.533433921 +0000 UTC m=+0.108968433 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true)
Dec 05 08:31:24 np0005546420.localdomain systemd[1]: tmp-crun.Ip1t3w.mount: Deactivated successfully.
Dec 05 08:31:24 np0005546420.localdomain podman[81036]: 2025-12-05 08:31:24.589148991 +0000 UTC m=+0.156494460 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, build-date=2025-11-19T00:14:25Z, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 05 08:31:24 np0005546420.localdomain podman[81028]: 2025-12-05 08:31:24.561153702 +0000 UTC m=+0.139807712 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public)
Dec 05 08:31:24 np0005546420.localdomain podman[81030]: 2025-12-05 08:31:24.625076637 +0000 UTC m=+0.198449913 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:31:24 np0005546420.localdomain podman[81028]: 2025-12-05 08:31:24.646356258 +0000 UTC m=+0.225010338 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Dec 05 08:31:24 np0005546420.localdomain podman[81030]: 2025-12-05 08:31:24.660762835 +0000 UTC m=+0.234136091 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, container_name=iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4)
Dec 05 08:31:24 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:31:24 np0005546420.localdomain podman[81029]: 2025-12-05 08:31:24.670868598 +0000 UTC m=+0.246403110 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12)
Dec 05 08:31:24 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:31:24 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:31:24 np0005546420.localdomain podman[81036]: 2025-12-05 08:31:24.709486158 +0000 UTC m=+0.276831627 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ovn_metadata_agent, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com)
Dec 05 08:31:24 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:31:25 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 08:31:25 np0005546420.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 05 08:31:25 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 08:31:26 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 08:31:26 np0005546420.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 05 08:31:26 np0005546420.localdomain systemd[1]: run-r2699dfcc989043378748f906dda8cc49.service: Deactivated successfully.
Dec 05 08:31:26 np0005546420.localdomain systemd[1]: run-r92745102f82d4df18396f14faf3422ce.service: Deactivated successfully.
Dec 05 08:31:26 np0005546420.localdomain sudo[81009]: pam_unix(sudo:session): session closed for user root
Dec 05 08:31:27 np0005546420.localdomain sshd[81263]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:31:29 np0005546420.localdomain sshd[81263]: Connection reset by authenticating user root 45.135.232.92 port 36684 [preauth]
Dec 05 08:31:29 np0005546420.localdomain sshd[81265]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:31:30 np0005546420.localdomain sshd[81265]: Invalid user Admin from 45.135.232.92 port 36716
Dec 05 08:31:31 np0005546420.localdomain sshd[81265]: Connection reset by invalid user Admin 45.135.232.92 port 36716 [preauth]
Dec 05 08:31:31 np0005546420.localdomain sshd[81267]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:31:36 np0005546420.localdomain sshd[81267]: Connection reset by authenticating user root 45.135.232.92 port 36722 [preauth]
Dec 05 08:31:36 np0005546420.localdomain sshd[81269]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:31:38 np0005546420.localdomain sshd[81269]: Connection reset by authenticating user root 45.135.232.92 port 35332 [preauth]
Dec 05 08:31:38 np0005546420.localdomain sshd[81271]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:31:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:31:39 np0005546420.localdomain podman[81273]: 2025-12-05 08:31:39.508893457 +0000 UTC m=+0.085283438 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:31:39 np0005546420.localdomain podman[81273]: 2025-12-05 08:31:39.700500507 +0000 UTC m=+0.276890478 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z)
Dec 05 08:31:39 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:31:39 np0005546420.localdomain sshd[81271]: Invalid user support from 45.135.232.92 port 35338
Dec 05 08:31:40 np0005546420.localdomain sshd[81271]: Connection reset by invalid user support 45.135.232.92 port 35338 [preauth]
Dec 05 08:31:42 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:31:42 np0005546420.localdomain recover_tripleo_nova_virtqemud[81348]: 62579
Dec 05 08:31:42 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:31:42 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:31:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:31:48 np0005546420.localdomain podman[81349]: 2025-12-05 08:31:48.504745739 +0000 UTC m=+0.085638199 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step5, container_name=nova_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:31:48 np0005546420.localdomain podman[81349]: 2025-12-05 08:31:48.534578645 +0000 UTC m=+0.115471125 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., config_id=tripleo_step5, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 05 08:31:48 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
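
The three podman/systemd entries above form one complete container healthcheck cycle: systemd starts a transient <container-id>.service, that unit runs `podman healthcheck run`, podman emits a health_status event (here healthy) followed by exec_died for the check process, and the transient unit deactivates. A sketch for reproducing the check by hand, assuming the container_name from the event; note the inspect template key varies across podman releases:

    # Exit code 0 means the health command succeeded.
    podman healthcheck run nova_compute; echo "exit=$?"
    # Last recorded status; recent podman exposes .State.Health, while some
    # older releases use .State.Healthcheck instead.
    podman inspect --format '{{.State.Health.Status}}' nova_compute
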
Dec 05 08:31:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:31:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:31:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:31:49 np0005546420.localdomain podman[81374]: 2025-12-05 08:31:49.507453821 +0000 UTC m=+0.085160665 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi)
Dec 05 08:31:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:31:49 np0005546420.localdomain systemd[1]: tmp-crun.WkYcJj.mount: Deactivated successfully.
Dec 05 08:31:49 np0005546420.localdomain podman[81373]: 2025-12-05 08:31:49.56794119 +0000 UTC m=+0.147479741 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=logrotate_crond, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron)
Dec 05 08:31:49 np0005546420.localdomain podman[81374]: 2025-12-05 08:31:49.572567233 +0000 UTC m=+0.150274037 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true)
Dec 05 08:31:49 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:31:49 np0005546420.localdomain podman[81373]: 2025-12-05 08:31:49.652400222 +0000 UTC m=+0.231938773 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true)
Dec 05 08:31:49 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:31:49 np0005546420.localdomain podman[81375]: 2025-12-05 08:31:49.673448846 +0000 UTC m=+0.249259941 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, version=17.1.12, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true)
Dec 05 08:31:49 np0005546420.localdomain podman[81414]: 2025-12-05 08:31:49.635708604 +0000 UTC m=+0.102391611 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible)
Dec 05 08:31:49 np0005546420.localdomain podman[81375]: 2025-12-05 08:31:49.70935906 +0000 UTC m=+0.285170145 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute)
Dec 05 08:31:49 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:31:50 np0005546420.localdomain podman[81414]: 2025-12-05 08:31:50.027439566 +0000 UTC m=+0.494122523 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Dec 05 08:31:50 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
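
The same cycle repeats above for ceilometer_agent_ipmi, logrotate_crond, ceilometer_agent_compute and nova_migration_target. Rather than grepping the journal after the fact, the events can be watched live; a sketch assuming a podman 4.x event filter (older releases only accept coarser filters such as event=exec_died):

    # Stream health events as podman emits them.
    podman events --filter event=health_status
    # Equivalent after-the-fact query against the journal shown here.
    journalctl -o cat _COMM=podman | grep health_status
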
Dec 05 08:31:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:31:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:31:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:31:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:31:55 np0005546420.localdomain podman[81471]: 2025-12-05 08:31:55.49773857 +0000 UTC m=+0.078807629 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:31:55 np0005546420.localdomain podman[81474]: 2025-12-05 08:31:55.515292535 +0000 UTC m=+0.087211969 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, batch=17.1_20251118.1, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:31:55 np0005546420.localdomain podman[81471]: 2025-12-05 08:31:55.553206241 +0000 UTC m=+0.134275310 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64)
Dec 05 08:31:55 np0005546420.localdomain podman[81474]: 2025-12-05 08:31:55.602234324 +0000 UTC m=+0.174153778 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 05 08:31:55 np0005546420.localdomain podman[81472]: 2025-12-05 08:31:55.612378829 +0000 UTC m=+0.189045010 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:51:28Z, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:31:55 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:31:55 np0005546420.localdomain podman[81472]: 2025-12-05 08:31:55.627366845 +0000 UTC m=+0.204033036 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, vcs-type=git)
Dec 05 08:31:55 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:31:55 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:31:55 np0005546420.localdomain podman[81473]: 2025-12-05 08:31:55.579569321 +0000 UTC m=+0.155175930 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64)
Dec 05 08:31:55 np0005546420.localdomain podman[81473]: 2025-12-05 08:31:55.713514609 +0000 UTC m=+0.289121228 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc.)
Dec 05 08:31:55 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
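
Each of these events embeds the container's tripleo_ansible definition as config_data, and its 'healthcheck': {'test': ...} entry is what stands behind podman's --health-cmd. A hypothetical standalone equivalent of the ovn_controller check above (the demo name and the trailing sleep command are invented for illustration; the image reference and the test string come from the log):

    podman run -d --name ovn_controller_demo \
        --health-cmd '/openstack/healthcheck 6642' \
        --health-interval 60s \
        registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 \
        sleep infinity
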
Dec 05 08:32:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:32:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 2400.1 total, 600.0 interval
                                                          Cumulative writes: 5161 writes, 23K keys, 5161 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5161 writes, 538 syncs, 9.59 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
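
The dump covers 2400 s of uptime with a 600 s reporting interval, and the quoted rates are plain ratios of the cumulative counters, e.g.:

    # 5161 WAL writes over 538 syncs, 0.02 GB ingested over 2400 s.
    awk 'BEGIN { printf "writes/sync = %.2f\n", 5161/538 }'        # -> 9.59
    awk 'BEGIN { printf "ingest MB/s = %.2f\n", 0.02*1024/2400 }'  # -> 0.01
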
Dec 05 08:32:06 np0005546420.localdomain sudo[81571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqqqhrmnovmaemwgdvmcectufnldfcqy ; /usr/bin/python3
Dec 05 08:32:06 np0005546420.localdomain sudo[81571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 08:32:07 np0005546420.localdomain python3[81573]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
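
The ansible-ansible.legacy.command entry is a thin wrapper around a shell invocation; unescaped, the command it ran is:

    subscription-manager repos \
        --disable rhel-9-for-x86_64-baseos-eus-rpms \
        --disable rhel-9-for-x86_64-appstream-eus-rpms \
        --disable rhel-9-for-x86_64-highavailability-eus-rpms \
        --disable openstack-17.1-for-rhel-9-x86_64-rpms \
        --disable fast-datapath-for-rhel-9-x86_64-rpms
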
Dec 05 08:32:10 np0005546420.localdomain rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 08:32:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:32:10 np0005546420.localdomain rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
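
The cert_sorter warning says the subscription service's response did not cover locally installed product 479 (the RHEL for x86_64 product certificate), surfacing right after the repo changes above. Two quick ways to inspect that state from the host:

    # Overall entitlement state as subscription-manager sees it.
    subscription-manager status
    # Installed product certificates; file names match product IDs, so
    # product 479 corresponds to 479.pem here.
    ls /etc/pki/product /etc/pki/product-default
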
Dec 05 08:32:10 np0005546420.localdomain podman[81697]: 2025-12-05 08:32:10.514298892 +0000 UTC m=+0.092720500 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=metrics_qdr, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git)
Dec 05 08:32:10 np0005546420.localdomain podman[81697]: 2025-12-05 08:32:10.74869745 +0000 UTC m=+0.327119048 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc.)
Dec 05 08:32:10 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
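
The config_id labels in these events record the TripleO step each container was started at: metrics_qdr at tripleo_step1, collectd and iscsid at tripleo_step3, the agents at tripleo_step4, nova_compute at tripleo_step5. That label makes it easy to list the containers per step:

    for step in tripleo_step1 tripleo_step3 tripleo_step4 tripleo_step5; do
        echo "== $step =="
        podman ps -a --filter "label=config_id=$step" --format '{{.Names}}'
    done
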
Dec 05 08:32:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:32:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 2400.1 total, 600.0 interval
                                                          Cumulative writes: 4290 writes, 19K keys, 4290 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4290 writes, 440 syncs, 9.75 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 08:32:14 np0005546420.localdomain sudo[81571]: pam_unix(sudo:session): session closed for user root
Dec 05 08:32:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:32:19 np0005546420.localdomain podman[81792]: 2025-12-05 08:32:19.506323375 +0000 UTC m=+0.083511204 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step5, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:32:19 np0005546420.localdomain podman[81792]: 2025-12-05 08:32:19.559490336 +0000 UTC m=+0.136678155 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-type=git)
Dec 05 08:32:19 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:32:19 np0005546420.localdomain sshd[81819]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:32:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:32:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:32:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:32:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:32:20 np0005546420.localdomain systemd[1]: tmp-crun.OR5HLt.mount: Deactivated successfully.
Dec 05 08:32:20 np0005546420.localdomain podman[81822]: 2025-12-05 08:32:20.505469186 +0000 UTC m=+0.077653892 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, release=1761123044, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 05 08:32:20 np0005546420.localdomain podman[81823]: 2025-12-05 08:32:20.561277108 +0000 UTC m=+0.127682604 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container)
Dec 05 08:32:20 np0005546420.localdomain podman[81821]: 2025-12-05 08:32:20.624883234 +0000 UTC m=+0.197059679 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-cron, vendor=Red Hat, Inc.)
Dec 05 08:32:20 np0005546420.localdomain podman[81822]: 2025-12-05 08:32:20.641573392 +0000 UTC m=+0.213758128 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:32:20 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:32:20 np0005546420.localdomain podman[81821]: 2025-12-05 08:32:20.659431786 +0000 UTC m=+0.231608271 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:32:20 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:32:20 np0005546420.localdomain podman[81824]: 2025-12-05 08:32:20.542852817 +0000 UTC m=+0.105313501 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12)
Dec 05 08:32:20 np0005546420.localdomain podman[81824]: 2025-12-05 08:32:20.722922077 +0000 UTC m=+0.285382751 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=ceilometer_agent_compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4)
Dec 05 08:32:20 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:32:20 np0005546420.localdomain podman[81823]: 2025-12-05 08:32:20.878470997 +0000 UTC m=+0.444876523 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 05 08:32:20 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:32:22 np0005546420.localdomain sudo[81918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:32:22 np0005546420.localdomain sudo[81918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:32:22 np0005546420.localdomain sudo[81918]: pam_unix(sudo:session): session closed for user root
Dec 05 08:32:22 np0005546420.localdomain sudo[81933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:32:22 np0005546420.localdomain sudo[81933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:32:23 np0005546420.localdomain sudo[81933]: pam_unix(sudo:session): session closed for user root
Dec 05 08:32:23 np0005546420.localdomain sshd[81980]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:32:24 np0005546420.localdomain sudo[81982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:32:24 np0005546420.localdomain sudo[81982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:32:24 np0005546420.localdomain sudo[81982]: pam_unix(sudo:session): session closed for user root
Dec 05 08:32:25 np0005546420.localdomain sshd[81980]: Connection reset by authenticating user root 45.140.17.124 port 27440 [preauth]
Dec 05 08:32:25 np0005546420.localdomain sshd[81997]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:32:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:32:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:32:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:32:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:32:26 np0005546420.localdomain systemd[1]: tmp-crun.pmZkuI.mount: Deactivated successfully.
Dec 05 08:32:26 np0005546420.localdomain podman[81999]: 2025-12-05 08:32:26.537699633 +0000 UTC m=+0.108523351 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:32:26 np0005546420.localdomain podman[82001]: 2025-12-05 08:32:26.588027246 +0000 UTC m=+0.152573989 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-iscsid-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, name=rhosp17/openstack-iscsid, container_name=iscsid, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true)
Dec 05 08:32:26 np0005546420.localdomain podman[82000]: 2025-12-05 08:32:26.637684017 +0000 UTC m=+0.205084489 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd)
Dec 05 08:32:26 np0005546420.localdomain podman[82000]: 2025-12-05 08:32:26.648475732 +0000 UTC m=+0.215876204 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step3, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:32:26 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:32:26 np0005546420.localdomain podman[82001]: 2025-12-05 08:32:26.702653375 +0000 UTC m=+0.267200138 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, container_name=iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1)
Dec 05 08:32:26 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:32:26 np0005546420.localdomain podman[81999]: 2025-12-05 08:32:26.719041853 +0000 UTC m=+0.289865521 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:32:26 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:32:26 np0005546420.localdomain podman[82002]: 2025-12-05 08:32:26.781204863 +0000 UTC m=+0.343283169 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com)
Dec 05 08:32:26 np0005546420.localdomain podman[82002]: 2025-12-05 08:32:26.848566985 +0000 UTC m=+0.410645311 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4)
Dec 05 08:32:26 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:32:27 np0005546420.localdomain sshd[81997]: Invalid user ubuntu from 45.140.17.124 port 27468
Dec 05 08:32:27 np0005546420.localdomain sshd[81997]: Connection reset by invalid user ubuntu 45.140.17.124 port 27468 [preauth]
Dec 05 08:32:27 np0005546420.localdomain systemd[1]: tmp-crun.QWABtB.mount: Deactivated successfully.
Dec 05 08:32:27 np0005546420.localdomain sshd[82087]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:32:29 np0005546420.localdomain sshd[82087]: Invalid user default from 45.140.17.124 port 27480
Dec 05 08:32:29 np0005546420.localdomain sshd[82087]: Connection reset by invalid user default 45.140.17.124 port 27480 [preauth]
Dec 05 08:32:29 np0005546420.localdomain sshd[82089]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:32:31 np0005546420.localdomain sshd[81819]: ssh_dispatch_run_fatal: Connection from 180.184.182.87 port 65374: Connection timed out [preauth]
Dec 05 08:32:32 np0005546420.localdomain sshd[82089]: Connection reset by authenticating user root 45.140.17.124 port 27494 [preauth]
Dec 05 08:32:32 np0005546420.localdomain sshd[82091]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:32:34 np0005546420.localdomain sshd[82091]: Connection reset by authenticating user root 45.140.17.124 port 56344 [preauth]
Dec 05 08:32:36 np0005546420.localdomain sshd[82093]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:32:37 np0005546420.localdomain sshd[82093]: Received disconnect from 195.250.72.168 port 58814:11: Bye Bye [preauth]
Dec 05 08:32:37 np0005546420.localdomain sshd[82093]: Disconnected from authenticating user root 195.250.72.168 port 58814 [preauth]
Dec 05 08:32:38 np0005546420.localdomain sshd[82095]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:32:39 np0005546420.localdomain sshd[82095]: Received disconnect from 93.157.248.178 port 57494:11: Bye Bye [preauth]
Dec 05 08:32:39 np0005546420.localdomain sshd[82095]: Disconnected from authenticating user root 93.157.248.178 port 57494 [preauth]
Dec 05 08:32:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:32:41 np0005546420.localdomain systemd[1]: tmp-crun.PHMPr3.mount: Deactivated successfully.
Dec 05 08:32:41 np0005546420.localdomain podman[82142]: 2025-12-05 08:32:41.512208638 +0000 UTC m=+0.089473649 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, config_id=tripleo_step1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 05 08:32:41 np0005546420.localdomain podman[82142]: 2025-12-05 08:32:41.746593495 +0000 UTC m=+0.323858526 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64)
Dec 05 08:32:41 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
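[annotation] Lines 08:32:41 above show one complete healthcheck cycle: systemd starts a transient unit running "/usr/bin/podman healthcheck run <container-id>", podman emits a health_status event for the container (here metrics_qdr, healthy), then an exec_died event when the probe process exits, and the transient unit deactivates. A small sketch for pulling the interesting fields out of one such event line; the field layout is assumed from these logs, not from a documented podman format:

    # Sketch: extract event kind, container id, name, and health status
    # from a podman journal event like the ones above.
    import re

    def parse_health(line):
        event = re.search(r"container (health_status|exec_died) ([0-9a-f]{64})", line)
        if not event:
            return None
        name = re.search(r"\bname=([A-Za-z0-9_./-]+)", line)      # first name= wins
        status = re.search(r"\bhealth_status=(\w+)", line)
        return {
            "event": event.group(1),
            "container_id": event.group(2),
            "name": name.group(1) if name else None,
            "health_status": status.group(1) if status else None,
        }

Applied to the health_status line above, this yields name "metrics_qdr" and health_status "healthy"; the \b guard keeps it from matching container_name= instead.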
Dec 05 08:32:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:32:50 np0005546420.localdomain podman[82172]: 2025-12-05 08:32:50.511042071 +0000 UTC m=+0.086150815 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, vcs-type=git, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 05 08:32:50 np0005546420.localdomain podman[82172]: 2025-12-05 08:32:50.543408986 +0000 UTC m=+0.118517740 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute)
Dec 05 08:32:50 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
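[annotation] The nova_compute cycle above carries the same enormous config_data=... blob as every other event. It is written as a Python-style dict literal (single-quoted strings, True/False), so it can plausibly be recovered with ast.literal_eval; the naive brace matching below is an assumption that holds for these lines because none of the quoted values contain braces:

    # Sketch: recover the config_data dict embedded in a podman event line.
    import ast

    def extract_config_data(line):
        start = line.find("config_data={")
        if start < 0:
            return None
        i = line.find("{", start)
        depth = 0
        for j in range(i, len(line)):
            if line[j] == "{":
                depth += 1
            elif line[j] == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[i:j + 1])
        return None

For the nova_compute event above, extract_config_data(line)["healthcheck"]["test"] would come back as '/openstack/healthcheck 5672' and ["privileged"] as True.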
Dec 05 08:32:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:32:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:32:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:32:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:32:51 np0005546420.localdomain systemd[1]: tmp-crun.zUNghm.mount: Deactivated successfully.
Dec 05 08:32:51 np0005546420.localdomain podman[82199]: 2025-12-05 08:32:51.516585992 +0000 UTC m=+0.090269604 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, version=17.1.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:32:51 np0005546420.localdomain podman[82199]: 2025-12-05 08:32:51.524409275 +0000 UTC m=+0.098092857 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container)
Dec 05 08:32:51 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:32:51 np0005546420.localdomain systemd[1]: tmp-crun.lQADTt.mount: Deactivated successfully.
Dec 05 08:32:51 np0005546420.localdomain podman[82201]: 2025-12-05 08:32:51.577232945 +0000 UTC m=+0.143594799 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z)
Dec 05 08:32:51 np0005546420.localdomain podman[82200]: 2025-12-05 08:32:51.665402002 +0000 UTC m=+0.235714990 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public)
Dec 05 08:32:51 np0005546420.localdomain podman[82202]: 2025-12-05 08:32:51.632743498 +0000 UTC m=+0.195706717 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 05 08:32:51 np0005546420.localdomain podman[82202]: 2025-12-05 08:32:51.720379029 +0000 UTC m=+0.283342288 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com)
Dec 05 08:32:51 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:32:51 np0005546420.localdomain podman[82200]: 2025-12-05 08:32:51.775691357 +0000 UTC m=+0.346004375 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Dec 05 08:32:51 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:32:51 np0005546420.localdomain podman[82201]: 2025-12-05 08:32:51.998335909 +0000 UTC m=+0.564697693 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_migration_target, io.openshift.expose-services=)
Dec 05 08:32:52 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
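[annotation] The 08:32:51 burst runs four tripleo_step4 healthchecks concurrently (podman PIDs 82199-82202), which is why the journal interleaves them out of strict timestamp order. Each probe's cost can be estimated by differencing the m=+... monotonic offsets of its paired health_status and exec_died events; the pairing-by-container-id logic below is a sketch built on the format seen in these lines:

    # Sketch: per-container probe duration from paired podman events.
    import re

    EVT = re.compile(r"m=\+(\d+\.\d+) container (health_status|exec_died) ([0-9a-f]{64})")

    def probe_durations(lines):
        started, durations = {}, {}
        for line in lines:
            m = EVT.search(line)
            if not m:
                continue
            mono, kind, cid = float(m.group(1)), m.group(2), m.group(3)
            if kind == "health_status":
                started[cid] = mono          # probe reported, process still running
            elif cid in started:
                durations[cid] = mono - started.pop(cid)
        return durations

By that measure the logrotate_crond probe above took roughly 0.008 s, while nova_migration_target needed about 0.42 s between its two events.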
Dec 05 08:32:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:32:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:32:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:32:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:32:57 np0005546420.localdomain podman[82294]: 2025-12-05 08:32:57.508213238 +0000 UTC m=+0.079394246 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:32:57 np0005546420.localdomain podman[82294]: 2025-12-05 08:32:57.535353721 +0000 UTC m=+0.106534789 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, batch=17.1_20251118.1, container_name=ovn_controller, io.openshift.expose-services=)
Dec 05 08:32:57 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:32:57 np0005546420.localdomain podman[82296]: 2025-12-05 08:32:57.551057838 +0000 UTC m=+0.121499843 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 08:32:57 np0005546420.localdomain podman[82295]: 2025-12-05 08:32:57.61231508 +0000 UTC m=+0.183091875 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:32:57 np0005546420.localdomain podman[82297]: 2025-12-05 08:32:57.583543907 +0000 UTC m=+0.144644082 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:32:57 np0005546420.localdomain podman[82296]: 2025-12-05 08:32:57.635165449 +0000 UTC m=+0.205607454 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:32:57 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:32:57 np0005546420.localdomain podman[82295]: 2025-12-05 08:32:57.647289526 +0000 UTC m=+0.218066321 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:32:57 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:32:57 np0005546420.localdomain podman[82297]: 2025-12-05 08:32:57.717358232 +0000 UTC m=+0.278458387 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 05 08:32:57 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
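[annotation] The 08:32:57 burst covers the step3/step4 infrastructure containers. Note that some healthcheck tests take a port argument ('/openstack/healthcheck 6642' for ovn_controller, '5672' for nova_compute earlier), suggesting a listener probe. A toy reduction of that idea, assuming only "exit 0 if something accepts a TCP connection"; the real /openstack/healthcheck script does more than this:

    # Toy port-style healthcheck: exit 0 if the port accepts a connection.
    import socket
    import sys

    def port_alive(port, host="127.0.0.1", timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        sys.exit(0 if port_alive(int(sys.argv[1])) else 1)

The exit code is what matters: podman records health_status=healthy only when the configured test returns 0, as every probe in this window did.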
Dec 05 08:33:02 np0005546420.localdomain sudo[82394]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sixfnzbemhnztwgzdsierzfbfkvmtfsr ; /usr/bin/python3
Dec 05 08:33:02 np0005546420.localdomain sudo[82394]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 08:33:03 np0005546420.localdomain python3[82396]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 08:33:04 np0005546420.localdomain sshd[82399]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:33:06 np0005546420.localdomain rhsm-service[6609]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 05 08:33:10 np0005546420.localdomain sudo[82394]: pam_unix(sudo:session): session closed for user root
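[annotation] The sudo/python3 pair above is Ansible at work: the zuul user escalates (the BECOME-SUCCESS echo is Ansible's become marker), the command module runs "subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms" with _uses_shell=True, and the session closes once the task finishes. The rhsm-service warning in between says the entitlement server's response did not cover an installed product certificate (ID 479). A rough subprocess rendering of the recorded step, for illustration only; this is not how Ansible itself dispatches the module:

    # Sketch: the shell step the ansible-command journal entry records.
    # _uses_shell=True corresponds roughly to shell=True here.
    import subprocess

    result = subprocess.run(
        "subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms",
        shell=True, capture_output=True, text=True,
    )
    print(result.returncode, result.stdout.strip(), result.stderr.strip())

The roughly seven seconds between session open and close is consistent with subscription-manager talking to the entitlement server before rewriting the repo file.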
Dec 05 08:33:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:33:12 np0005546420.localdomain podman[82584]: 2025-12-05 08:33:12.53215399 +0000 UTC m=+0.100092849 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:33:12 np0005546420.localdomain podman[82584]: 2025-12-05 08:33:12.738314831 +0000 UTC m=+0.306253640 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vcs-type=git, architecture=x86_64, container_name=metrics_qdr, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.openshift.expose-services=, release=1761123044)
Dec 05 08:33:12 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:33:14 np0005546420.localdomain sshd[82399]: error: kex_exchange_identification: read: Connection timed out
Dec 05 08:33:14 np0005546420.localdomain sshd[82399]: banner exchange: Connection from 115.190.22.194 port 45358: Connection timed out
Dec 05 08:33:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:33:21 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:33:21 np0005546420.localdomain recover_tripleo_nova_virtqemud[82619]: 62579
Dec 05 08:33:21 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:33:21 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:33:21 np0005546420.localdomain podman[82612]: 2025-12-05 08:33:21.515378939 +0000 UTC m=+0.091378538 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Dec 05 08:33:21 np0005546420.localdomain podman[82612]: 2025-12-05 08:33:21.551551692 +0000 UTC m=+0.127551341 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5)
Dec 05 08:33:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:33:21 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:33:21 np0005546420.localdomain systemd[1]: tmp-crun.to0uge.mount: Deactivated successfully.
Dec 05 08:33:21 np0005546420.localdomain podman[82640]: 2025-12-05 08:33:21.674487719 +0000 UTC m=+0.095821796 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, name=rhosp17/openstack-cron, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:33:21 np0005546420.localdomain podman[82640]: 2025-12-05 08:33:21.679766933 +0000 UTC m=+0.101100950 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com)
Dec 05 08:33:21 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:33:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:33:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:33:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:33:22 np0005546420.localdomain podman[82661]: 2025-12-05 08:33:22.512353253 +0000 UTC m=+0.083642698 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=nova_migration_target, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 05 08:33:22 np0005546420.localdomain podman[82662]: 2025-12-05 08:33:22.551021674 +0000 UTC m=+0.120435820 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=)
Dec 05 08:33:22 np0005546420.localdomain podman[82662]: 2025-12-05 08:33:22.577330821 +0000 UTC m=+0.146744957 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:33:22 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:33:22 np0005546420.localdomain systemd[1]: tmp-crun.OXzfmP.mount: Deactivated successfully.
Dec 05 08:33:22 np0005546420.localdomain podman[82660]: 2025-12-05 08:33:22.671727492 +0000 UTC m=+0.247910559 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Dec 05 08:33:22 np0005546420.localdomain podman[82660]: 2025-12-05 08:33:22.703372744 +0000 UTC m=+0.279555811 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:33:22 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:33:22 np0005546420.localdomain podman[82661]: 2025-12-05 08:33:22.90487976 +0000 UTC m=+0.476169195 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target)
Dec 05 08:33:22 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:33:24 np0005546420.localdomain sudo[82735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:33:24 np0005546420.localdomain sudo[82735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:33:24 np0005546420.localdomain sudo[82735]: pam_unix(sudo:session): session closed for user root
Dec 05 08:33:24 np0005546420.localdomain sudo[82750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 08:33:24 np0005546420.localdomain sudo[82750]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:33:24 np0005546420.localdomain sudo[82750]: pam_unix(sudo:session): session closed for user root
Dec 05 08:33:25 np0005546420.localdomain sudo[82786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:33:25 np0005546420.localdomain sudo[82786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:33:25 np0005546420.localdomain sudo[82786]: pam_unix(sudo:session): session closed for user root
Dec 05 08:33:25 np0005546420.localdomain sudo[82801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:33:25 np0005546420.localdomain sudo[82801]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:33:25 np0005546420.localdomain sudo[82801]: pam_unix(sudo:session): session closed for user root
Dec 05 08:33:26 np0005546420.localdomain sudo[82848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:33:26 np0005546420.localdomain sudo[82848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:33:26 np0005546420.localdomain sudo[82848]: pam_unix(sudo:session): session closed for user root
Dec 05 08:33:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:33:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:33:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:33:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:33:28 np0005546420.localdomain systemd[1]: tmp-crun.IdGuMK.mount: Deactivated successfully.
Dec 05 08:33:28 np0005546420.localdomain podman[82863]: 2025-12-05 08:33:28.526442818 +0000 UTC m=+0.104646890 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, release=1761123044, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:33:28 np0005546420.localdomain podman[82864]: 2025-12-05 08:33:28.554338074 +0000 UTC m=+0.132049411 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:33:28 np0005546420.localdomain podman[82864]: 2025-12-05 08:33:28.567302846 +0000 UTC m=+0.145014223 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 05 08:33:28 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:33:28 np0005546420.localdomain podman[82863]: 2025-12-05 08:33:28.606379819 +0000 UTC m=+0.184583831 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=ovn_controller, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1761123044)
Dec 05 08:33:28 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:33:28 np0005546420.localdomain podman[82865]: 2025-12-05 08:33:28.66824443 +0000 UTC m=+0.241056835 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step3, batch=17.1_20251118.1, managed_by=tripleo_ansible)
Dec 05 08:33:28 np0005546420.localdomain podman[82866]: 2025-12-05 08:33:28.70913586 +0000 UTC m=+0.279640833 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z)
Dec 05 08:33:28 np0005546420.localdomain podman[82865]: 2025-12-05 08:33:28.739566745 +0000 UTC m=+0.312379150 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, container_name=iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, version=17.1.12, com.redhat.component=openstack-iscsid-container, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid)
Dec 05 08:33:28 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:33:28 np0005546420.localdomain podman[82866]: 2025-12-05 08:33:28.783471677 +0000 UTC m=+0.353976710 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:33:28 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:33:29 np0005546420.localdomain systemd[1]: tmp-crun.BUYDid.mount: Deactivated successfully.
Dec 05 08:33:32 np0005546420.localdomain python3[82960]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Dec 05 08:33:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:33:43 np0005546420.localdomain systemd[1]: tmp-crun.p2bUI3.mount: Deactivated successfully.
Dec 05 08:33:43 np0005546420.localdomain podman[83006]: 2025-12-05 08:33:43.523583038 +0000 UTC m=+0.097524430 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 05 08:33:43 np0005546420.localdomain podman[83006]: 2025-12-05 08:33:43.750565281 +0000 UTC m=+0.324506653 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, name=rhosp17/openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:33:43 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:33:47 np0005546420.localdomain sshd[83036]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:33:48 np0005546420.localdomain sshd[83036]: Received disconnect from 195.250.72.168 port 43684:11: Bye Bye [preauth]
Dec 05 08:33:48 np0005546420.localdomain sshd[83036]: Disconnected from authenticating user root 195.250.72.168 port 43684 [preauth]
Dec 05 08:33:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:33:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:33:52 np0005546420.localdomain systemd[1]: tmp-crun.ICwG8N.mount: Deactivated successfully.
Dec 05 08:33:52 np0005546420.localdomain podman[83039]: 2025-12-05 08:33:52.52133815 +0000 UTC m=+0.098472149 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, tcib_managed=true, container_name=nova_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:33:52 np0005546420.localdomain podman[83039]: 2025-12-05 08:33:52.55345269 +0000 UTC m=+0.130586699 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:33:52 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:33:52 np0005546420.localdomain podman[83038]: 2025-12-05 08:33:52.610357947 +0000 UTC m=+0.187910459 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, com.redhat.component=openstack-cron-container, version=17.1.12, release=1761123044, tcib_managed=true, name=rhosp17/openstack-cron, vcs-type=git, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:33:52 np0005546420.localdomain podman[83038]: 2025-12-05 08:33:52.618408415 +0000 UTC m=+0.195960937 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=logrotate_crond, vendor=Red Hat, Inc.)
Dec 05 08:33:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:33:52 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:33:52 np0005546420.localdomain podman[83084]: 2025-12-05 08:33:52.700664583 +0000 UTC m=+0.069038692 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible)
Dec 05 08:33:52 np0005546420.localdomain podman[83084]: 2025-12-05 08:33:52.733702462 +0000 UTC m=+0.102076611 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:33:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:33:52 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:33:52 np0005546420.localdomain podman[83112]: 2025-12-05 08:33:52.836581206 +0000 UTC m=+0.076990086 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:12:45Z, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:33:52 np0005546420.localdomain podman[83112]: 2025-12-05 08:33:52.893828663 +0000 UTC m=+0.134237573 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12)
Dec 05 08:33:52 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:33:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:33:53 np0005546420.localdomain podman[83139]: 2025-12-05 08:33:53.494523456 +0000 UTC m=+0.073821588 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:33:53 np0005546420.localdomain podman[83139]: 2025-12-05 08:33:53.895918881 +0000 UTC m=+0.475217013 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=nova_migration_target, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 08:33:53 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:33:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:33:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:33:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:33:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:33:59 np0005546420.localdomain systemd[1]: tmp-crun.5ACfp8.mount: Deactivated successfully.
Dec 05 08:33:59 np0005546420.localdomain podman[83162]: 2025-12-05 08:33:59.58015402 +0000 UTC m=+0.152804626 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vcs-type=git, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:33:59 np0005546420.localdomain podman[83162]: 2025-12-05 08:33:59.619442072 +0000 UTC m=+0.192092688 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:33:59 np0005546420.localdomain podman[83161]: 2025-12-05 08:33:59.622105584 +0000 UTC m=+0.195190563 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-ovn-controller, architecture=x86_64)
Dec 05 08:33:59 np0005546420.localdomain podman[83164]: 2025-12-05 08:33:59.538370241 +0000 UTC m=+0.100857593 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, vcs-type=git)
Dec 05 08:33:59 np0005546420.localdomain podman[83164]: 2025-12-05 08:33:59.667896627 +0000 UTC m=+0.230383969 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Dec 05 08:33:59 np0005546420.localdomain podman[83161]: 2025-12-05 08:33:59.678495514 +0000 UTC m=+0.251580483 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc.)
Dec 05 08:33:59 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:33:59 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:33:59 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:33:59 np0005546420.localdomain podman[83163]: 2025-12-05 08:33:59.682524478 +0000 UTC m=+0.251507641 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, container_name=iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 05 08:33:59 np0005546420.localdomain podman[83163]: 2025-12-05 08:33:59.763055113 +0000 UTC m=+0.332038216 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:33:59 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:34:00 np0005546420.localdomain systemd[1]: tmp-crun.CoqpvI.mount: Deactivated successfully.
Dec 05 08:34:01 np0005546420.localdomain sshd[83249]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:34:02 np0005546420.localdomain sshd[83249]: Received disconnect from 93.157.248.178 port 39792:11: Bye Bye [preauth]
Dec 05 08:34:02 np0005546420.localdomain sshd[83249]: Disconnected from authenticating user root 93.157.248.178 port 39792 [preauth]
Dec 05 08:34:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:34:14 np0005546420.localdomain podman[83251]: 2025-12-05 08:34:14.507731135 +0000 UTC m=+0.087227332 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr)
Dec 05 08:34:14 np0005546420.localdomain podman[83251]: 2025-12-05 08:34:14.700521943 +0000 UTC m=+0.280018140 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 05 08:34:14 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:34:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:34:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:34:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:34:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:34:23 np0005546420.localdomain podman[83280]: 2025-12-05 08:34:23.517716535 +0000 UTC m=+0.091349910 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 05 08:34:23 np0005546420.localdomain podman[83280]: 2025-12-05 08:34:23.526673921 +0000 UTC m=+0.100307276 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, config_id=tripleo_step4, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 05 08:34:23 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:34:23 np0005546420.localdomain podman[83281]: 2025-12-05 08:34:23.570495682 +0000 UTC m=+0.141829896 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:34:23 np0005546420.localdomain podman[83283]: 2025-12-05 08:34:23.625305334 +0000 UTC m=+0.189636703 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:34:23 np0005546420.localdomain podman[83281]: 2025-12-05 08:34:23.650556713 +0000 UTC m=+0.221890877 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64)
Dec 05 08:34:23 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:34:23 np0005546420.localdomain podman[83283]: 2025-12-05 08:34:23.702897798 +0000 UTC m=+0.267229157 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4)
Dec 05 08:34:23 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:34:23 np0005546420.localdomain podman[83282]: 2025-12-05 08:34:23.78820842 +0000 UTC m=+0.354670345 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, container_name=nova_compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step5, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4)
Dec 05 08:34:23 np0005546420.localdomain podman[83282]: 2025-12-05 08:34:23.846357904 +0000 UTC m=+0.412819809 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-nova-compute, container_name=nova_compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z)
Dec 05 08:34:23 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:34:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:34:24 np0005546420.localdomain podman[83378]: 2025-12-05 08:34:24.494934204 +0000 UTC m=+0.075464049 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=nova_migration_target)
Dec 05 08:34:24 np0005546420.localdomain podman[83378]: 2025-12-05 08:34:24.881274534 +0000 UTC m=+0.461804519 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044)
Dec 05 08:34:24 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:34:26 np0005546420.localdomain sudo[83399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:34:26 np0005546420.localdomain sudo[83399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:34:26 np0005546420.localdomain sudo[83399]: pam_unix(sudo:session): session closed for user root
Dec 05 08:34:26 np0005546420.localdomain sudo[83414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:34:26 np0005546420.localdomain sudo[83414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:34:27 np0005546420.localdomain sudo[83414]: pam_unix(sudo:session): session closed for user root
Dec 05 08:34:28 np0005546420.localdomain sudo[83461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:34:28 np0005546420.localdomain sudo[83461]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:34:28 np0005546420.localdomain sudo[83461]: pam_unix(sudo:session): session closed for user root
Dec 05 08:34:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:34:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:34:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:34:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:34:30 np0005546420.localdomain podman[83477]: 2025-12-05 08:34:30.520270327 +0000 UTC m=+0.091828505 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:34:30 np0005546420.localdomain podman[83477]: 2025-12-05 08:34:30.560469298 +0000 UTC m=+0.132027486 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, managed_by=tripleo_ansible, container_name=collectd, name=rhosp17/openstack-collectd, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:34:30 np0005546420.localdomain systemd[1]: tmp-crun.Ndy7kP.mount: Deactivated successfully.
Dec 05 08:34:30 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:34:30 np0005546420.localdomain podman[83476]: 2025-12-05 08:34:30.576357497 +0000 UTC m=+0.151586127 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible)
Dec 05 08:34:30 np0005546420.localdomain podman[83476]: 2025-12-05 08:34:30.628103284 +0000 UTC m=+0.203331904 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T23:34:05Z, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.openshift.expose-services=, container_name=ovn_controller, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:34:30 np0005546420.localdomain podman[83484]: 2025-12-05 08:34:30.631177929 +0000 UTC m=+0.194180623 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:34:30 np0005546420.localdomain podman[83484]: 2025-12-05 08:34:30.679466239 +0000 UTC m=+0.242468973 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:34:30 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:34:30 np0005546420.localdomain podman[83478]: 2025-12-05 08:34:30.685527876 +0000 UTC m=+0.252195232 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_id=tripleo_step3, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:34:30 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:34:30 np0005546420.localdomain podman[83478]: 2025-12-05 08:34:30.769735894 +0000 UTC m=+0.336403240 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:34:30 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:34:32 np0005546420.localdomain sshd[80685]: Received disconnect from 38.102.83.114 port 54910:11: disconnected by user
Dec 05 08:34:32 np0005546420.localdomain sshd[80685]: Disconnected from user zuul 38.102.83.114 port 54910
Dec 05 08:34:32 np0005546420.localdomain sshd[80682]: pam_unix(sshd:session): session closed for user zuul
Dec 05 08:34:32 np0005546420.localdomain systemd[1]: session-34.scope: Deactivated successfully.
Dec 05 08:34:32 np0005546420.localdomain systemd[1]: session-34.scope: Consumed 19.943s CPU time.
Dec 05 08:34:32 np0005546420.localdomain systemd-logind[762]: Session 34 logged out. Waiting for processes to exit.
Dec 05 08:34:32 np0005546420.localdomain systemd-logind[762]: Removed session 34.
Dec 05 08:34:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:34:45 np0005546420.localdomain podman[83603]: 2025-12-05 08:34:45.519949969 +0000 UTC m=+0.096618853 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true)
Dec 05 08:34:45 np0005546420.localdomain podman[83603]: 2025-12-05 08:34:45.745477877 +0000 UTC m=+0.322146761 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:34:45 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:34:52 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:34:52 np0005546420.localdomain recover_tripleo_nova_virtqemud[83633]: 62579
Dec 05 08:34:52 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:34:52 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:34:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:34:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:34:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:34:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:34:54 np0005546420.localdomain systemd[1]: tmp-crun.SxHP20.mount: Deactivated successfully.
Dec 05 08:34:54 np0005546420.localdomain podman[83637]: 2025-12-05 08:34:54.578133654 +0000 UTC m=+0.148858033 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z)
Dec 05 08:34:54 np0005546420.localdomain podman[83637]: 2025-12-05 08:34:54.615485646 +0000 UTC m=+0.186210045 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=)
Dec 05 08:34:54 np0005546420.localdomain podman[83635]: 2025-12-05 08:34:54.625225447 +0000 UTC m=+0.201665213 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, vcs-type=git, vendor=Red Hat, Inc.)
Dec 05 08:34:54 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:34:54 np0005546420.localdomain podman[83636]: 2025-12-05 08:34:54.665158389 +0000 UTC m=+0.240008026 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1)
Dec 05 08:34:54 np0005546420.localdomain podman[83634]: 2025-12-05 08:34:54.53327198 +0000 UTC m=+0.111572143 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=logrotate_crond, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:34:54 np0005546420.localdomain podman[83635]: 2025-12-05 08:34:54.689678476 +0000 UTC m=+0.266118212 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, container_name=ceilometer_agent_ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 08:34:54 np0005546420.localdomain podman[83634]: 2025-12-05 08:34:54.721451886 +0000 UTC m=+0.299752079 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, release=1761123044, tcib_managed=true, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z)
Dec 05 08:34:54 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:34:54 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:34:54 np0005546420.localdomain podman[83636]: 2025-12-05 08:34:54.776937118 +0000 UTC m=+0.351786765 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:36:58Z, distribution-scope=public, batch=17.1_20251118.1, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:34:54 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:34:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:34:55 np0005546420.localdomain podman[83732]: 2025-12-05 08:34:55.501552815 +0000 UTC m=+0.076563404 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1)
Dec 05 08:34:55 np0005546420.localdomain sshd[83755]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:34:55 np0005546420.localdomain podman[83732]: 2025-12-05 08:34:55.909583194 +0000 UTC m=+0.484593793 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=nova_migration_target, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:34:55 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:34:56 np0005546420.localdomain sshd[83755]: Received disconnect from 195.250.72.168 port 53202:11: Bye Bye [preauth]
Dec 05 08:34:56 np0005546420.localdomain sshd[83755]: Disconnected from authenticating user root 195.250.72.168 port 53202 [preauth]
Dec 05 08:35:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:35:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:35:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:35:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:35:01 np0005546420.localdomain podman[83758]: 2025-12-05 08:35:01.51692004 +0000 UTC m=+0.092392541 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:35:01 np0005546420.localdomain podman[83758]: 2025-12-05 08:35:01.52534723 +0000 UTC m=+0.100819691 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, container_name=collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64)
Dec 05 08:35:01 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:35:01 np0005546420.localdomain podman[83757]: 2025-12-05 08:35:01.566315235 +0000 UTC m=+0.141303822 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller)
Dec 05 08:35:01 np0005546420.localdomain podman[83760]: 2025-12-05 08:35:01.624730166 +0000 UTC m=+0.193499151 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64)
Dec 05 08:35:01 np0005546420.localdomain podman[83760]: 2025-12-05 08:35:01.675398469 +0000 UTC m=+0.244167514 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044)
Dec 05 08:35:01 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:35:01 np0005546420.localdomain podman[83759]: 2025-12-05 08:35:01.686988098 +0000 UTC m=+0.256697152 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:35:01 np0005546420.localdomain podman[83757]: 2025-12-05 08:35:01.744161661 +0000 UTC m=+0.319150238 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, version=17.1.12, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com)
Dec 05 08:35:01 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:35:01 np0005546420.localdomain podman[83759]: 2025-12-05 08:35:01.771581827 +0000 UTC m=+0.341290811 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 08:35:01 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:35:02 np0005546420.localdomain systemd[1]: tmp-crun.coHqsx.mount: Deactivated successfully.
Dec 05 08:35:09 np0005546420.localdomain sshd[83843]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:35:10 np0005546420.localdomain sshd[83843]: Received disconnect from 93.157.248.178 port 47330:11: Bye Bye [preauth]
Dec 05 08:35:10 np0005546420.localdomain sshd[83843]: Disconnected from authenticating user root 93.157.248.178 port 47330 [preauth]
Dec 05 08:35:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:35:16 np0005546420.localdomain podman[83845]: 2025-12-05 08:35:16.507557943 +0000 UTC m=+0.087948984 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git)
Dec 05 08:35:16 np0005546420.localdomain podman[83845]: 2025-12-05 08:35:16.726342954 +0000 UTC m=+0.306733935 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, architecture=x86_64, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:35:16 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:35:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:35:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:35:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:35:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:35:25 np0005546420.localdomain systemd[1]: tmp-crun.Ync2WU.mount: Deactivated successfully.
Dec 05 08:35:25 np0005546420.localdomain podman[83877]: 2025-12-05 08:35:25.494228683 +0000 UTC m=+0.072516359 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044)
Dec 05 08:35:25 np0005546420.localdomain podman[83874]: 2025-12-05 08:35:25.514655673 +0000 UTC m=+0.095149987 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z)
Dec 05 08:35:25 np0005546420.localdomain podman[83874]: 2025-12-05 08:35:25.525379774 +0000 UTC m=+0.105874128 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, name=rhosp17/openstack-cron, architecture=x86_64)
Dec 05 08:35:25 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:35:25 np0005546420.localdomain podman[83877]: 2025-12-05 08:35:25.577442971 +0000 UTC m=+0.155730697 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, version=17.1.12, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:35:25 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:35:25 np0005546420.localdomain podman[83876]: 2025-12-05 08:35:25.666989143 +0000 UTC m=+0.241900075 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, container_name=nova_compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 05 08:35:25 np0005546420.localdomain podman[83875]: 2025-12-05 08:35:25.716288624 +0000 UTC m=+0.292905268 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Dec 05 08:35:25 np0005546420.localdomain podman[83876]: 2025-12-05 08:35:25.721264658 +0000 UTC m=+0.296175560 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 05 08:35:25 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:35:25 np0005546420.localdomain podman[83875]: 2025-12-05 08:35:25.751364577 +0000 UTC m=+0.327981181 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044)
Dec 05 08:35:25 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:35:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:35:26 np0005546420.localdomain podman[83967]: 2025-12-05 08:35:26.502150801 +0000 UTC m=+0.078290287 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc.)
Dec 05 08:35:26 np0005546420.localdomain podman[83967]: 2025-12-05 08:35:26.854656717 +0000 UTC m=+0.430796243 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:35:26 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:35:28 np0005546420.localdomain sudo[83991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:35:28 np0005546420.localdomain sudo[83991]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:35:28 np0005546420.localdomain sudo[83991]: pam_unix(sudo:session): session closed for user root
Dec 05 08:35:28 np0005546420.localdomain sudo[84006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 08:35:28 np0005546420.localdomain sudo[84006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:35:29 np0005546420.localdomain systemd[1]: tmp-crun.Cgxg3z.mount: Deactivated successfully.
Dec 05 08:35:29 np0005546420.localdomain podman[84092]: 2025-12-05 08:35:29.241102806 +0000 UTC m=+0.106219397 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, ceph=True, distribution-scope=public, io.openshift.expose-services=, release=1763362218, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 08:35:29 np0005546420.localdomain podman[84092]: 2025-12-05 08:35:29.373399929 +0000 UTC m=+0.238516520 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, ceph=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 08:35:29 np0005546420.localdomain sudo[84006]: pam_unix(sudo:session): session closed for user root
Dec 05 08:35:29 np0005546420.localdomain sudo[84158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:35:29 np0005546420.localdomain sudo[84158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:35:29 np0005546420.localdomain sudo[84158]: pam_unix(sudo:session): session closed for user root
Dec 05 08:35:29 np0005546420.localdomain sudo[84173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:35:29 np0005546420.localdomain sudo[84173]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:35:30 np0005546420.localdomain sudo[84173]: pam_unix(sudo:session): session closed for user root
Dec 05 08:35:31 np0005546420.localdomain sudo[84220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:35:31 np0005546420.localdomain sudo[84220]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:35:31 np0005546420.localdomain sudo[84220]: pam_unix(sudo:session): session closed for user root
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:35:32 np0005546420.localdomain recover_tripleo_nova_virtqemud[84260]: 62579
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:35:32 np0005546420.localdomain podman[84236]: 2025-12-05 08:35:32.520523348 +0000 UTC m=+0.096494128 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, container_name=collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: tmp-crun.0adEPl.mount: Deactivated successfully.
Dec 05 08:35:32 np0005546420.localdomain podman[84235]: 2025-12-05 08:35:32.564211607 +0000 UTC m=+0.141105166 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, release=1761123044, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 05 08:35:32 np0005546420.localdomain podman[84236]: 2025-12-05 08:35:32.581561992 +0000 UTC m=+0.157532782 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git)
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:35:32 np0005546420.localdomain podman[84238]: 2025-12-05 08:35:32.632632297 +0000 UTC m=+0.200470466 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 05 08:35:32 np0005546420.localdomain podman[84235]: 2025-12-05 08:35:32.647606189 +0000 UTC m=+0.224499758 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true)
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:35:32 np0005546420.localdomain podman[84237]: 2025-12-05 08:35:32.670884758 +0000 UTC m=+0.245572728 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, config_id=tripleo_step3, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:35:32 np0005546420.localdomain podman[84237]: 2025-12-05 08:35:32.681256957 +0000 UTC m=+0.255944927 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step3, name=rhosp17/openstack-iscsid)
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:35:32 np0005546420.localdomain podman[84238]: 2025-12-05 08:35:32.766435885 +0000 UTC m=+0.334273994 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:35:32 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:35:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:35:47 np0005546420.localdomain systemd[1]: tmp-crun.h3jfwT.mount: Deactivated successfully.
Dec 05 08:35:47 np0005546420.localdomain podman[84369]: 2025-12-05 08:35:47.506685933 +0000 UTC m=+0.080004040 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:35:47 np0005546420.localdomain podman[84369]: 2025-12-05 08:35:47.744605523 +0000 UTC m=+0.317923670 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:35:47 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:35:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:35:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:35:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:35:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:35:56 np0005546420.localdomain podman[84401]: 2025-12-05 08:35:56.515307409 +0000 UTC m=+0.088923884 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 08:35:56 np0005546420.localdomain systemd[1]: tmp-crun.ySfyRT.mount: Deactivated successfully.
Dec 05 08:35:56 np0005546420.localdomain podman[84400]: 2025-12-05 08:35:56.56977969 +0000 UTC m=+0.145990375 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:35:56 np0005546420.localdomain podman[84400]: 2025-12-05 08:35:56.583451972 +0000 UTC m=+0.159662677 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:35:56 np0005546420.localdomain podman[84401]: 2025-12-05 08:35:56.593008137 +0000 UTC m=+0.166624532 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:35:56 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:35:56 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:35:56 np0005546420.localdomain podman[84402]: 2025-12-05 08:35:56.674741078 +0000 UTC m=+0.246041692 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Dec 05 08:35:56 np0005546420.localdomain podman[84403]: 2025-12-05 08:35:56.729019194 +0000 UTC m=+0.297758599 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, release=1761123044)
Dec 05 08:35:56 np0005546420.localdomain podman[84403]: 2025-12-05 08:35:56.761376141 +0000 UTC m=+0.330115586 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute)
Dec 05 08:35:56 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:35:56 np0005546420.localdomain podman[84402]: 2025-12-05 08:35:56.782793812 +0000 UTC m=+0.354094426 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:35:56 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:35:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:35:57 np0005546420.localdomain podman[84499]: 2025-12-05 08:35:57.491483378 +0000 UTC m=+0.073943973 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, release=1761123044, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vcs-type=git)
Dec 05 08:35:57 np0005546420.localdomain podman[84499]: 2025-12-05 08:35:57.862083042 +0000 UTC m=+0.444543707 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:35:57 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:36:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:36:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:36:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:36:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:36:03 np0005546420.localdomain systemd[1]: tmp-crun.tDMu6d.mount: Deactivated successfully.
Dec 05 08:36:03 np0005546420.localdomain podman[84522]: 2025-12-05 08:36:03.531012368 +0000 UTC m=+0.109015105 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 05 08:36:03 np0005546420.localdomain podman[84525]: 2025-12-05 08:36:03.571108095 +0000 UTC m=+0.138805293 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:36:03 np0005546420.localdomain podman[84522]: 2025-12-05 08:36:03.579557505 +0000 UTC m=+0.157560222 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team)
Dec 05 08:36:03 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:36:03 np0005546420.localdomain podman[84524]: 2025-12-05 08:36:03.628132634 +0000 UTC m=+0.200293470 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z)
Dec 05 08:36:03 np0005546420.localdomain podman[84525]: 2025-12-05 08:36:03.646516801 +0000 UTC m=+0.214214079 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:36:03 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:36:03 np0005546420.localdomain podman[84524]: 2025-12-05 08:36:03.662978139 +0000 UTC m=+0.235138965 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z)
Dec 05 08:36:03 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:36:03 np0005546420.localdomain podman[84523]: 2025-12-05 08:36:03.498728181 +0000 UTC m=+0.077567663 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:36:03 np0005546420.localdomain podman[84523]: 2025-12-05 08:36:03.733392442 +0000 UTC m=+0.312231964 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, version=17.1.12)
Dec 05 08:36:03 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:36:04 np0005546420.localdomain sshd[84608]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:36:06 np0005546420.localdomain sshd[84608]: Received disconnect from 195.250.72.168 port 53266:11: Bye Bye [preauth]
Dec 05 08:36:06 np0005546420.localdomain sshd[84608]: Disconnected from authenticating user root 195.250.72.168 port 53266 [preauth]
Dec 05 08:36:17 np0005546420.localdomain sshd[84610]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:36:17 np0005546420.localdomain sshd[84610]: Received disconnect from 93.157.248.178 port 45532:11: Bye Bye [preauth]
Dec 05 08:36:17 np0005546420.localdomain sshd[84610]: Disconnected from authenticating user root 93.157.248.178 port 45532 [preauth]
Dec 05 08:36:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:36:18 np0005546420.localdomain systemd[1]: tmp-crun.3eOqq6.mount: Deactivated successfully.
Dec 05 08:36:18 np0005546420.localdomain podman[84612]: 2025-12-05 08:36:18.056841758 +0000 UTC m=+0.111949005 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, distribution-scope=public, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:36:18 np0005546420.localdomain podman[84612]: 2025-12-05 08:36:18.310418751 +0000 UTC m=+0.365525988 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:36:18 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:36:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:36:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:36:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:36:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:36:27 np0005546420.localdomain podman[84640]: 2025-12-05 08:36:27.513889039 +0000 UTC m=+0.085903541 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:36:27 np0005546420.localdomain podman[84640]: 2025-12-05 08:36:27.523767904 +0000 UTC m=+0.095782396 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1)
Dec 05 08:36:27 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:36:27 np0005546420.localdomain systemd[1]: tmp-crun.k9Sajj.mount: Deactivated successfully.
Dec 05 08:36:27 np0005546420.localdomain podman[84642]: 2025-12-05 08:36:27.585105416 +0000 UTC m=+0.150314218 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step5, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, container_name=nova_compute, name=rhosp17/openstack-nova-compute)
Dec 05 08:36:27 np0005546420.localdomain podman[84643]: 2025-12-05 08:36:27.629370142 +0000 UTC m=+0.191937443 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 05 08:36:27 np0005546420.localdomain podman[84642]: 2025-12-05 08:36:27.640579308 +0000 UTC m=+0.205788130 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044)
Dec 05 08:36:27 np0005546420.localdomain podman[84641]: 2025-12-05 08:36:27.687702052 +0000 UTC m=+0.254752261 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi)
Dec 05 08:36:27 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:36:27 np0005546420.localdomain podman[84643]: 2025-12-05 08:36:27.74078874 +0000 UTC m=+0.303356021 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, release=1761123044, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:36:27 np0005546420.localdomain podman[84641]: 2025-12-05 08:36:27.751370537 +0000 UTC m=+0.318420786 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 08:36:27 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:36:27 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:36:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:36:28 np0005546420.localdomain systemd[1]: tmp-crun.UH6pFv.mount: Deactivated successfully.
Dec 05 08:36:28 np0005546420.localdomain podman[84739]: 2025-12-05 08:36:28.528513544 +0000 UTC m=+0.104079202 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 05 08:36:28 np0005546420.localdomain podman[84739]: 2025-12-05 08:36:28.901708658 +0000 UTC m=+0.477274326 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:36:28 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:36:31 np0005546420.localdomain sudo[84764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:36:31 np0005546420.localdomain sudo[84764]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:36:31 np0005546420.localdomain sudo[84764]: pam_unix(sudo:session): session closed for user root
Dec 05 08:36:31 np0005546420.localdomain sudo[84779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:36:31 np0005546420.localdomain sudo[84779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:36:31 np0005546420.localdomain sudo[84779]: pam_unix(sudo:session): session closed for user root
Dec 05 08:36:32 np0005546420.localdomain sudo[84826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:36:32 np0005546420.localdomain sudo[84826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:36:32 np0005546420.localdomain sudo[84826]: pam_unix(sudo:session): session closed for user root
Dec 05 08:36:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:36:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:36:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:36:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:36:34 np0005546420.localdomain podman[84842]: 2025-12-05 08:36:34.521733435 +0000 UTC m=+0.085850789 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, container_name=collectd, tcib_managed=true, release=1761123044, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public)
Dec 05 08:36:34 np0005546420.localdomain podman[84842]: 2025-12-05 08:36:34.535335885 +0000 UTC m=+0.099453279 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12)
Dec 05 08:36:34 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:36:34 np0005546420.localdomain systemd[1]: tmp-crun.aAxdIS.mount: Deactivated successfully.
Dec 05 08:36:34 np0005546420.localdomain podman[84841]: 2025-12-05 08:36:34.624348162 +0000 UTC m=+0.187734854 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ovn_controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 05 08:36:34 np0005546420.localdomain podman[84844]: 2025-12-05 08:36:34.594141129 +0000 UTC m=+0.153859698 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:36:34 np0005546420.localdomain podman[84841]: 2025-12-05 08:36:34.670143184 +0000 UTC m=+0.233529886 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, version=17.1.12, distribution-scope=public, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:36:34 np0005546420.localdomain podman[84844]: 2025-12-05 08:36:34.681397361 +0000 UTC m=+0.241115860 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 05 08:36:34 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:36:34 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:36:34 np0005546420.localdomain podman[84843]: 2025-12-05 08:36:34.740829206 +0000 UTC m=+0.304908039 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:36:34 np0005546420.localdomain podman[84843]: 2025-12-05 08:36:34.779512758 +0000 UTC m=+0.343591631 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:36:34 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:36:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:36:48 np0005546420.localdomain podman[84973]: 2025-12-05 08:36:48.509979502 +0000 UTC m=+0.085504099 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:36:48 np0005546420.localdomain podman[84973]: 2025-12-05 08:36:48.755571059 +0000 UTC m=+0.331095656 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-type=git, io.openshift.expose-services=)
Dec 05 08:36:48 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:36:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:36:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:36:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:36:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:36:58 np0005546420.localdomain podman[85002]: 2025-12-05 08:36:58.53153433 +0000 UTC m=+0.096018944 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true)
Dec 05 08:36:58 np0005546420.localdomain podman[85002]: 2025-12-05 08:36:58.565668793 +0000 UTC m=+0.130153437 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron)
Dec 05 08:36:58 np0005546420.localdomain podman[85003]: 2025-12-05 08:36:58.582024867 +0000 UTC m=+0.144546021 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:36:58 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:36:58 np0005546420.localdomain podman[85004]: 2025-12-05 08:36:58.64369317 +0000 UTC m=+0.200486057 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5)
Dec 05 08:36:58 np0005546420.localdomain podman[85004]: 2025-12-05 08:36:58.678416091 +0000 UTC m=+0.235209028 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, version=17.1.12)
Dec 05 08:36:58 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:36:58 np0005546420.localdomain podman[85005]: 2025-12-05 08:36:58.696115468 +0000 UTC m=+0.249568931 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:36:58 np0005546420.localdomain podman[85003]: 2025-12-05 08:36:58.719633313 +0000 UTC m=+0.282154507 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 05 08:36:58 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:36:58 np0005546420.localdomain podman[85005]: 2025-12-05 08:36:58.765792538 +0000 UTC m=+0.319245961 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:36:58 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:36:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:36:59 np0005546420.localdomain podman[85100]: 2025-12-05 08:36:59.499415452 +0000 UTC m=+0.081057012 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:36:59 np0005546420.localdomain podman[85100]: 2025-12-05 08:36:59.877349752 +0000 UTC m=+0.458991332 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.expose-services=, container_name=nova_migration_target, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:36:59 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:37:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:37:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:37:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:37:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:37:05 np0005546420.localdomain podman[85126]: 2025-12-05 08:37:05.528793799 +0000 UTC m=+0.100680498 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, vcs-type=git, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:37:05 np0005546420.localdomain podman[85128]: 2025-12-05 08:37:05.506990906 +0000 UTC m=+0.071702474 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z)
Dec 05 08:37:05 np0005546420.localdomain podman[85126]: 2025-12-05 08:37:05.596843765 +0000 UTC m=+0.168730514 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team)
Dec 05 08:37:05 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:37:05 np0005546420.localdomain podman[85127]: 2025-12-05 08:37:05.611410684 +0000 UTC m=+0.182023804 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 05 08:37:05 np0005546420.localdomain podman[85127]: 2025-12-05 08:37:05.642637358 +0000 UTC m=+0.213250408 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, batch=17.1_20251118.1, container_name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible)
Dec 05 08:37:05 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:37:05 np0005546420.localdomain podman[85128]: 2025-12-05 08:37:05.726643322 +0000 UTC m=+0.291354900 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Dec 05 08:37:05 np0005546420.localdomain podman[85129]: 2025-12-05 08:37:05.736977121 +0000 UTC m=+0.298049176 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, release=1761123044, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 05 08:37:05 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:37:05 np0005546420.localdomain podman[85129]: 2025-12-05 08:37:05.81341003 +0000 UTC m=+0.374482075 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64)
Dec 05 08:37:05 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:37:17 np0005546420.localdomain sshd[85214]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:37:18 np0005546420.localdomain sshd[85214]: Received disconnect from 195.250.72.168 port 43922:11: Bye Bye [preauth]
Dec 05 08:37:18 np0005546420.localdomain sshd[85214]: Disconnected from authenticating user root 195.250.72.168 port 43922 [preauth]
Dec 05 08:37:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:37:19 np0005546420.localdomain podman[85216]: 2025-12-05 08:37:19.515024864 +0000 UTC m=+0.088768319 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, release=1761123044, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4)
Dec 05 08:37:19 np0005546420.localdomain podman[85216]: 2025-12-05 08:37:19.724811547 +0000 UTC m=+0.298554972 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_id=tripleo_step1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=metrics_qdr, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:37:19 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:37:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:37:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:37:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:37:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:37:29 np0005546420.localdomain podman[85248]: 2025-12-05 08:37:29.532748617 +0000 UTC m=+0.097447667 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, build-date=2025-11-19T00:11:48Z, vcs-type=git, tcib_managed=true, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:37:29 np0005546420.localdomain podman[85246]: 2025-12-05 08:37:29.559218894 +0000 UTC m=+0.132139058 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:37:29 np0005546420.localdomain podman[85247]: 2025-12-05 08:37:29.582083739 +0000 UTC m=+0.149972598 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5)
Dec 05 08:37:29 np0005546420.localdomain podman[85248]: 2025-12-05 08:37:29.588133716 +0000 UTC m=+0.152832726 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, tcib_managed=true, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:37:29 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:37:29 np0005546420.localdomain podman[85246]: 2025-12-05 08:37:29.641270005 +0000 UTC m=+0.214190169 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:37:29 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:37:29 np0005546420.localdomain podman[85247]: 2025-12-05 08:37:29.661938833 +0000 UTC m=+0.229827702 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=)
Dec 05 08:37:29 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:37:29 np0005546420.localdomain podman[85245]: 2025-12-05 08:37:29.726680951 +0000 UTC m=+0.301541245 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1)
Dec 05 08:37:29 np0005546420.localdomain podman[85245]: 2025-12-05 08:37:29.740188287 +0000 UTC m=+0.315048601 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:37:29 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:37:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:37:30 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:37:30 np0005546420.localdomain recover_tripleo_nova_virtqemud[85354]: 62579
Dec 05 08:37:30 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:37:30 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:37:30 np0005546420.localdomain podman[85347]: 2025-12-05 08:37:30.485710339 +0000 UTC m=+0.070762215 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible)
Dec 05 08:37:30 np0005546420.localdomain podman[85347]: 2025-12-05 08:37:30.882606824 +0000 UTC m=+0.467658680 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12)
Dec 05 08:37:30 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:37:32 np0005546420.localdomain sshd[85372]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:37:32 np0005546420.localdomain sudo[85374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:37:32 np0005546420.localdomain sudo[85374]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:37:32 np0005546420.localdomain sudo[85374]: pam_unix(sudo:session): session closed for user root
Dec 05 08:37:32 np0005546420.localdomain sshd[85372]: Received disconnect from 93.157.248.178 port 49934:11: Bye Bye [preauth]
Dec 05 08:37:32 np0005546420.localdomain sshd[85372]: Disconnected from authenticating user root 93.157.248.178 port 49934 [preauth]
Dec 05 08:37:33 np0005546420.localdomain sudo[85389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:37:33 np0005546420.localdomain sudo[85389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:37:33 np0005546420.localdomain sudo[85389]: pam_unix(sudo:session): session closed for user root
Dec 05 08:37:34 np0005546420.localdomain sudo[85435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:37:34 np0005546420.localdomain sudo[85435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:37:34 np0005546420.localdomain sudo[85435]: pam_unix(sudo:session): session closed for user root
Dec 05 08:37:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:37:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:37:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:37:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:37:36 np0005546420.localdomain podman[85453]: 2025-12-05 08:37:36.529860861 +0000 UTC m=+0.095805276 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1)
Dec 05 08:37:36 np0005546420.localdomain systemd[1]: tmp-crun.05qBXJ.mount: Deactivated successfully.
Dec 05 08:37:36 np0005546420.localdomain podman[85450]: 2025-12-05 08:37:36.579586876 +0000 UTC m=+0.153887969 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:37:36 np0005546420.localdomain podman[85453]: 2025-12-05 08:37:36.585507478 +0000 UTC m=+0.151451893 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Dec 05 08:37:36 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:37:36 np0005546420.localdomain podman[85450]: 2025-12-05 08:37:36.611375886 +0000 UTC m=+0.185676959 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc.)
Dec 05 08:37:36 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:37:36 np0005546420.localdomain podman[85452]: 2025-12-05 08:37:36.664618819 +0000 UTC m=+0.234279699 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 08:37:36 np0005546420.localdomain podman[85451]: 2025-12-05 08:37:36.728642005 +0000 UTC m=+0.299060459 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, architecture=x86_64, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:37:36 np0005546420.localdomain podman[85451]: 2025-12-05 08:37:36.740381487 +0000 UTC m=+0.310799941 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc.)
Dec 05 08:37:36 np0005546420.localdomain podman[85452]: 2025-12-05 08:37:36.750984344 +0000 UTC m=+0.320645264 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, version=17.1.12, config_id=tripleo_step3, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 05 08:37:36 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:37:36 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
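[The entries above show one complete healthcheck pass for the iscsid and collectd containers: podman logs a health_status event carrying the container's full label set, the exec session exits (exec_died), and systemd deactivates the transient healthcheck unit. Below is a minimal sketch for pulling the per-container verdict out of lines in this format; the regex and label positions are assumptions inferred only from the entries shown here, not from any podman specification.]

import re

# One podman health event per journal line, e.g.:
#   "... container health_status a13af8b4... (image=..., name=iscsid, health_status=healthy, ...)"
EVENT_RE = re.compile(
    r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
    r"\((?P<labels>.*)\)\s*$"
)

def parse_event(line):
    """Return (event, container_id, container_name, health_status) or None.

    In the entries above, the name= label directly after image= is the
    runtime container name (iscsid, collectd, ...); health_status= only
    appears on health_status events, so it is None for exec_died.
    """
    m = EVENT_RE.search(line)
    if not m:
        return None
    labels = m.group("labels")
    name = re.search(r"(?:^|, )name=([^,]+)", labels)
    health = re.search(r"(?:^|, )health_status=([^,]+)", labels)
    return (
        m.group("event"),
        m.group("cid"),
        name.group(1) if name else None,
        health.group(1) if health else None,
    )

[Applied to the first entry above, parse_event returns ('health_status', 'a13af8b4…', 'iscsid', 'healthy').]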
Dec 05 08:37:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:37:50 np0005546420.localdomain systemd[1]: tmp-crun.qILoNx.mount: Deactivated successfully.
Dec 05 08:37:50 np0005546420.localdomain podman[85579]: 2025-12-05 08:37:50.510432772 +0000 UTC m=+0.087648026 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Dec 05 08:37:50 np0005546420.localdomain podman[85579]: 2025-12-05 08:37:50.740494731 +0000 UTC m=+0.317709945 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, container_name=metrics_qdr, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1)
Dec 05 08:37:50 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
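[Each event also embeds the container's TripleO-rendered configuration as a config_data={...} label. The quoting style ('privileged': True, single-quoted strings) is Python literal syntax, so the dict can be recovered with a balanced-brace scan plus ast.literal_eval. A sketch, assuming no brace characters occur inside string values, which holds for every entry shown here:]

import ast

def extract_config_data(line):
    """Recover the config_data dict embedded in a podman event line.

    Scans for the balanced {...} after 'config_data=' and parses it as a
    Python literal. Would mis-scan if a string value ever contained a
    brace, which none of the entries above do.
    """
    start = line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[start:i + 1])
    raise ValueError("unbalanced config_data literal")

[On the metrics_qdr entry above, this yields a dict whose ['healthcheck']['test'] is '/openstack/healthcheck' and whose ['volumes'] lists the bind mounts verbatim.]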
Dec 05 08:38:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:38:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:38:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:38:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:38:00 np0005546420.localdomain podman[85610]: 2025-12-05 08:38:00.494593337 +0000 UTC m=+0.069921619 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true)
Dec 05 08:38:00 np0005546420.localdomain podman[85609]: 2025-12-05 08:38:00.556516517 +0000 UTC m=+0.134125799 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 05 08:38:00 np0005546420.localdomain podman[85609]: 2025-12-05 08:38:00.606392066 +0000 UTC m=+0.184001338 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible)
Dec 05 08:38:00 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:38:00 np0005546420.localdomain podman[85611]: 2025-12-05 08:38:00.527938775 +0000 UTC m=+0.098438369 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:38:00 np0005546420.localdomain podman[85611]: 2025-12-05 08:38:00.658165943 +0000 UTC m=+0.228665797 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12)
Dec 05 08:38:00 np0005546420.localdomain podman[85608]: 2025-12-05 08:38:00.607289843 +0000 UTC m=+0.186156495 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container)
Dec 05 08:38:00 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:38:00 np0005546420.localdomain podman[85610]: 2025-12-05 08:38:00.682312298 +0000 UTC m=+0.257640660 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:36:58Z, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 08:38:00 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:38:00 np0005546420.localdomain podman[85608]: 2025-12-05 08:38:00.73877102 +0000 UTC m=+0.317637682 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:38:00 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
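[Within this window each container is checked on its own clock: iscsid and collectd at 08:37:36 and again at 08:38:07, metrics_qdr at 08:37:50, and the tripleo_step4/step5 containers at 08:38:00-08:38:01. The sketch below groups health_status events by container and reports the spacing between consecutive checks, reusing parse_event from the earlier sketch; the timestamp format is taken from the podman lines shown here.]

import re
from collections import defaultdict
from datetime import datetime

TS_RE = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC")

def health_intervals(lines):
    """Seconds between consecutive health_status events, per container."""
    ticks = defaultdict(list)
    for line in lines:
        parsed = parse_event(line)  # from the earlier sketch
        stamp = TS_RE.search(line)
        if not parsed or parsed[0] != "health_status" or not stamp:
            continue
        whole, frac = stamp.group(1).split(".")
        # strptime's %f accepts at most six fractional digits; drop the rest.
        when = datetime.strptime(f"{whole}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")
        ticks[parsed[2]].append(when)
    return {
        name: [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        for name, times in ticks.items()
    }

[Run over this section, it shows iscsid and collectd re-checked after roughly 31 seconds, consistent with podman's default 30-second health interval; the remaining containers fall on their own staggered schedules.]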
Dec 05 08:38:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:38:01 np0005546420.localdomain podman[85707]: 2025-12-05 08:38:01.508385666 +0000 UTC m=+0.087858323 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 05 08:38:01 np0005546420.localdomain podman[85707]: 2025-12-05 08:38:01.915414323 +0000 UTC m=+0.494886960 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=)
Dec 05 08:38:01 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
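[Note that the two containers built from the same nova-compute image run different probes: nova_compute passes 5672 (the AMQP broker port) as an argument to /openstack/healthcheck, while nova_migration_target and most step-4 services call it bare, and logrotate_crond uses a dedicated cron healthcheck script. A sketch that tabulates each container's probe command by combining the two helpers above:]

def healthcheck_commands(lines):
    """Map container_name -> the healthcheck 'test' command from config_data."""
    probes = {}
    for line in lines:
        parsed = parse_event(line)
        if not parsed or parsed[0] != "health_status":
            continue
        cfg = extract_config_data(line)
        probes[parsed[2]] = cfg.get("healthcheck", {}).get("test")
    return probes

[For the entries above this yields, e.g., {'nova_compute': '/openstack/healthcheck 5672', 'nova_migration_target': '/openstack/healthcheck', 'logrotate_crond': '/usr/share/openstack-tripleo-common/healthcheck/cron', ...}.]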
Dec 05 08:38:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:38:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:38:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:38:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:38:07 np0005546420.localdomain podman[85729]: 2025-12-05 08:38:07.507084476 +0000 UTC m=+0.084030904 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-ovn-controller, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:38:07 np0005546420.localdomain podman[85729]: 2025-12-05 08:38:07.558410479 +0000 UTC m=+0.135356927 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:38:07 np0005546420.localdomain systemd[1]: tmp-crun.kOHs1z.mount: Deactivated successfully.
Dec 05 08:38:07 np0005546420.localdomain podman[85732]: 2025-12-05 08:38:07.575350042 +0000 UTC m=+0.141978451 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ovn_metadata_agent, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:38:07 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:38:07 np0005546420.localdomain podman[85730]: 2025-12-05 08:38:07.629045318 +0000 UTC m=+0.200856207 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=)
Dec 05 08:38:07 np0005546420.localdomain podman[85730]: 2025-12-05 08:38:07.641476542 +0000 UTC m=+0.213287441 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container)
Dec 05 08:38:07 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:38:07 np0005546420.localdomain podman[85731]: 2025-12-05 08:38:07.725468564 +0000 UTC m=+0.295732486 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:38:07 np0005546420.localdomain podman[85731]: 2025-12-05 08:38:07.736878046 +0000 UTC m=+0.307141978 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.expose-services=, release=1761123044, vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, architecture=x86_64)
Dec 05 08:38:07 np0005546420.localdomain podman[85732]: 2025-12-05 08:38:07.747278347 +0000 UTC m=+0.313906726 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent)
Dec 05 08:38:07 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:38:07 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:38:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:38:21 np0005546420.localdomain podman[85811]: 2025-12-05 08:38:21.501653287 +0000 UTC m=+0.078520263 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:38:21 np0005546420.localdomain podman[85811]: 2025-12-05 08:38:21.69033984 +0000 UTC m=+0.267206776 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true)
Dec 05 08:38:21 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:38:28 np0005546420.localdomain sshd[85840]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:38:29 np0005546420.localdomain sshd[85840]: Received disconnect from 195.250.72.168 port 56100:11: Bye Bye [preauth]
Dec 05 08:38:29 np0005546420.localdomain sshd[85840]: Disconnected from authenticating user root 195.250.72.168 port 56100 [preauth]
Dec 05 08:38:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:38:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:38:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:38:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:38:31 np0005546420.localdomain podman[85842]: 2025-12-05 08:38:31.512379692 +0000 UTC m=+0.089461481 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:38:31 np0005546420.localdomain podman[85842]: 2025-12-05 08:38:31.52496376 +0000 UTC m=+0.102045599 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp17/openstack-cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:38:31 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:38:31 np0005546420.localdomain podman[85843]: 2025-12-05 08:38:31.567801823 +0000 UTC m=+0.140361722 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4)
Dec 05 08:38:31 np0005546420.localdomain podman[85843]: 2025-12-05 08:38:31.595181807 +0000 UTC m=+0.167741676 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi)
Dec 05 08:38:31 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:38:31 np0005546420.localdomain podman[85844]: 2025-12-05 08:38:31.608419625 +0000 UTC m=+0.177373183 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-type=git)
Dec 05 08:38:31 np0005546420.localdomain podman[85844]: 2025-12-05 08:38:31.637372999 +0000 UTC m=+0.206326547 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:38:31 np0005546420.localdomain podman[85850]: 2025-12-05 08:38:31.66819253 +0000 UTC m=+0.231068571 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute)
Dec 05 08:38:31 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:38:31 np0005546420.localdomain podman[85850]: 2025-12-05 08:38:31.716654275 +0000 UTC m=+0.279530266 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:38:31 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:38:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:38:32 np0005546420.localdomain podman[85944]: 2025-12-05 08:38:32.504220964 +0000 UTC m=+0.080898037 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:38:32 np0005546420.localdomain podman[85944]: 2025-12-05 08:38:32.880778602 +0000 UTC m=+0.457455635 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 05 08:38:32 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:38:33 np0005546420.localdomain sshd[85967]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:38:34 np0005546420.localdomain sudo[85968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:38:34 np0005546420.localdomain sudo[85968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:38:34 np0005546420.localdomain sudo[85968]: pam_unix(sudo:session): session closed for user root
Dec 05 08:38:34 np0005546420.localdomain sudo[85983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:38:34 np0005546420.localdomain sudo[85983]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:38:35 np0005546420.localdomain sudo[85983]: pam_unix(sudo:session): session closed for user root
Dec 05 08:38:35 np0005546420.localdomain sudo[86032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:38:35 np0005546420.localdomain sudo[86032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:38:35 np0005546420.localdomain sudo[86032]: pam_unix(sudo:session): session closed for user root
Dec 05 08:38:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:38:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:38:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:38:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:38:38 np0005546420.localdomain podman[86049]: 2025-12-05 08:38:38.492519394 +0000 UTC m=+0.066242885 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, managed_by=tripleo_ansible)
Dec 05 08:38:38 np0005546420.localdomain systemd[1]: tmp-crun.mYN7tP.mount: Deactivated successfully.
Dec 05 08:38:38 np0005546420.localdomain podman[86050]: 2025-12-05 08:38:38.527234334 +0000 UTC m=+0.095202318 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 05 08:38:38 np0005546420.localdomain podman[86047]: 2025-12-05 08:38:38.598685108 +0000 UTC m=+0.173425860 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z)
Dec 05 08:38:38 np0005546420.localdomain podman[86047]: 2025-12-05 08:38:38.639097426 +0000 UTC m=+0.213838228 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.4)
Dec 05 08:38:38 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:38:38 np0005546420.localdomain podman[86048]: 2025-12-05 08:38:38.548845161 +0000 UTC m=+0.122394887 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4)
Dec 05 08:38:38 np0005546420.localdomain podman[86050]: 2025-12-05 08:38:38.658277398 +0000 UTC m=+0.226245412 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1)
Dec 05 08:38:38 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:38:38 np0005546420.localdomain podman[86049]: 2025-12-05 08:38:38.681336549 +0000 UTC m=+0.255060070 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 05 08:38:38 np0005546420.localdomain podman[86048]: 2025-12-05 08:38:38.684309191 +0000 UTC m=+0.257858887 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, release=1761123044, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com)
Dec 05 08:38:38 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:38:38 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:38:49 np0005546420.localdomain sshd[86173]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:38:50 np0005546420.localdomain sshd[86173]: Received disconnect from 93.157.248.178 port 42364:11: Bye Bye [preauth]
Dec 05 08:38:50 np0005546420.localdomain sshd[86173]: Disconnected from authenticating user root 93.157.248.178 port 42364 [preauth]
Dec 05 08:38:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:38:52 np0005546420.localdomain podman[86175]: 2025-12-05 08:38:52.517246106 +0000 UTC m=+0.095891829 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:38:52 np0005546420.localdomain podman[86175]: 2025-12-05 08:38:52.723093037 +0000 UTC m=+0.301738760 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true)
Dec 05 08:38:52 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:39:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:39:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:39:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:39:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:39:02 np0005546420.localdomain podman[86206]: 2025-12-05 08:39:02.511093598 +0000 UTC m=+0.079154263 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=nova_compute, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, release=1761123044, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:39:02 np0005546420.localdomain podman[86207]: 2025-12-05 08:39:02.532792188 +0000 UTC m=+0.096677763 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public)
Dec 05 08:39:02 np0005546420.localdomain podman[86206]: 2025-12-05 08:39:02.539841456 +0000 UTC m=+0.107902121 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, config_id=tripleo_step5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1)
Dec 05 08:39:02 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:39:02 np0005546420.localdomain podman[86204]: 2025-12-05 08:39:02.589144107 +0000 UTC m=+0.157294244 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:39:02 np0005546420.localdomain podman[86207]: 2025-12-05 08:39:02.616906643 +0000 UTC m=+0.180792168 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:39:02 np0005546420.localdomain podman[86205]: 2025-12-05 08:39:02.617071618 +0000 UTC m=+0.183801121 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:39:02 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:39:02 np0005546420.localdomain podman[86204]: 2025-12-05 08:39:02.649895221 +0000 UTC m=+0.218045378 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:39:02 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:39:02 np0005546420.localdomain podman[86205]: 2025-12-05 08:39:02.700315637 +0000 UTC m=+0.267045160 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=)
Dec 05 08:39:02 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:39:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:39:03 np0005546420.localdomain podman[86300]: 2025-12-05 08:39:03.504669624 +0000 UTC m=+0.086372966 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Dec 05 08:39:03 np0005546420.localdomain podman[86300]: 2025-12-05 08:39:03.883671327 +0000 UTC m=+0.465374669 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z)
Dec 05 08:39:03 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:39:09 np0005546420.localdomain recover_tripleo_nova_virtqemud[86347]: 62579
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:39:09 np0005546420.localdomain podman[86324]: 2025-12-05 08:39:09.511049711 +0000 UTC m=+0.079752931 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:39:09 np0005546420.localdomain podman[86323]: 2025-12-05 08:39:09.555700809 +0000 UTC m=+0.131309083 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Dec 05 08:39:09 np0005546420.localdomain podman[86325]: 2025-12-05 08:39:09.564456499 +0000 UTC m=+0.132752797 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container)
Dec 05 08:39:09 np0005546420.localdomain podman[86331]: 2025-12-05 08:39:09.522018199 +0000 UTC m=+0.082078743 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, distribution-scope=public, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_metadata_agent)
Dec 05 08:39:09 np0005546420.localdomain podman[86331]: 2025-12-05 08:39:09.601488291 +0000 UTC m=+0.161548815 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_id=tripleo_step4)
Dec 05 08:39:09 np0005546420.localdomain podman[86323]: 2025-12-05 08:39:09.607249939 +0000 UTC m=+0.182858233 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, vendor=Red Hat, Inc., distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:39:09 np0005546420.localdomain podman[86325]: 2025-12-05 08:39:09.652064111 +0000 UTC m=+0.220360469 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, distribution-scope=public, container_name=iscsid, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container)
Dec 05 08:39:09 np0005546420.localdomain podman[86324]: 2025-12-05 08:39:09.652483955 +0000 UTC m=+0.221187175 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:39:09 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:39:10 np0005546420.localdomain systemd[1]: tmp-crun.lQBpsj.mount: Deactivated successfully.
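
The exec_died / "Deactivated successfully" pairs above are the normal tail of a podman healthcheck cycle on this node: systemd starts a transient <container-id>.service that runs /usr/bin/podman healthcheck run <id>, podman execs the container's configured test command (the 'healthcheck' key in config_data), records the verdict as a health_status event, emits exec_died when the exec session exits, and the transient unit then deactivates. The tmp-crun.*.mount lines are the same story one layer down: crun sets up a short-lived mount for the exec session and systemd logs its cleanup. A minimal sketch for reading the same health state on demand, assuming podman is on PATH and the container is present on this host:

    import json
    import subprocess

    def health_state(container_id: str) -> str:
        # 'podman inspect' prints a JSON array with one object per container;
        # State.Health.Status holds the most recent healthcheck verdict.
        out = subprocess.run(
            ["podman", "inspect", container_id],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)[0]["State"]["Health"]["Status"]

    # e.g. health_state("dec08eea02d1") should return "healthy" for the
    # ovn_metadata_agent container seen above (podman accepts ID prefixes).
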
Dec 05 08:39:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:39:23 np0005546420.localdomain podman[86411]: 2025-12-05 08:39:23.513227876 +0000 UTC m=+0.090380359 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:39:23 np0005546420.localdomain podman[86411]: 2025-12-05 08:39:23.709495322 +0000 UTC m=+0.286647785 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step1, architecture=x86_64, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 05 08:39:23 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
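
Each podman event above carries the container's full TripleO-rendered definition in its config_data label. Note the serialization: it is a Python literal (single quotes, bare True/False), not JSON, so json.loads will reject it; ast.literal_eval parses it safely without executing anything. A sketch, with the label value trimmed to a few keys from the metrics_qdr event above:

    import ast

    # Trimmed from the config_data label on the metrics_qdr event.
    label = ("{'healthcheck': {'test': '/openstack/healthcheck'}, "
             "'net': 'host', 'privileged': False, 'restart': 'always', "
             "'user': 'qdrouterd'}")

    config_data = ast.literal_eval(label)        # safe: literals only, no eval
    print(config_data["healthcheck"]["test"])    # -> /openstack/healthcheck
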
Dec 05 08:39:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:39:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:39:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:39:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:39:33 np0005546420.localdomain podman[86441]: 2025-12-05 08:39:33.507392949 +0000 UTC m=+0.087633615 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:39:33 np0005546420.localdomain podman[86441]: 2025-12-05 08:39:33.520568856 +0000 UTC m=+0.100809512 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64)
Dec 05 08:39:33 np0005546420.localdomain systemd[1]: tmp-crun.HiWJXq.mount: Deactivated successfully.
Dec 05 08:39:33 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:39:33 np0005546420.localdomain podman[86442]: 2025-12-05 08:39:33.52458917 +0000 UTC m=+0.098305955 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:39:33 np0005546420.localdomain podman[86449]: 2025-12-05 08:39:33.577572635 +0000 UTC m=+0.141374283 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, container_name=ceilometer_agent_compute, architecture=x86_64, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git)
Dec 05 08:39:33 np0005546420.localdomain podman[86449]: 2025-12-05 08:39:33.606309181 +0000 UTC m=+0.170110829 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, architecture=x86_64)
Dec 05 08:39:33 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:39:33 np0005546420.localdomain podman[86442]: 2025-12-05 08:39:33.656592593 +0000 UTC m=+0.230309408 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:39:33 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:39:33 np0005546420.localdomain podman[86443]: 2025-12-05 08:39:33.668404847 +0000 UTC m=+0.238620714 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, version=17.1.12, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Dec 05 08:39:33 np0005546420.localdomain podman[86443]: 2025-12-05 08:39:33.702411807 +0000 UTC m=+0.272627654 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:39:33 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:39:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:39:34 np0005546420.localdomain podman[86537]: 2025-12-05 08:39:34.489638186 +0000 UTC m=+0.068756963 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, tcib_managed=true, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute)
Dec 05 08:39:34 np0005546420.localdomain podman[86537]: 2025-12-05 08:39:34.884614972 +0000 UTC m=+0.463733799 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=nova_migration_target, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z)
Dec 05 08:39:34 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
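
The healthcheck test strings vary by service: logrotate_crond uses the dedicated cron healthcheck script, while ovn_controller and nova_compute pass a port argument ('/openstack/healthcheck 6642' and '/openstack/healthcheck 5672'); in TripleO images that argument appears to name the TCP port (the OVN southbound DB and AMQP, respectively) whose connectivity the script verifies. To see how often each container reported healthy in a window, the journal can be read as JSON; a sketch, assuming a readable system journal:

    import json
    import subprocess
    from collections import Counter

    # journalctl -o json emits one JSON object per line; the podman event
    # text lands in the MESSAGE field (which can be a byte array for
    # non-UTF-8 payloads, hence the isinstance guard).
    out = subprocess.run(
        ["journalctl", "-o", "json", "--since", "-1h", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout

    counts = Counter()
    for line in out.splitlines():
        msg = json.loads(line).get("MESSAGE", "")
        if not isinstance(msg, str):
            continue
        if ("container health_status" in msg
                and "health_status=healthy" in msg
                and "container_name=" in msg):
            # container_name=<value> is one comma-separated label in the event
            name = msg.split("container_name=", 1)[1].split(",", 1)[0]
            counts[name] += 1

    print(counts.most_common())
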
Dec 05 08:39:36 np0005546420.localdomain sudo[86561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:39:36 np0005546420.localdomain sudo[86561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:39:36 np0005546420.localdomain sudo[86561]: pam_unix(sudo:session): session closed for user root
Dec 05 08:39:36 np0005546420.localdomain sudo[86576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:39:36 np0005546420.localdomain sudo[86576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:39:36 np0005546420.localdomain sudo[86576]: pam_unix(sudo:session): session closed for user root
Dec 05 08:39:37 np0005546420.localdomain sudo[86623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:39:37 np0005546420.localdomain sudo[86623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:39:37 np0005546420.localdomain sudo[86623]: pam_unix(sudo:session): session closed for user root
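
The three sudo blocks above are the Ceph orchestrator's periodic host crawl: the ceph-admin account escalates to root to run the content-addressed cephadm binary staged under /var/lib/ceph/<fsid>/ with an 895 s timeout ('gather-facts' collects host inventory), bracketed by small probes (which python3, ls /etc/sysctl.d). Pulling these escalations out of a log export is a one-regex job; a sketch over an illustrative file name:

    import re

    # Matches the sudo command-summary lines (not the pam_unix session lines).
    SUDO_CMD = re.compile(r"sudo\[\d+\]: (?P<user>\S+) : .*COMMAND=(?P<cmd>.+)$")

    with open("messages.log") as fh:            # hypothetical journal export
        for line in fh:
            m = SUDO_CMD.search(line)
            if m:
                print(f'{m["user"]} -> {m["cmd"]}')

    # On this section it would print, e.g.:
    #   ceph-admin -> /bin/which python3
    #   ceph-admin -> /bin/python3 /var/lib/ceph/79feddb1-.../cephadm.a31f... gather-facts
    #   ceph-admin -> /bin/ls /etc/sysctl.d
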
Dec 05 08:39:38 np0005546420.localdomain sshd[86638]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:39:39 np0005546420.localdomain sshd[86638]: Received disconnect from 195.250.72.168 port 34274:11: Bye Bye [preauth]
Dec 05 08:39:39 np0005546420.localdomain sshd[86638]: Disconnected from authenticating user root 195.250.72.168 port 34274 [preauth]
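
The sshd trio above is unrelated to the deployment: 195.250.72.168 opened a connection, was informed that SHA-1 ssh-rsa signatures are disabled (consistent with the RHEL 9 DEFAULT crypto policy), and disconnected before completing authentication as root. Isolated [preauth] disconnects like this are routine Internet scanning against an exposed SSH port; a steady stream of them from one address is the usual trigger for firewalling or fail2ban-style rate limiting.
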
Dec 05 08:39:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:39:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:39:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:39:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:39:40 np0005546420.localdomain podman[86642]: 2025-12-05 08:39:40.511155469 +0000 UTC m=+0.078934777 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4)
Dec 05 08:39:40 np0005546420.localdomain podman[86642]: 2025-12-05 08:39:40.52773999 +0000 UTC m=+0.095519358 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, container_name=iscsid, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team)
Dec 05 08:39:40 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:39:40 np0005546420.localdomain podman[86641]: 2025-12-05 08:39:40.581133588 +0000 UTC m=+0.151538677 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044)
Dec 05 08:39:40 np0005546420.localdomain systemd[1]: tmp-crun.L67wxu.mount: Deactivated successfully.
Dec 05 08:39:40 np0005546420.localdomain podman[86640]: 2025-12-05 08:39:40.619838092 +0000 UTC m=+0.190213840 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:39:40 np0005546420.localdomain podman[86641]: 2025-12-05 08:39:40.620312837 +0000 UTC m=+0.190717876 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vcs-type=git, managed_by=tripleo_ansible, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:39:40 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:39:40 np0005546420.localdomain podman[86643]: 2025-12-05 08:39:40.673948051 +0000 UTC m=+0.239695676 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:39:40 np0005546420.localdomain podman[86640]: 2025-12-05 08:39:40.704582447 +0000 UTC m=+0.274958185 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, container_name=ovn_controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, release=1761123044)
Dec 05 08:39:40 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:39:40 np0005546420.localdomain podman[86643]: 2025-12-05 08:39:40.724308865 +0000 UTC m=+0.290056450 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, release=1761123044)
Dec 05 08:39:40 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:39:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:39:54 np0005546420.localdomain podman[86771]: 2025-12-05 08:39:54.530781243 +0000 UTC m=+0.101264496 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container)
Dec 05 08:39:54 np0005546420.localdomain podman[86771]: 2025-12-05 08:39:54.767549458 +0000 UTC m=+0.338032691 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd)
Dec 05 08:39:54 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
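Editor's note: the four entries above (Started, health_status, exec_died, Deactivated) form one complete podman healthcheck cycle: systemd launches a transient unit named after the container ID, podman runs the configured check and logs the result, the check process exits, and the unit deactivates. As a rough way to measure how long each check took, the sketch below pairs the two podman events by container ID and subtracts the monotonic m=+ offsets. The regex and the helper name check_durations are illustrative assumptions based only on the line format shown here; the offsets are comparable only because both events for a given check come from the same podman process.

    import re

    # Podman event lines in this journal look like:
    #   "... podman[PID]: 2025-12-05 08:39:54... m=+0.101264496 container
    #    health_status <64-hex-id> (image=..., name=metrics_qdr, ...)"
    EVENT_RE = re.compile(
        r"m=\+(?P<offset>[\d.]+) container (?P<event>health_status|exec_died) "
        r"(?P<cid>[0-9a-f]{64}) \(image=[^,]+, name=(?P<name>[^,)]+)"
    )

    def check_durations(journal_lines):
        """Pair each health_status with the later exec_died for the same
        container ID and return the seconds elapsed between them."""
        pending, durations = {}, {}
        for line in journal_lines:
            m = EVENT_RE.search(line)
            if not m:
                continue
            if m["event"] == "health_status":
                pending[m["cid"]] = float(m["offset"])
            elif m["cid"] in pending:
                durations[m["name"]] = float(m["offset"]) - pending.pop(m["cid"])
        return durations

    # For metrics_qdr above: 0.338032691 - 0.101264496, roughly 0.24 s.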
Dec 05 08:40:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:40:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:40:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:40:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:40:04 np0005546420.localdomain podman[86803]: 2025-12-05 08:40:04.527698793 +0000 UTC m=+0.093010740 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64)
Dec 05 08:40:04 np0005546420.localdomain podman[86801]: 2025-12-05 08:40:04.584120144 +0000 UTC m=+0.155936621 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, release=1761123044, container_name=ceilometer_agent_ipmi, vcs-type=git, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:40:04 np0005546420.localdomain podman[86801]: 2025-12-05 08:40:04.614637216 +0000 UTC m=+0.186453703 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi)
Dec 05 08:40:04 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:40:04 np0005546420.localdomain podman[86802]: 2025-12-05 08:40:04.634133617 +0000 UTC m=+0.200975332 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, container_name=nova_compute, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 05 08:40:04 np0005546420.localdomain podman[86802]: 2025-12-05 08:40:04.688760613 +0000 UTC m=+0.255602358 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 05 08:40:04 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:40:04 np0005546420.localdomain podman[86803]: 2025-12-05 08:40:04.711589217 +0000 UTC m=+0.276901164 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12)
Dec 05 08:40:04 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:40:04 np0005546420.localdomain podman[86800]: 2025-12-05 08:40:04.692206359 +0000 UTC m=+0.264022866 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 05 08:40:04 np0005546420.localdomain podman[86800]: 2025-12-05 08:40:04.770897247 +0000 UTC m=+0.342713754 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:40:04 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
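Editor's note: the config_data label embedded in these podman lines is a Python-style literal (single-quoted strings, True/False, integers), so it can be recovered programmatically instead of read by eye. A minimal sketch, assuming lines shaped exactly like the ones above; extract_config_data is a hypothetical helper, and the brace matching ignores the possibility of braces inside quoted strings (none occur in these labels).

    import ast

    def extract_config_data(line):
        """Pull the config_data={...} label out of a podman journal line and
        parse it with ast.literal_eval (the value is a valid Python literal)."""
        start = line.find("config_data={")
        if start == -1:
            return None
        i = line.index("{", start)
        depth = 0
        for j, ch in enumerate(line[i:], i):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[i:j + 1])
        return None  # unbalanced braces: the label was truncated

    # Usage, where journal_line is the logrotate_crond line above:
    #   cfg = extract_config_data(journal_line)
    #   cfg["healthcheck"]["test"]  -> '/usr/share/openstack-tripleo-common/healthcheck/cron'
    #   cfg["net"], cfg["pid"]      -> ('none', 'host')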
Dec 05 08:40:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:40:05 np0005546420.localdomain podman[86898]: 2025-12-05 08:40:05.504942615 +0000 UTC m=+0.081010671 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=nova_migration_target, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true)
Dec 05 08:40:05 np0005546420.localdomain podman[86898]: 2025-12-05 08:40:05.877822639 +0000 UTC m=+0.453890665 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, release=1761123044, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:40:05 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:40:06 np0005546420.localdomain sshd[86922]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:40:06 np0005546420.localdomain sshd[86922]: Received disconnect from 93.157.248.178 port 35788:11: Bye Bye [preauth]
Dec 05 08:40:06 np0005546420.localdomain sshd[86922]: Disconnected from authenticating user root 93.157.248.178 port 35788 [preauth]
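Editor's note: the three sshd entries above record an unauthenticated connection from 93.157.248.178 that attempted to log in as root and was dropped during preauth; the ssh-rsa message most likely reflects the host's system-wide crypto policy rather than an error. A small sketch for tallying such attempts per user and source address from lines in this format (failed_auth_sources and its regex are illustrative assumptions):

    import re
    from collections import Counter

    # Matches lines like:
    #   "sshd[86922]: Disconnected from authenticating user root
    #    93.157.248.178 port 35788 [preauth]"
    DISCONNECT_RE = re.compile(
        r"sshd\[\d+\]: Disconnected from authenticating user (?P<user>\S+) "
        r"(?P<ip>[\d.]+) port \d+ \[preauth\]"
    )

    def failed_auth_sources(journal_lines):
        """Count preauth disconnects per (user, source IP) pair."""
        hits = Counter()
        for line in journal_lines:
            m = DISCONNECT_RE.search(line)
            if m:
                hits[(m["user"], m["ip"])] += 1
        return hits

    # For this section the result is {('root', '93.157.248.178'): 1}.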
Dec 05 08:40:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:40:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:40:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:40:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:40:11 np0005546420.localdomain podman[86925]: 2025-12-05 08:40:11.517255514 +0000 UTC m=+0.090651737 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=collectd, architecture=x86_64, config_id=tripleo_step3, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:40:11 np0005546420.localdomain podman[86926]: 2025-12-05 08:40:11.495268546 +0000 UTC m=+0.068461783 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, container_name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, architecture=x86_64)
Dec 05 08:40:11 np0005546420.localdomain podman[86925]: 2025-12-05 08:40:11.548349374 +0000 UTC m=+0.121745597 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, container_name=collectd)
Dec 05 08:40:11 np0005546420.localdomain podman[86924]: 2025-12-05 08:40:11.556552227 +0000 UTC m=+0.129614640 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, distribution-scope=public, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team)
Dec 05 08:40:11 np0005546420.localdomain podman[86926]: 2025-12-05 08:40:11.582332822 +0000 UTC m=+0.155526049 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044)
Dec 05 08:40:11 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:40:11 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:40:11 np0005546420.localdomain podman[86924]: 2025-12-05 08:40:11.656674836 +0000 UTC m=+0.229737489 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com)
Dec 05 08:40:11 np0005546420.localdomain podman[86927]: 2025-12-05 08:40:11.667014475 +0000 UTC m=+0.235543808 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public)
Dec 05 08:40:11 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:40:11 np0005546420.localdomain podman[86927]: 2025-12-05 08:40:11.744592398 +0000 UTC m=+0.313121731 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:40:11 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:40:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:40:25 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:40:25 np0005546420.localdomain recover_tripleo_nova_virtqemud[87014]: 62579
Dec 05 08:40:25 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:40:25 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:40:25 np0005546420.localdomain podman[87009]: 2025-12-05 08:40:25.507316396 +0000 UTC m=+0.088524693 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Dec 05 08:40:25 np0005546420.localdomain podman[87009]: 2025-12-05 08:40:25.726528219 +0000 UTC m=+0.307736526 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-qdrouterd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Dec 05 08:40:25 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:40:33 np0005546420.localdomain sshd[85967]: fatal: Timeout before authentication for 180.184.182.87 port 29016
Dec 05 08:40:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:40:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:40:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:40:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:40:35 np0005546420.localdomain podman[87045]: 2025-12-05 08:40:35.516608238 +0000 UTC m=+0.089915986 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Dec 05 08:40:35 np0005546420.localdomain podman[87045]: 2025-12-05 08:40:35.553401653 +0000 UTC m=+0.126709361 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible)
Dec 05 08:40:35 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:40:35 np0005546420.localdomain podman[87044]: 2025-12-05 08:40:35.569327125 +0000 UTC m=+0.146577364 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1)
Dec 05 08:40:35 np0005546420.localdomain podman[87044]: 2025-12-05 08:40:35.607401379 +0000 UTC m=+0.184651638 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com)
Dec 05 08:40:35 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:40:35 np0005546420.localdomain systemd[1]: tmp-crun.iE2gZ8.mount: Deactivated successfully.
Dec 05 08:40:35 np0005546420.localdomain podman[87046]: 2025-12-05 08:40:35.686441498 +0000 UTC m=+0.254933887 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_id=tripleo_step5, release=1761123044, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Dec 05 08:40:35 np0005546420.localdomain podman[87050]: 2025-12-05 08:40:35.64726721 +0000 UTC m=+0.212423386 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044)
Dec 05 08:40:35 np0005546420.localdomain podman[87046]: 2025-12-05 08:40:35.72150828 +0000 UTC m=+0.290000719 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044)
Dec 05 08:40:35 np0005546420.localdomain podman[87050]: 2025-12-05 08:40:35.731410396 +0000 UTC m=+0.296566732 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:40:35 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:40:35 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:40:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:40:36 np0005546420.localdomain podman[87139]: 2025-12-05 08:40:36.519004106 +0000 UTC m=+0.090211815 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 05 08:40:36 np0005546420.localdomain podman[87139]: 2025-12-05 08:40:36.875752922 +0000 UTC m=+0.446960571 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target)
Dec 05 08:40:36 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:40:37 np0005546420.localdomain sudo[87163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:40:37 np0005546420.localdomain sudo[87163]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:40:37 np0005546420.localdomain sudo[87163]: pam_unix(sudo:session): session closed for user root
Dec 05 08:40:37 np0005546420.localdomain sudo[87178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:40:37 np0005546420.localdomain sudo[87178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:40:38 np0005546420.localdomain sudo[87178]: pam_unix(sudo:session): session closed for user root
Dec 05 08:40:39 np0005546420.localdomain sudo[87224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:40:39 np0005546420.localdomain sudo[87224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:40:39 np0005546420.localdomain sudo[87224]: pam_unix(sudo:session): session closed for user root
Dec 05 08:40:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:40:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:40:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:40:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:40:42 np0005546420.localdomain podman[87240]: 2025-12-05 08:40:42.522734141 +0000 UTC m=+0.093122984 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, vcs-type=git, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 05 08:40:42 np0005546420.localdomain podman[87240]: 2025-12-05 08:40:42.592028129 +0000 UTC m=+0.162416932 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Dec 05 08:40:42 np0005546420.localdomain podman[87239]: 2025-12-05 08:40:42.564385946 +0000 UTC m=+0.133790919 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, container_name=ovn_controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12)
Dec 05 08:40:42 np0005546420.localdomain podman[87241]: 2025-12-05 08:40:42.632875289 +0000 UTC m=+0.197913217 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, container_name=iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid)
Dec 05 08:40:42 np0005546420.localdomain podman[87239]: 2025-12-05 08:40:42.654408614 +0000 UTC m=+0.223813557 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 05 08:40:42 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:40:42 np0005546420.localdomain podman[87241]: 2025-12-05 08:40:42.672996577 +0000 UTC m=+0.238034505 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, version=17.1.12, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.buildah.version=1.41.4, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:40:42 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:40:42 np0005546420.localdomain podman[87242]: 2025-12-05 08:40:42.593087232 +0000 UTC m=+0.154936122 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64)
Dec 05 08:40:42 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:40:42 np0005546420.localdomain podman[87242]: 2025-12-05 08:40:42.727489498 +0000 UTC m=+0.289338378 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 05 08:40:42 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:40:43 np0005546420.localdomain sshd[87325]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:40:43 np0005546420.localdomain systemd[1]: tmp-crun.5PUron.mount: Deactivated successfully.
Dec 05 08:40:44 np0005546420.localdomain sshd[87325]: Received disconnect from 195.250.72.168 port 45752:11: Bye Bye [preauth]
Dec 05 08:40:44 np0005546420.localdomain sshd[87325]: Disconnected from authenticating user root 195.250.72.168 port 45752 [preauth]
Dec 05 08:40:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:40:56 np0005546420.localdomain podman[87372]: 2025-12-05 08:40:56.515772264 +0000 UTC m=+0.090705309 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 05 08:40:56 np0005546420.localdomain podman[87372]: 2025-12-05 08:40:56.726017911 +0000 UTC m=+0.300950876 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd)
Dec 05 08:40:56 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
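The lines above show one complete podman healthcheck cycle as this journal records it: systemd starts a transient service named after the container ID to run /usr/bin/podman healthcheck run <id>, podman emits a health_status event (health_status=healthy here) followed by an exec_died event for the healthcheck exec session, and the transient unit then reports "Deactivated successfully." The same cycle repeats below for every TripleO-managed container on this node. A minimal Python sketch for pulling these events out of a saved journal extract; the regex, the health_events helper, and the journal.txt filename are illustrative assumptions, not part of the log:

    import re

    # Tailored to the health_status lines in this journal, where the label list
    # always begins "(image=..., name=<container>, health_status=<status>, ...".
    EVENT_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(image=[^,]+, name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
    )

    def health_events(lines):
        """Yield (container_id, name, status) for each health_status event."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield m["cid"], m["name"], m["status"]

    if __name__ == "__main__":
        with open("journal.txt") as fh:  # hypothetical extract of this journal
            for cid, name, status in health_events(fh):
                print(f"{name} ({cid[:12]}): {status}")

Run against the block above, this would print one line per event, e.g. "metrics_qdr (89e5d6c09a46): healthy".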
Dec 05 08:41:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:41:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:41:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:41:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:41:06 np0005546420.localdomain systemd[1]: tmp-crun.8sQACc.mount: Deactivated successfully.
Dec 05 08:41:06 np0005546420.localdomain podman[87404]: 2025-12-05 08:41:06.533288557 +0000 UTC m=+0.105864337 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:41:06 np0005546420.localdomain podman[87404]: 2025-12-05 08:41:06.564357526 +0000 UTC m=+0.136933326 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=)
Dec 05 08:41:06 np0005546420.localdomain podman[87406]: 2025-12-05 08:41:06.578583835 +0000 UTC m=+0.144211610 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1)
Dec 05 08:41:06 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:41:06 np0005546420.localdomain podman[87406]: 2025-12-05 08:41:06.614212664 +0000 UTC m=+0.179840389 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4)
Dec 05 08:41:06 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:41:06 np0005546420.localdomain podman[87405]: 2025-12-05 08:41:06.635971846 +0000 UTC m=+0.202470369 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 08:41:06 np0005546420.localdomain podman[87405]: 2025-12-05 08:41:06.666321532 +0000 UTC m=+0.232820065 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=nova_compute, tcib_managed=true, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:41:06 np0005546420.localdomain podman[87403]: 2025-12-05 08:41:06.674074021 +0000 UTC m=+0.248856829 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12)
Dec 05 08:41:06 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:41:06 np0005546420.localdomain podman[87403]: 2025-12-05 08:41:06.712322042 +0000 UTC m=+0.287104810 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-cron, release=1761123044, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 05 08:41:06 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:41:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:41:07 np0005546420.localdomain podman[87501]: 2025-12-05 08:41:07.521037283 +0000 UTC m=+0.076981407 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:41:07 np0005546420.localdomain podman[87501]: 2025-12-05 08:41:07.916423252 +0000 UTC m=+0.472367386 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 05 08:41:07 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
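Each of these podman events also carries the full TripleO-rendered container definition in its config_data label (image, net/pid namespaces, privileged flag, start_order, bind mounts), and config_id=tripleo_stepN records which deployment step created the container. As printed here, the label value is a Python dict literal, so it can be recovered without a JSON round-trip. A minimal sketch under that assumption; extract_config and journal.txt are illustrative names, and the brace counting relies on the quoted strings in these lines containing no { or } characters, which holds for the lines above:

    import ast

    def extract_config(line):
        """Return the config_data label from a journal line as a dict, or None."""
        start = line.find("config_data={")
        if start < 0:
            return None
        i = line.index("{", start)
        depth = 0
        # Balance braces to find the end of the dict literal, then parse it.
        for j, ch in enumerate(line[i:], start=i):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[i : j + 1])
        return None  # unbalanced braces: the line was truncated

    if __name__ == "__main__":
        with open("journal.txt") as fh:  # hypothetical extract of this journal
            for line in fh:
                cfg = extract_config(line)
                if cfg:
                    print(cfg["image"], "privileged:", cfg.get("privileged", False))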
Dec 05 08:41:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:41:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:41:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:41:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:41:13 np0005546420.localdomain podman[87523]: 2025-12-05 08:41:13.510867158 +0000 UTC m=+0.087398468 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git)
Dec 05 08:41:13 np0005546420.localdomain podman[87523]: 2025-12-05 08:41:13.564347548 +0000 UTC m=+0.140878848 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:41:13 np0005546420.localdomain systemd[1]: tmp-crun.IG6Ny4.mount: Deactivated successfully.
Dec 05 08:41:13 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:41:13 np0005546420.localdomain podman[87525]: 2025-12-05 08:41:13.583566131 +0000 UTC m=+0.154708645 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 05 08:41:13 np0005546420.localdomain podman[87525]: 2025-12-05 08:41:13.623536784 +0000 UTC m=+0.194679258 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, release=1761123044, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:41:13 np0005546420.localdomain podman[87526]: 2025-12-05 08:41:13.635306387 +0000 UTC m=+0.204166680 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, release=1761123044, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 08:41:13 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:41:13 np0005546420.localdomain podman[87526]: 2025-12-05 08:41:13.709530778 +0000 UTC m=+0.278391021 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, tcib_managed=true, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 05 08:41:13 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:41:13 np0005546420.localdomain podman[87524]: 2025-12-05 08:41:13.722855628 +0000 UTC m=+0.296509018 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:41:13 np0005546420.localdomain podman[87524]: 2025-12-05 08:41:13.758545979 +0000 UTC m=+0.332199369 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, vcs-type=git, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:41:13 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:41:14 np0005546420.localdomain systemd[1]: tmp-crun.BfsMWT.mount: Deactivated successfully.
Dec 05 08:41:21 np0005546420.localdomain sshd[87608]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:41:22 np0005546420.localdomain sshd[87608]: Received disconnect from 93.157.248.178 port 58994:11: Bye Bye [preauth]
Dec 05 08:41:22 np0005546420.localdomain sshd[87608]: Disconnected from authenticating user root 93.157.248.178 port 58994 [preauth]
Dec 05 08:41:24 np0005546420.localdomain sshd[87610]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:41:24 np0005546420.localdomain sshd[87610]: error: kex_exchange_identification: banner line contains invalid characters
Dec 05 08:41:24 np0005546420.localdomain sshd[87610]: banner exchange: Connection from 8.130.184.62 port 52498: invalid format
Dec 05 08:41:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:41:27 np0005546420.localdomain podman[87611]: 2025-12-05 08:41:27.518916962 +0000 UTC m=+0.094852687 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:41:27 np0005546420.localdomain podman[87611]: 2025-12-05 08:41:27.714070684 +0000 UTC m=+0.290006429 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, distribution-scope=public, release=1761123044, architecture=x86_64, tcib_managed=true, config_id=tripleo_step1, vendor=Red Hat, Inc., container_name=metrics_qdr, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container)
Dec 05 08:41:27 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:41:37 np0005546420.localdomain recover_tripleo_nova_virtqemud[87666]: 62579
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:41:37 np0005546420.localdomain podman[87640]: 2025-12-05 08:41:37.528612917 +0000 UTC m=+0.099754808 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, release=1761123044, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, name=rhosp17/openstack-cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 05 08:41:37 np0005546420.localdomain podman[87642]: 2025-12-05 08:41:37.574174492 +0000 UTC m=+0.138304087 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, container_name=nova_compute, version=17.1.12, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:41:37 np0005546420.localdomain podman[87640]: 2025-12-05 08:41:37.586571595 +0000 UTC m=+0.157713496 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20251118.1)
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:41:37 np0005546420.localdomain podman[87642]: 2025-12-05 08:41:37.636789675 +0000 UTC m=+0.200919210 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, architecture=x86_64)
Dec 05 08:41:37 np0005546420.localdomain podman[87641]: 2025-12-05 08:41:37.64054511 +0000 UTC m=+0.207314117 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:41:37 np0005546420.localdomain podman[87644]: 2025-12-05 08:41:37.716251886 +0000 UTC m=+0.275664966 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:41:37 np0005546420.localdomain podman[87641]: 2025-12-05 08:41:37.752987909 +0000 UTC m=+0.319756906 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:41:37 np0005546420.localdomain podman[87644]: 2025-12-05 08:41:37.775570296 +0000 UTC m=+0.334983386 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, architecture=x86_64)
Dec 05 08:41:37 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:41:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:41:38 np0005546420.localdomain podman[87736]: 2025-12-05 08:41:38.505152117 +0000 UTC m=+0.084873880 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:41:38 np0005546420.localdomain podman[87736]: 2025-12-05 08:41:38.871685586 +0000 UTC m=+0.451407329 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_migration_target)
Dec 05 08:41:38 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:41:39 np0005546420.localdomain sudo[87759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:41:39 np0005546420.localdomain sudo[87759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:41:39 np0005546420.localdomain sudo[87759]: pam_unix(sudo:session): session closed for user root
Dec 05 08:41:39 np0005546420.localdomain sudo[87774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:41:39 np0005546420.localdomain sudo[87774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:41:40 np0005546420.localdomain sudo[87774]: pam_unix(sudo:session): session closed for user root
Dec 05 08:41:42 np0005546420.localdomain sudo[87821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:41:42 np0005546420.localdomain sudo[87821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:41:42 np0005546420.localdomain sudo[87821]: pam_unix(sudo:session): session closed for user root
Dec 05 08:41:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:41:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:41:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:41:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:41:44 np0005546420.localdomain systemd[1]: tmp-crun.qWiRcv.mount: Deactivated successfully.
Dec 05 08:41:44 np0005546420.localdomain podman[87838]: 2025-12-05 08:41:44.570628537 +0000 UTC m=+0.136821092 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 05 08:41:44 np0005546420.localdomain podman[87838]: 2025-12-05 08:41:44.582355699 +0000 UTC m=+0.148548294 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, container_name=iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public)
Dec 05 08:41:44 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:41:44 np0005546420.localdomain podman[87836]: 2025-12-05 08:41:44.544737179 +0000 UTC m=+0.115042281 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ovn_controller)
Dec 05 08:41:44 np0005546420.localdomain podman[87836]: 2025-12-05 08:41:44.630544085 +0000 UTC m=+0.200849147 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 05 08:41:44 np0005546420.localdomain podman[87839]: 2025-12-05 08:41:44.641498014 +0000 UTC m=+0.203261232 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 08:41:44 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:41:44 np0005546420.localdomain podman[87837]: 2025-12-05 08:41:44.720505461 +0000 UTC m=+0.289218904 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:41:44 np0005546420.localdomain podman[87837]: 2025-12-05 08:41:44.734491863 +0000 UTC m=+0.303205336 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 05 08:41:44 np0005546420.localdomain podman[87839]: 2025-12-05 08:41:44.74313729 +0000 UTC m=+0.304900448 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 05 08:41:44 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:41:44 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:41:47 np0005546420.localdomain sshd[87923]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:41:48 np0005546420.localdomain sshd[87923]: Received disconnect from 195.250.72.168 port 46392:11: Bye Bye [preauth]
Dec 05 08:41:48 np0005546420.localdomain sshd[87923]: Disconnected from authenticating user root 195.250.72.168 port 46392 [preauth]
Dec 05 08:41:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:41:58 np0005546420.localdomain podman[87970]: 2025-12-05 08:41:58.518727724 +0000 UTC m=+0.093650421 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:41:58 np0005546420.localdomain podman[87970]: 2025-12-05 08:41:58.738573897 +0000 UTC m=+0.313496564 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vcs-type=git, url=https://www.redhat.com, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 05 08:41:58 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:42:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:42:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3000.1 total, 600.0 interval
                                                          Cumulative writes: 5715 writes, 25K keys, 5715 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5715 writes, 734 syncs, 7.79 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 554 writes, 2130 keys, 554 commit groups, 1.0 writes per commit group, ingest: 2.56 MB, 0.00 MB/s
                                                          Interval WAL: 554 writes, 196 syncs, 2.83 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 08:42:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:42:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:42:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:42:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:42:08 np0005546420.localdomain systemd[1]: tmp-crun.oZzwIX.mount: Deactivated successfully.
Dec 05 08:42:08 np0005546420.localdomain podman[88003]: 2025-12-05 08:42:08.573626922 +0000 UTC m=+0.143749946 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, release=1761123044, io.buildah.version=1.41.4)
Dec 05 08:42:08 np0005546420.localdomain podman[88003]: 2025-12-05 08:42:08.615446503 +0000 UTC m=+0.185569477 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:42:08 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:42:08 np0005546420.localdomain podman[88002]: 2025-12-05 08:42:08.628381682 +0000 UTC m=+0.200820397 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=)
Dec 05 08:42:08 np0005546420.localdomain podman[88004]: 2025-12-05 08:42:08.67985601 +0000 UTC m=+0.246457185 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:42:08 np0005546420.localdomain podman[88001]: 2025-12-05 08:42:08.543315247 +0000 UTC m=+0.118314431 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team)
Dec 05 08:42:08 np0005546420.localdomain podman[88004]: 2025-12-05 08:42:08.711868298 +0000 UTC m=+0.278469493 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:42:08 np0005546420.localdomain podman[88001]: 2025-12-05 08:42:08.725144227 +0000 UTC m=+0.300143361 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:42:08 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:42:08 np0005546420.localdomain podman[88002]: 2025-12-05 08:42:08.733370221 +0000 UTC m=+0.305808926 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:42:08 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:42:08 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:42:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:42:09 np0005546420.localdomain podman[88100]: 2025-12-05 08:42:09.515542383 +0000 UTC m=+0.089415230 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, release=1761123044, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 05 08:42:09 np0005546420.localdomain podman[88100]: 2025-12-05 08:42:09.894536917 +0000 UTC m=+0.468409754 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:42:09 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:42:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:42:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3000.1 total, 600.0 interval
                                                          Cumulative writes: 4690 writes, 21K keys, 4690 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4690 writes, 584 syncs, 8.03 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 400 writes, 1670 keys, 400 commit groups, 1.0 writes per commit group, ingest: 2.21 MB, 0.00 MB/s
                                                          Interval WAL: 400 writes, 144 syncs, 2.78 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 08:42:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:42:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:42:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:42:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:42:15 np0005546420.localdomain systemd[1]: tmp-crun.zjjYPm.mount: Deactivated successfully.
Dec 05 08:42:15 np0005546420.localdomain podman[88124]: 2025-12-05 08:42:15.527869574 +0000 UTC m=+0.100938005 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, container_name=collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:42:15 np0005546420.localdomain systemd[1]: tmp-crun.EgykSo.mount: Deactivated successfully.
Dec 05 08:42:15 np0005546420.localdomain podman[88125]: 2025-12-05 08:42:15.632323397 +0000 UTC m=+0.198941829 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.component=openstack-iscsid-container)
Dec 05 08:42:15 np0005546420.localdomain podman[88124]: 2025-12-05 08:42:15.645341739 +0000 UTC m=+0.218410180 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, container_name=collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team)
Dec 05 08:42:15 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:42:15 np0005546420.localdomain podman[88125]: 2025-12-05 08:42:15.694720868 +0000 UTC m=+0.261339300 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, container_name=iscsid, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, distribution-scope=public)
Dec 05 08:42:15 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:42:15 np0005546420.localdomain podman[88126]: 2025-12-05 08:42:15.789421631 +0000 UTC m=+0.354660491 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git)
Dec 05 08:42:15 np0005546420.localdomain podman[88123]: 2025-12-05 08:42:15.59938949 +0000 UTC m=+0.172588356 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, container_name=ovn_controller, batch=17.1_20251118.1)
Dec 05 08:42:15 np0005546420.localdomain podman[88123]: 2025-12-05 08:42:15.831873626 +0000 UTC m=+0.405072532 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:42:15 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:42:15 np0005546420.localdomain podman[88126]: 2025-12-05 08:42:15.886302162 +0000 UTC m=+0.451541012 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git)
Dec 05 08:42:15 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
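[editor's note] The two event pairs above show the full life cycle of a podman healthcheck on this node: systemd starts a transient <container-id>.service wrapping "/usr/bin/podman healthcheck run <id>", podman logs a health_status event (here "healthy") and then exec_died when the check process exits, and systemd deactivates the unit. A minimal sketch of driving the same check by hand, assuming only the container names taken from the log lines above and a host with podman installed:

    import subprocess

    def run_healthcheck(container: str) -> bool:
        """Run 'podman healthcheck run' for a container.

        podman exits 0 when the configured healthcheck passes and
        non-zero otherwise, which is all the transient systemd units
        in this journal are reacting to.
        """
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        # Container names taken from the log lines above.
        for name in ("ovn_controller", "ovn_metadata_agent"):
            status = "healthy" if run_healthcheck(name) else "unhealthy"
            print(f"{name}: {status}")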
Dec 05 08:42:17 np0005546420.localdomain sshd[88212]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:42:19 np0005546420.localdomain sshd[88212]: Invalid user user from 91.202.233.33 port 25918
Dec 05 08:42:20 np0005546420.localdomain sshd[88212]: Connection reset by invalid user user 91.202.233.33 port 25918 [preauth]
Dec 05 08:42:20 np0005546420.localdomain sshd[88214]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:42:22 np0005546420.localdomain sshd[88214]: Connection reset by authenticating user root 91.202.233.33 port 41660 [preauth]
Dec 05 08:42:22 np0005546420.localdomain sshd[88216]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:42:24 np0005546420.localdomain sshd[88216]: Invalid user user from 91.202.233.33 port 41662
Dec 05 08:42:25 np0005546420.localdomain sshd[88216]: Connection reset by invalid user user 91.202.233.33 port 41662 [preauth]
Dec 05 08:42:25 np0005546420.localdomain sshd[88218]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:42:28 np0005546420.localdomain sshd[88218]: Connection reset by authenticating user root 91.202.233.33 port 41670 [preauth]
Dec 05 08:42:28 np0005546420.localdomain sshd[88220]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:42:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:42:29 np0005546420.localdomain systemd[1]: tmp-crun.40vFDL.mount: Deactivated successfully.
Dec 05 08:42:29 np0005546420.localdomain podman[88222]: 2025-12-05 08:42:29.496990187 +0000 UTC m=+0.076847190 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:42:29 np0005546420.localdomain podman[88222]: 2025-12-05 08:42:29.674494434 +0000 UTC m=+0.254351457 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 05 08:42:29 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:42:29 np0005546420.localdomain sshd[88220]: Invalid user ubuntu from 91.202.233.33 port 41680
Dec 05 08:42:30 np0005546420.localdomain sshd[88220]: Connection reset by invalid user ubuntu 91.202.233.33 port 41680 [preauth]
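[editor's note] The burst above is a password-guessing scan from 91.202.233.33 cycling through "user", "root", and "ubuntu"; the recurring "ssh-rsa algorithm is disabled" notice is RHEL 9's system-wide crypto policy rejecting SHA-1 RSA signatures, logged once per inbound connection. A small sketch that tallies such attempts per source address and username from a saved journal extract (the input file name is hypothetical, e.g. from "journalctl > journal.txt"):

    import re
    from collections import Counter

    # Matches the sshd lines seen above, e.g.
    #   "Invalid user user from 91.202.233.33 port 25918"
    #   "Connection reset by authenticating user root 91.202.233.33 port 41660 [preauth]"
    PATTERNS = [
        re.compile(r"Invalid user (?P<user>\S+) from (?P<ip>[\d.]+)"),
        re.compile(r"by (?:invalid user |authenticating user )?(?P<user>\S+) (?P<ip>[\d.]+) port"),
    ]

    def tally(path: str) -> Counter:
        hits: Counter = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                if "sshd[" not in line:
                    continue
                for pat in PATTERNS:
                    m = pat.search(line)
                    if m:
                        hits[(m.group("ip"), m.group("user"))] += 1
                        break
        return hits

    if __name__ == "__main__":
        # "journal.txt" is a hypothetical export of this log.
        for (ip, user), n in tally("journal.txt").most_common(10):
            print(f"{n:4d}  {ip:15s}  {user}")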
Dec 05 08:42:37 np0005546420.localdomain sshd[88252]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:42:38 np0005546420.localdomain sshd[88254]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:42:38 np0005546420.localdomain sshd[88252]: Received disconnect from 93.157.248.178 port 53706:11: Bye Bye [preauth]
Dec 05 08:42:38 np0005546420.localdomain sshd[88252]: Disconnected from authenticating user root 93.157.248.178 port 53706 [preauth]
Dec 05 08:42:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:42:38 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:42:38 np0005546420.localdomain recover_tripleo_nova_virtqemud[88262]: 62579
Dec 05 08:42:38 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:42:38 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
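[editor's note] tripleo_nova_virtqemud_recover is a periodic oneshot that checks whether virtqemud is still serving and restarts it if not; the lone "62579" it logs above is presumably the PID it found (an assumption - the recover script's exact output format is not shown in this journal). A hedged sketch of the same check-and-recover pattern; the pidfile path and unit name are illustrative, not the real script's values:

    import os
    import subprocess
    import sys

    def pid_alive(pid: int) -> bool:
        """True if a process with this PID currently exists."""
        return os.path.exists(f"/proc/{pid}")

    def recover(pidfile: str, unit: str) -> int:
        """Check-and-recover: print the PID we found (as the journal
        line above appears to), restart the unit if it is gone."""
        try:
            with open(pidfile) as fh:
                pid = int(fh.read().strip())
        except (OSError, ValueError):
            pid = -1
        print(pid)
        if pid > 0 and pid_alive(pid):
            return 0
        return subprocess.run(["systemctl", "restart", unit]).returncode

    if __name__ == "__main__":
        sys.exit(recover("/run/virtqemud.pid", "tripleo_nova_virtqemud.service"))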
Dec 05 08:42:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:42:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:42:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:42:38 np0005546420.localdomain podman[88255]: 2025-12-05 08:42:38.794167622 +0000 UTC m=+0.099009918 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=nova_compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:42:38 np0005546420.localdomain systemd[1]: tmp-crun.jm3SQS.mount: Deactivated successfully.
Dec 05 08:42:38 np0005546420.localdomain podman[88267]: 2025-12-05 08:42:38.883708635 +0000 UTC m=+0.115538899 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git)
Dec 05 08:42:38 np0005546420.localdomain podman[88269]: 2025-12-05 08:42:38.927126559 +0000 UTC m=+0.147129177 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:42:38 np0005546420.localdomain podman[88268]: 2025-12-05 08:42:38.984835137 +0000 UTC m=+0.208304722 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:42:38 np0005546420.localdomain podman[88267]: 2025-12-05 08:42:38.996631312 +0000 UTC m=+0.228461586 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=ceilometer_agent_compute, io.openshift.expose-services=)
Dec 05 08:42:39 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:42:39 np0005546420.localdomain podman[88269]: 2025-12-05 08:42:39.009248103 +0000 UTC m=+0.229250721 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 05 08:42:39 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:42:39 np0005546420.localdomain podman[88268]: 2025-12-05 08:42:39.023305819 +0000 UTC m=+0.246775444 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:42:39 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:42:39 np0005546420.localdomain podman[88255]: 2025-12-05 08:42:39.053126141 +0000 UTC m=+0.357968437 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, container_name=nova_compute, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible)
Dec 05 08:42:39 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
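[editor's note] Note the two healthcheck styles visible in the config_data labels: most agents run a bare "/openstack/healthcheck", while ovn_controller and nova_compute pass a port argument ("6642", "5672" - the usual OVN southbound and AMQP ports), which the TripleO healthcheck script uses to verify connectivity on that port. A minimal sketch of that kind of port-argument probe, offered as an assumption about what such a check does rather than a copy of the actual script:

    import socket
    import sys

    def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
        """Attempt a TCP connect; success means something is listening."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # Mirrors the "/openstack/healthcheck 5672" style invocation seen
        # in the nova_compute config_data label above.
        port = int(sys.argv[1]) if len(sys.argv) > 1 else 5672
        sys.exit(0 if port_open("127.0.0.1", port) else 1)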
Dec 05 08:42:39 np0005546420.localdomain sshd[88356]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:42:39 np0005546420.localdomain sshd[88356]: error: kex_exchange_identification: read: Connection reset by peer
Dec 05 08:42:39 np0005546420.localdomain sshd[88356]: Connection reset by 8.130.184.62 port 54212
Dec 05 08:42:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:42:40 np0005546420.localdomain systemd[1]: tmp-crun.BHxbc5.mount: Deactivated successfully.
Dec 05 08:42:40 np0005546420.localdomain podman[88357]: 2025-12-05 08:42:40.51605939 +0000 UTC m=+0.094701703 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vendor=Red Hat, Inc., container_name=nova_migration_target, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:42:40 np0005546420.localdomain podman[88357]: 2025-12-05 08:42:40.856412502 +0000 UTC m=+0.435054835 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_migration_target, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:42:40 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:42:42 np0005546420.localdomain sudo[88379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:42:42 np0005546420.localdomain sudo[88379]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:42:42 np0005546420.localdomain sudo[88379]: pam_unix(sudo:session): session closed for user root
Dec 05 08:42:42 np0005546420.localdomain sudo[88394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:42:42 np0005546420.localdomain sudo[88394]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:42:43 np0005546420.localdomain sudo[88394]: pam_unix(sudo:session): session closed for user root
Dec 05 08:42:44 np0005546420.localdomain sudo[88440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:42:44 np0005546420.localdomain sudo[88440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:42:44 np0005546420.localdomain sudo[88440]: pam_unix(sudo:session): session closed for user root
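[editor's note] The sudo burst above is cephadm at work: the orchestrator ships a copy of itself to /var/lib/ceph/<fsid>/cephadm.<sha256> and runs it as root with "--timeout 895 gather-facts", bracketed by small probes ("which python3", "ls /etc/sysctl.d"). A short sketch that extracts the COMMAND= field from such sudo journal lines to audit what the ceph-admin account has been running (the input path is hypothetical):

    import re

    SUDO_CMD = re.compile(r"sudo\[\d+\]: (?P<user>\S+) : .*COMMAND=(?P<cmd>.+)$")

    def sudo_commands(path: str):
        """Yield (user, command) pairs from sudo lines in a journal export."""
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = SUDO_CMD.search(line)
                if m:
                    yield m.group("user"), m.group("cmd").strip()

    if __name__ == "__main__":
        # "journal.txt" is a hypothetical export of this log.
        for user, cmd in sudo_commands("journal.txt"):
            print(f"{user}: {cmd}")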
Dec 05 08:42:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:42:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:42:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:42:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:42:46 np0005546420.localdomain systemd[1]: tmp-crun.LQsKB9.mount: Deactivated successfully.
Dec 05 08:42:46 np0005546420.localdomain podman[88455]: 2025-12-05 08:42:46.527262583 +0000 UTC m=+0.101309598 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Dec 05 08:42:46 np0005546420.localdomain podman[88455]: 2025-12-05 08:42:46.5578283 +0000 UTC m=+0.131875305 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, vcs-type=git)
Dec 05 08:42:46 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:42:46 np0005546420.localdomain systemd[1]: tmp-crun.ozd8LY.mount: Deactivated successfully.
Dec 05 08:42:46 np0005546420.localdomain podman[88456]: 2025-12-05 08:42:46.618022085 +0000 UTC m=+0.189579493 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., container_name=collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:42:46 np0005546420.localdomain podman[88456]: 2025-12-05 08:42:46.657352943 +0000 UTC m=+0.228910311 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:42:46 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:42:46 np0005546420.localdomain podman[88457]: 2025-12-05 08:42:46.673937477 +0000 UTC m=+0.244015778 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible)
Dec 05 08:42:46 np0005546420.localdomain podman[88457]: 2025-12-05 08:42:46.683234904 +0000 UTC m=+0.253313255 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, release=1761123044, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, version=17.1.12, container_name=iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4)
Dec 05 08:42:46 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:42:46 np0005546420.localdomain podman[88458]: 2025-12-05 08:42:46.767248376 +0000 UTC m=+0.332981633 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:42:46 np0005546420.localdomain podman[88458]: 2025-12-05 08:42:46.805686367 +0000 UTC m=+0.371419624 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=)
Dec 05 08:42:46 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:42:49 np0005546420.localdomain sshd[88254]: error: kex_exchange_identification: read: Connection timed out
Dec 05 08:42:49 np0005546420.localdomain sshd[88254]: banner exchange: Connection from 8.130.184.62 port 60064: Connection timed out
Dec 05 08:42:53 np0005546420.localdomain sshd[88587]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:42:54 np0005546420.localdomain sshd[88587]: Received disconnect from 195.250.72.168 port 32944:11: Bye Bye [preauth]
Dec 05 08:42:54 np0005546420.localdomain sshd[88587]: Disconnected from authenticating user root 195.250.72.168 port 32944 [preauth]
Dec 05 08:43:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:43:00 np0005546420.localdomain podman[88589]: 2025-12-05 08:43:00.508116624 +0000 UTC m=+0.087931594 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, container_name=metrics_qdr, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 05 08:43:00 np0005546420.localdomain podman[88589]: 2025-12-05 08:43:00.696419776 +0000 UTC m=+0.276234786 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, config_id=tripleo_step1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Dec 05 08:43:00 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:43:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:43:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:43:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:43:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:43:09 np0005546420.localdomain systemd[1]: tmp-crun.bmj3ql.mount: Deactivated successfully.
Dec 05 08:43:09 np0005546420.localdomain podman[88619]: 2025-12-05 08:43:09.522592373 +0000 UTC m=+0.096826311 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 08:43:09 np0005546420.localdomain podman[88620]: 2025-12-05 08:43:09.564187791 +0000 UTC m=+0.136105527 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, config_id=tripleo_step5, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:43:09 np0005546420.localdomain podman[88620]: 2025-12-05 08:43:09.597247094 +0000 UTC m=+0.169164830 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5)
Dec 05 08:43:09 np0005546420.localdomain podman[88618]: 2025-12-05 08:43:09.611428434 +0000 UTC m=+0.186871409 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, architecture=x86_64, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:43:09 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:43:09 np0005546420.localdomain podman[88619]: 2025-12-05 08:43:09.635713426 +0000 UTC m=+0.209947354 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi)
Dec 05 08:43:09 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:43:09 np0005546420.localdomain podman[88618]: 2025-12-05 08:43:09.651410162 +0000 UTC m=+0.226853147 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, tcib_managed=true, container_name=logrotate_crond, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:43:09 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:43:09 np0005546420.localdomain podman[88621]: 2025-12-05 08:43:09.732066971 +0000 UTC m=+0.297656841 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public)
Dec 05 08:43:09 np0005546420.localdomain podman[88621]: 2025-12-05 08:43:09.793388559 +0000 UTC m=+0.358978459 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:43:09 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:43:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:43:11 np0005546420.localdomain podman[88718]: 2025-12-05 08:43:11.50210658 +0000 UTC m=+0.081629679 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 05 08:43:11 np0005546420.localdomain podman[88718]: 2025-12-05 08:43:11.826792636 +0000 UTC m=+0.406315725 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:43:11 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:43:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:43:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:43:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:43:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:43:17 np0005546420.localdomain systemd[1]: tmp-crun.6GsHYq.mount: Deactivated successfully.
Dec 05 08:43:17 np0005546420.localdomain podman[88741]: 2025-12-05 08:43:17.522735265 +0000 UTC m=+0.094740505 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:43:17 np0005546420.localdomain podman[88742]: 2025-12-05 08:43:17.495471301 +0000 UTC m=+0.069168704 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12)
Dec 05 08:43:17 np0005546420.localdomain podman[88740]: 2025-12-05 08:43:17.563078174 +0000 UTC m=+0.140262504 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, distribution-scope=public, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 05 08:43:17 np0005546420.localdomain podman[88742]: 2025-12-05 08:43:17.575312823 +0000 UTC m=+0.149010226 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Dec 05 08:43:17 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:43:17 np0005546420.localdomain podman[88740]: 2025-12-05 08:43:17.609609435 +0000 UTC m=+0.186793785 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 05 08:43:17 np0005546420.localdomain podman[88743]: 2025-12-05 08:43:17.619337756 +0000 UTC m=+0.187090485 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible)
Dec 05 08:43:17 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:43:17 np0005546420.localdomain podman[88743]: 2025-12-05 08:43:17.656539508 +0000 UTC m=+0.224292237 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 05 08:43:17 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:43:17 np0005546420.localdomain podman[88741]: 2025-12-05 08:43:17.708039314 +0000 UTC m=+0.280044594 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.buildah.version=1.41.4)
Dec 05 08:43:17 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:43:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:43:31 np0005546420.localdomain podman[88822]: 2025-12-05 08:43:31.524386429 +0000 UTC m=+0.103953580 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12)
Dec 05 08:43:31 np0005546420.localdomain podman[88822]: 2025-12-05 08:43:31.710138242 +0000 UTC m=+0.289705343 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 05 08:43:31 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:43:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:43:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:43:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:43:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:43:40 np0005546420.localdomain systemd[1]: tmp-crun.VH0WoD.mount: Deactivated successfully.
Dec 05 08:43:40 np0005546420.localdomain podman[88850]: 2025-12-05 08:43:40.515110231 +0000 UTC m=+0.095293892 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:43:40 np0005546420.localdomain podman[88850]: 2025-12-05 08:43:40.527370341 +0000 UTC m=+0.107554072 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:43:40 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:43:40 np0005546420.localdomain systemd[1]: tmp-crun.XjWkjz.mount: Deactivated successfully.
Dec 05 08:43:40 np0005546420.localdomain podman[88851]: 2025-12-05 08:43:40.579073693 +0000 UTC m=+0.146431236 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:43:40 np0005546420.localdomain podman[88851]: 2025-12-05 08:43:40.613364494 +0000 UTC m=+0.180722017 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:43:40 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:43:40 np0005546420.localdomain podman[88852]: 2025-12-05 08:43:40.669326178 +0000 UTC m=+0.240653215 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, name=rhosp17/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team)
Dec 05 08:43:40 np0005546420.localdomain podman[88858]: 2025-12-05 08:43:40.640408342 +0000 UTC m=+0.204520825 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z)
Dec 05 08:43:40 np0005546420.localdomain podman[88852]: 2025-12-05 08:43:40.702374161 +0000 UTC m=+0.273701158 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_compute)
Dec 05 08:43:40 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:43:40 np0005546420.localdomain podman[88858]: 2025-12-05 08:43:40.729663437 +0000 UTC m=+0.293775870 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute)
Dec 05 08:43:40 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:43:41 np0005546420.localdomain systemd[1]: tmp-crun.aHqJjW.mount: Deactivated successfully.
Dec 05 08:43:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:43:42 np0005546420.localdomain podman[88948]: 2025-12-05 08:43:42.507725414 +0000 UTC m=+0.083695303 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:43:42 np0005546420.localdomain podman[88948]: 2025-12-05 08:43:42.900476089 +0000 UTC m=+0.476445978 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:43:42 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:43:44 np0005546420.localdomain sudo[88971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:43:44 np0005546420.localdomain sudo[88971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:43:44 np0005546420.localdomain sudo[88971]: pam_unix(sudo:session): session closed for user root
Dec 05 08:43:44 np0005546420.localdomain sudo[88986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 08:43:44 np0005546420.localdomain sudo[88986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:43:45 np0005546420.localdomain sudo[88986]: pam_unix(sudo:session): session closed for user root
Dec 05 08:43:45 np0005546420.localdomain sudo[89021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:43:45 np0005546420.localdomain sudo[89021]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:43:45 np0005546420.localdomain sudo[89021]: pam_unix(sudo:session): session closed for user root
Dec 05 08:43:45 np0005546420.localdomain sudo[89036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:43:45 np0005546420.localdomain sudo[89036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:43:46 np0005546420.localdomain sudo[89036]: pam_unix(sudo:session): session closed for user root
Dec 05 08:43:46 np0005546420.localdomain sudo[89084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:43:46 np0005546420.localdomain sudo[89084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:43:46 np0005546420.localdomain sudo[89084]: pam_unix(sudo:session): session closed for user root
Dec 05 08:43:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:43:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:43:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:43:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:43:48 np0005546420.localdomain systemd[1]: tmp-crun.FTRyqB.mount: Deactivated successfully.
Dec 05 08:43:48 np0005546420.localdomain podman[89101]: 2025-12-05 08:43:48.529150695 +0000 UTC m=+0.094812518 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, vcs-type=git, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 05 08:43:48 np0005546420.localdomain podman[89100]: 2025-12-05 08:43:48.544034705 +0000 UTC m=+0.109925575 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, distribution-scope=public, batch=17.1_20251118.1, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1761123044)
Dec 05 08:43:48 np0005546420.localdomain podman[89100]: 2025-12-05 08:43:48.555561082 +0000 UTC m=+0.121451962 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, architecture=x86_64, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team)
Dec 05 08:43:48 np0005546420.localdomain podman[89101]: 2025-12-05 08:43:48.565571773 +0000 UTC m=+0.131233616 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, version=17.1.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:43:48 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:43:48 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:43:48 np0005546420.localdomain podman[89099]: 2025-12-05 08:43:48.624305232 +0000 UTC m=+0.195262269 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:43:48 np0005546420.localdomain podman[89102]: 2025-12-05 08:43:48.67719483 +0000 UTC m=+0.240032936 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64)
Dec 05 08:43:48 np0005546420.localdomain podman[89099]: 2025-12-05 08:43:48.682412311 +0000 UTC m=+0.253369348 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z)
Dec 05 08:43:48 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:43:48 np0005546420.localdomain podman[89102]: 2025-12-05 08:43:48.726249739 +0000 UTC m=+0.289087785 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, batch=17.1_20251118.1, container_name=ovn_metadata_agent, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4)
Dec 05 08:43:48 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:43:55 np0005546420.localdomain sshd[89232]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:43:56 np0005546420.localdomain sshd[89232]: Received disconnect from 93.157.248.178 port 36186:11: Bye Bye [preauth]
Dec 05 08:43:56 np0005546420.localdomain sshd[89232]: Disconnected from authenticating user root 93.157.248.178 port 36186 [preauth]
Dec 05 08:44:00 np0005546420.localdomain sshd[89234]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:44:01 np0005546420.localdomain sshd[89234]: Received disconnect from 195.250.72.168 port 37330:11: Bye Bye [preauth]
Dec 05 08:44:01 np0005546420.localdomain sshd[89234]: Disconnected from authenticating user root 195.250.72.168 port 37330 [preauth]
Dec 05 08:44:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:44:02 np0005546420.localdomain podman[89236]: 2025-12-05 08:44:02.046558972 +0000 UTC m=+0.095674924 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public)
Dec 05 08:44:02 np0005546420.localdomain podman[89236]: 2025-12-05 08:44:02.265367979 +0000 UTC m=+0.314483891 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-qdrouterd, version=17.1.12)
Dec 05 08:44:02 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:44:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:44:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:44:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:44:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:44:11 np0005546420.localdomain podman[89267]: 2025-12-05 08:44:11.516381041 +0000 UTC m=+0.090866585 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Dec 05 08:44:11 np0005546420.localdomain podman[89267]: 2025-12-05 08:44:11.573518981 +0000 UTC m=+0.148004545 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 05 08:44:11 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:44:11 np0005546420.localdomain podman[89269]: 2025-12-05 08:44:11.574609265 +0000 UTC m=+0.144716033 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:44:11 np0005546420.localdomain podman[89266]: 2025-12-05 08:44:11.629582668 +0000 UTC m=+0.205333360 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:44:11 np0005546420.localdomain podman[89269]: 2025-12-05 08:44:11.660742983 +0000 UTC m=+0.230849731 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z)
Dec 05 08:44:11 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:44:11 np0005546420.localdomain podman[89268]: 2025-12-05 08:44:11.673752066 +0000 UTC m=+0.243276676 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:44:11 np0005546420.localdomain podman[89268]: 2025-12-05 08:44:11.707347486 +0000 UTC m=+0.276872146 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step5, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 05 08:44:11 np0005546420.localdomain podman[89266]: 2025-12-05 08:44:11.716897782 +0000 UTC m=+0.292648524 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, tcib_managed=true, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 05 08:44:11 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:44:11 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:44:12 np0005546420.localdomain systemd[1]: tmp-crun.p4abVQ.mount: Deactivated successfully.
Dec 05 08:44:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:44:14 np0005546420.localdomain podman[89362]: 2025-12-05 08:44:14.87787329 +0000 UTC m=+0.077340366 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:44:15 np0005546420.localdomain podman[89362]: 2025-12-05 08:44:15.242836094 +0000 UTC m=+0.442303180 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:44:15 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:44:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:44:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:44:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:44:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:44:19 np0005546420.localdomain podman[89386]: 2025-12-05 08:44:19.53287075 +0000 UTC m=+0.104971111 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, container_name=collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, vcs-type=git, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:44:19 np0005546420.localdomain podman[89385]: 2025-12-05 08:44:19.575364796 +0000 UTC m=+0.146936411 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container)
Dec 05 08:44:19 np0005546420.localdomain podman[89387]: 2025-12-05 08:44:19.621944729 +0000 UTC m=+0.186636221 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-iscsid-container)
Dec 05 08:44:19 np0005546420.localdomain podman[89385]: 2025-12-05 08:44:19.627623355 +0000 UTC m=+0.199194980 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:44:19 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:44:19 np0005546420.localdomain podman[89387]: 2025-12-05 08:44:19.662485034 +0000 UTC m=+0.227176526 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 05 08:44:19 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:44:19 np0005546420.localdomain podman[89389]: 2025-12-05 08:44:19.678668546 +0000 UTC m=+0.236648660 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 05 08:44:19 np0005546420.localdomain podman[89386]: 2025-12-05 08:44:19.699999887 +0000 UTC m=+0.272100278 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 05 08:44:19 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:44:19 np0005546420.localdomain podman[89389]: 2025-12-05 08:44:19.722091271 +0000 UTC m=+0.280071435 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent)
Dec 05 08:44:19 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
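The four containers above (iscsid, ovn_controller, collectd, ovn_metadata_agent) each complete the same healthcheck cycle that recurs throughout this log: systemd starts a <container-id>.service unit whose command line is "/usr/bin/podman healthcheck run", podman logs a health_status event (healthy in every case here), the check process exits (exec_died), and the unit reports "Deactivated successfully". A minimal sketch for confirming that every check in a window came back healthy, reading the journal as JSON rather than scraping the text format; the time window and the shape of the MESSAGE field are assumptions taken from the lines above:

    import json, re, subprocess

    # Pull the window shown in this log as JSON records (one object per line).
    out = subprocess.run(
        ["journalctl", "--since", "08:44:19", "--until", "08:44:51",
         "-o", "json", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout

    for record in out.splitlines():
        msg = json.loads(record).get("MESSAGE", "")
        # podman prints "name=<container>" before "health_status=<state>".
        m = re.search(r"container health_status .*?name=([^,]+).*?health_status=(\w+)", msg)
        if m:
            print(f"{m[1]}: {m[2]}")   # e.g. "iscsid: healthy"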
Dec 05 08:44:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:44:32 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:44:32 np0005546420.localdomain recover_tripleo_nova_virtqemud[89474]: 62579
Dec 05 08:44:32 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:44:32 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
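The tripleo_nova_virtqemud_recover.service oneshot above logs a single number, which appears to be the PID of the virtqemud process it found alive (62579), then exits without restarting anything. The script's internals are not visible in this log; a rough sketch of a probe with that observable behaviour, with the pidfile path and unit name as illustrative assumptions only:

    import os
    import subprocess

    PIDFILE = "/run/virtqemud.pid"   # hypothetical path, for illustration only

    def virtqemud_alive() -> bool:
        try:
            pid = int(open(PIDFILE).read().strip())
            os.kill(pid, 0)          # signal 0 probes existence without signalling
        except (OSError, ValueError):
            return False
        print(pid)                   # matches the "62579" logged above
        return True

    if not virtqemud_alive():
        # assumed unit name, mirroring the tripleo_* naming used in this log
        subprocess.run(["systemctl", "restart", "tripleo_nova_virtqemud.service"],
                       check=False)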
Dec 05 08:44:32 np0005546420.localdomain podman[89472]: 2025-12-05 08:44:32.517761916 +0000 UTC m=+0.092539687 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:44:32 np0005546420.localdomain podman[89472]: 2025-12-05 08:44:32.705445829 +0000 UTC m=+0.280223600 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, distribution-scope=public, container_name=metrics_qdr, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 05 08:44:32 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:44:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:44:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:44:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:44:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:44:42 np0005546420.localdomain podman[89506]: 2025-12-05 08:44:42.51849987 +0000 UTC m=+0.086957854 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute)
Dec 05 08:44:42 np0005546420.localdomain systemd[1]: tmp-crun.gIeFVy.mount: Deactivated successfully.
Dec 05 08:44:42 np0005546420.localdomain podman[89503]: 2025-12-05 08:44:42.536625062 +0000 UTC m=+0.106392856 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 05 08:44:42 np0005546420.localdomain podman[89506]: 2025-12-05 08:44:42.555252938 +0000 UTC m=+0.123710932 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=)
Dec 05 08:44:42 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:44:42 np0005546420.localdomain podman[89505]: 2025-12-05 08:44:42.572983028 +0000 UTC m=+0.141585376 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044)
Dec 05 08:44:42 np0005546420.localdomain podman[89504]: 2025-12-05 08:44:42.623935776 +0000 UTC m=+0.193843174 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:44:42 np0005546420.localdomain podman[89503]: 2025-12-05 08:44:42.651340944 +0000 UTC m=+0.221108808 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:44:42 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:44:42 np0005546420.localdomain podman[89504]: 2025-12-05 08:44:42.679351451 +0000 UTC m=+0.249258879 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:44:42 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:44:42 np0005546420.localdomain podman[89505]: 2025-12-05 08:44:42.704671137 +0000 UTC m=+0.273273435 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc.)
Dec 05 08:44:42 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:44:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:44:45 np0005546420.localdomain podman[89600]: 2025-12-05 08:44:45.521511997 +0000 UTC m=+0.094456977 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4)
Dec 05 08:44:45 np0005546420.localdomain podman[89600]: 2025-12-05 08:44:45.921493745 +0000 UTC m=+0.494438755 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 05 08:44:45 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
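Each health_status/exec_died pair comes from the same podman process (same PID, podman[89600] just above), so the monotonic m=+ offsets in the two lines are directly comparable and give the wall time of the check: nova_migration_target took 0.494438755 - 0.094456977, about 0.400 s, versus roughly 0.188 s for metrics_qdr earlier. A sketch that computes this per container, assuming at most one check in flight per container name:

    import re, sys

    OFFSET = re.compile(
        r"m=\+(?P<m>[\d.]+) container (?P<event>health_status|exec_died) "
        r"\S+ \(image=[^,]+, name=(?P<name>[^,)]+)"
    )
    start = {}
    for line in sys.stdin:                       # feed journal text on stdin
        ev = OFFSET.search(line)
        if not ev:
            continue
        if ev["event"] == "health_status":
            start[ev["name"]] = float(ev["m"])
        elif ev["name"] in start:
            dur = float(ev["m"]) - start.pop(ev["name"])
            print(f"{ev['name']}: {dur:.3f}s")   # nova_migration_target: 0.400s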
Dec 05 08:44:46 np0005546420.localdomain sudo[89623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:44:46 np0005546420.localdomain sudo[89623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:44:46 np0005546420.localdomain sudo[89623]: pam_unix(sudo:session): session closed for user root
Dec 05 08:44:46 np0005546420.localdomain sudo[89638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:44:46 np0005546420.localdomain sudo[89638]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:44:47 np0005546420.localdomain sudo[89638]: pam_unix(sudo:session): session closed for user root
Dec 05 08:44:48 np0005546420.localdomain sudo[89686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:44:48 np0005546420.localdomain sudo[89686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:44:48 np0005546420.localdomain sudo[89686]: pam_unix(sudo:session): session closed for user root
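The sudo lines above show cephadm's remote-execution pattern: the orchestrator copies itself to /var/lib/ceph/<fsid>/cephadm.<digest> and runs subcommands as root from the ceph-admin account (here "which python3", "gather-facts" with a --timeout, and "ls /etc/sysctl.d"). A small sketch for auditing what that account ran in a journal excerpt; only the COMMAND= field format visible above is assumed:

    import re, sys

    SUDO = re.compile(r"sudo\[\d+\]: (\S+) : .*?COMMAND=(.*)$")
    for line in sys.stdin:
        m = SUDO.search(line)
        if m:
            user, cmd = m.groups()
            print(f"{user} -> {cmd}")   # e.g. "ceph-admin -> /bin/which python3"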
Dec 05 08:44:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:44:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:44:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:44:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:44:50 np0005546420.localdomain systemd[1]: tmp-crun.R0Ah06.mount: Deactivated successfully.
Dec 05 08:44:50 np0005546420.localdomain podman[89701]: 2025-12-05 08:44:50.511723979 +0000 UTC m=+0.086914203 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc.)
Dec 05 08:44:50 np0005546420.localdomain systemd[1]: tmp-crun.ZS59A4.mount: Deactivated successfully.
Dec 05 08:44:50 np0005546420.localdomain podman[89702]: 2025-12-05 08:44:50.570892171 +0000 UTC m=+0.143165195 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git)
Dec 05 08:44:50 np0005546420.localdomain podman[89702]: 2025-12-05 08:44:50.581456568 +0000 UTC m=+0.153729562 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 05 08:44:50 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:44:50 np0005546420.localdomain podman[89701]: 2025-12-05 08:44:50.622724977 +0000 UTC m=+0.197915241 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4)
Dec 05 08:44:50 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:44:50 np0005546420.localdomain podman[89704]: 2025-12-05 08:44:50.674375746 +0000 UTC m=+0.239866919 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 05 08:44:50 np0005546420.localdomain podman[89703]: 2025-12-05 08:44:50.733397635 +0000 UTC m=+0.302478559 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.openshift.expose-services=, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:44:50 np0005546420.localdomain podman[89704]: 2025-12-05 08:44:50.750605088 +0000 UTC m=+0.316096341 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 05 08:44:50 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:44:50 np0005546420.localdomain podman[89703]: 2025-12-05 08:44:50.770154073 +0000 UTC m=+0.339235037 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12)
Dec 05 08:44:50 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:45:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:45:03 np0005546420.localdomain podman[89810]: 2025-12-05 08:45:03.517082897 +0000 UTC m=+0.087819431 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:45:03 np0005546420.localdomain podman[89810]: 2025-12-05 08:45:03.742528719 +0000 UTC m=+0.313265253 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, managed_by=tripleo_ansible, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, distribution-scope=public)
Dec 05 08:45:03 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:45:09 np0005546420.localdomain sshd[89841]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:45:10 np0005546420.localdomain sshd[89841]: Received disconnect from 195.250.72.168 port 50936:11: Bye Bye [preauth]
Dec 05 08:45:10 np0005546420.localdomain sshd[89841]: Disconnected from authenticating user root 195.250.72.168 port 50936 [preauth]
Dec 05 08:45:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:45:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:45:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:45:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:45:13 np0005546420.localdomain podman[89843]: 2025-12-05 08:45:13.530978417 +0000 UTC m=+0.105248271 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 05 08:45:13 np0005546420.localdomain podman[89844]: 2025-12-05 08:45:13.562532965 +0000 UTC m=+0.136396896 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 05 08:45:13 np0005546420.localdomain podman[89845]: 2025-12-05 08:45:13.620169879 +0000 UTC m=+0.190843131 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, release=1761123044, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:45:13 np0005546420.localdomain podman[89844]: 2025-12-05 08:45:13.620979095 +0000 UTC m=+0.194843046 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:45:13 np0005546420.localdomain podman[89845]: 2025-12-05 08:45:13.674044558 +0000 UTC m=+0.244717810 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.buildah.version=1.41.4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1761123044, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 08:45:13 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:45:13 np0005546420.localdomain podman[89843]: 2025-12-05 08:45:13.694475661 +0000 UTC m=+0.268745555 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team)
Dec 05 08:45:13 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:45:13 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:45:13 np0005546420.localdomain podman[89846]: 2025-12-05 08:45:13.676652698 +0000 UTC m=+0.245564576 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 05 08:45:13 np0005546420.localdomain podman[89846]: 2025-12-05 08:45:13.812471935 +0000 UTC m=+0.381383773 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, tcib_managed=true, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:45:13 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:45:14 np0005546420.localdomain sshd[89938]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:45:15 np0005546420.localdomain sshd[89938]: Received disconnect from 93.157.248.178 port 53326:11: Bye Bye [preauth]
Dec 05 08:45:15 np0005546420.localdomain sshd[89938]: Disconnected from authenticating user root 93.157.248.178 port 53326 [preauth]
Dec 05 08:45:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:45:16 np0005546420.localdomain podman[89940]: 2025-12-05 08:45:16.512163697 +0000 UTC m=+0.088863183 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:45:16 np0005546420.localdomain podman[89940]: 2025-12-05 08:45:16.889410761 +0000 UTC m=+0.466110027 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:45:16 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:45:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:45:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:45:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:45:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:45:21 np0005546420.localdomain systemd[1]: tmp-crun.dRHnmj.mount: Deactivated successfully.
Dec 05 08:45:21 np0005546420.localdomain podman[89965]: 2025-12-05 08:45:21.503096271 +0000 UTC m=+0.081469014 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:45:21 np0005546420.localdomain podman[89965]: 2025-12-05 08:45:21.510413458 +0000 UTC m=+0.088786251 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:45:21 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:45:21 np0005546420.localdomain podman[89964]: 2025-12-05 08:45:21.598173726 +0000 UTC m=+0.175558508 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 05 08:45:21 np0005546420.localdomain podman[89964]: 2025-12-05 08:45:21.609412894 +0000 UTC m=+0.186797736 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:45:21 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:45:21 np0005546420.localdomain podman[89971]: 2025-12-05 08:45:21.662081696 +0000 UTC m=+0.231963026 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 05 08:45:21 np0005546420.localdomain podman[89963]: 2025-12-05 08:45:21.709814784 +0000 UTC m=+0.291934112 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.)
Dec 05 08:45:21 np0005546420.localdomain podman[89971]: 2025-12-05 08:45:21.724374855 +0000 UTC m=+0.294256165 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:14:25Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 05 08:45:21 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:45:21 np0005546420.localdomain podman[89963]: 2025-12-05 08:45:21.781311609 +0000 UTC m=+0.363430977 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git)
Dec 05 08:45:21 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:45:22 np0005546420.localdomain systemd[1]: tmp-crun.ih6wch.mount: Deactivated successfully.
Dec 05 08:45:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:45:34 np0005546420.localdomain podman[90051]: 2025-12-05 08:45:34.513405635 +0000 UTC m=+0.085910232 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, version=17.1.12, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z)
Dec 05 08:45:34 np0005546420.localdomain podman[90051]: 2025-12-05 08:45:34.687346223 +0000 UTC m=+0.259850770 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-qdrouterd)
Dec 05 08:45:34 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:45:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:45:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:45:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:45:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:45:44 np0005546420.localdomain systemd[1]: tmp-crun.3NjzEm.mount: Deactivated successfully.
Dec 05 08:45:44 np0005546420.localdomain podman[90080]: 2025-12-05 08:45:44.526016996 +0000 UTC m=+0.097665926 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-cron-container)
Dec 05 08:45:44 np0005546420.localdomain podman[90080]: 2025-12-05 08:45:44.592499354 +0000 UTC m=+0.164148294 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, architecture=x86_64, managed_by=tripleo_ansible)
Dec 05 08:45:44 np0005546420.localdomain podman[90083]: 2025-12-05 08:45:44.553269719 +0000 UTC m=+0.111699910 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:45:44 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:45:44 np0005546420.localdomain podman[90081]: 2025-12-05 08:45:44.570750852 +0000 UTC m=+0.138295925 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible)
Dec 05 08:45:44 np0005546420.localdomain podman[90083]: 2025-12-05 08:45:44.637352024 +0000 UTC m=+0.195782245 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com)
Dec 05 08:45:44 np0005546420.localdomain podman[90082]: 2025-12-05 08:45:44.595736325 +0000 UTC m=+0.157502629 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 05 08:45:44 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:45:44 np0005546420.localdomain podman[90081]: 2025-12-05 08:45:44.652451212 +0000 UTC m=+0.219996295 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4)
Dec 05 08:45:44 np0005546420.localdomain podman[90082]: 2025-12-05 08:45:44.676392943 +0000 UTC m=+0.238159157 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:45:44 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:45:44 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:45:45 np0005546420.localdomain systemd[1]: tmp-crun.e8Rwis.mount: Deactivated successfully.
Dec 05 08:45:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:45:47 np0005546420.localdomain systemd[1]: tmp-crun.bn5uU1.mount: Deactivated successfully.
Dec 05 08:45:47 np0005546420.localdomain podman[90178]: 2025-12-05 08:45:47.509368484 +0000 UTC m=+0.088847433 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z)
Dec 05 08:45:47 np0005546420.localdomain podman[90178]: 2025-12-05 08:45:47.899876298 +0000 UTC m=+0.479355287 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Dec 05 08:45:47 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:45:48 np0005546420.localdomain sudo[90201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:45:48 np0005546420.localdomain sudo[90201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:45:48 np0005546420.localdomain sudo[90201]: pam_unix(sudo:session): session closed for user root
Dec 05 08:45:48 np0005546420.localdomain sudo[90216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 08:45:48 np0005546420.localdomain sudo[90216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:45:49 np0005546420.localdomain podman[90303]: 2025-12-05 08:45:49.391091862 +0000 UTC m=+0.078530483 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, release=1763362218, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 08:45:49 np0005546420.localdomain podman[90303]: 2025-12-05 08:45:49.516925549 +0000 UTC m=+0.204364120 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, release=1763362218, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 08:45:49 np0005546420.localdomain sudo[90216]: pam_unix(sudo:session): session closed for user root
Dec 05 08:45:49 np0005546420.localdomain sudo[90368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:45:49 np0005546420.localdomain sudo[90368]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:45:49 np0005546420.localdomain sudo[90368]: pam_unix(sudo:session): session closed for user root
Dec 05 08:45:49 np0005546420.localdomain sudo[90383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:45:49 np0005546420.localdomain sudo[90383]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:45:50 np0005546420.localdomain sudo[90383]: pam_unix(sudo:session): session closed for user root
Dec 05 08:45:51 np0005546420.localdomain sudo[90430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:45:51 np0005546420.localdomain sudo[90430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:45:51 np0005546420.localdomain sudo[90430]: pam_unix(sudo:session): session closed for user root
Dec 05 08:45:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:45:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:45:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:45:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:45:52 np0005546420.localdomain podman[90447]: 2025-12-05 08:45:52.520300267 +0000 UTC m=+0.089963277 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-type=git, container_name=iscsid, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid)
Dec 05 08:45:52 np0005546420.localdomain podman[90447]: 2025-12-05 08:45:52.55750677 +0000 UTC m=+0.127169790 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, architecture=x86_64, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 05 08:45:52 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
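[annotation] The `config_data={...}` blob inside these podman events is a Python dict literal (note 'privileged': True and the quoted volume list), so it can be recovered with ast.literal_eval. A sketch, assuming no brace characters occur inside the quoted strings, which holds for the path-style values seen here:

    import ast

    def config_data(event_line):
        """Extract and parse the config_data dict from a podman event line."""
        start = event_line.index("config_data=") + len("config_data=")
        depth, end = 0, start
        for i, ch in enumerate(event_line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:       # matching close brace of the literal
                    end = i + 1
                    break
        return ast.literal_eval(event_line[start:end])

    # cfg = config_data(line)
    # cfg["healthcheck"]["test"] -> '/openstack/healthcheck'
    # cfg["volumes"][0]          -> '/etc/hosts:/etc/hosts:ro'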
Dec 05 08:45:52 np0005546420.localdomain podman[90446]: 2025-12-05 08:45:52.562031319 +0000 UTC m=+0.133871817 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, container_name=collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_id=tripleo_step3, version=17.1.12)
Dec 05 08:45:52 np0005546420.localdomain podman[90445]: 2025-12-05 08:45:52.619082347 +0000 UTC m=+0.190909344 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 05 08:45:52 np0005546420.localdomain podman[90445]: 2025-12-05 08:45:52.642907594 +0000 UTC m=+0.214734561 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller)
Dec 05 08:45:52 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:45:52 np0005546420.localdomain podman[90448]: 2025-12-05 08:45:52.730114055 +0000 UTC m=+0.296750882 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z)
Dec 05 08:45:52 np0005546420.localdomain podman[90446]: 2025-12-05 08:45:52.745866043 +0000 UTC m=+0.317706551 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible)
Dec 05 08:45:52 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:45:52 np0005546420.localdomain podman[90448]: 2025-12-05 08:45:52.775290364 +0000 UTC m=+0.341927171 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:45:52 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
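[annotation] This completes one healthcheck sweep: each container's cycle is "Started ... healthcheck run <id>", a health_status event, an exec_died event, then "<id>.service: Deactivated successfully". A sketch pairing the start and deactivation lines to estimate per-check duration; journal timestamps carry no year, so one is assumed when parsing.

    import re
    from datetime import datetime

    START_RE = re.compile(r"^(\w+ +\d+ [\d:]+) .* healthcheck run (\w+)\.")
    DONE_RE = re.compile(r"^(\w+ +\d+ [\d:]+) .* (\w+)\.service: Deactivated")

    def parse_ts(stamp, year=2025):
        # "Dec 05 08:45:52" -> datetime; year is an assumption
        return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S")

    def durations(path):
        """Map container id -> timedelta between start and deactivation."""
        started, out = {}, {}
        with open(path) as fh:
            for line in fh:
                m = START_RE.match(line)
                if m:
                    started[m.group(2)] = parse_ts(m.group(1))
                    continue
                m = DONE_RE.match(line)
                if m and m.group(2) in started:
                    out[m.group(2)] = parse_ts(m.group(1)) - started.pop(m.group(2))
        return out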
Dec 05 08:46:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:46:05 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:46:05 np0005546420.localdomain recover_tripleo_nova_virtqemud[90559]: 62579
Dec 05 08:46:05 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:46:05 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
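[annotation] The recover unit logs only a bare PID (62579), consistent with a watchdog that verifies the virtqemud process is still alive before finishing. A hypothetical sketch of such a check-and-recover step; the pidfile path and unit name are illustrative assumptions, not the actual TripleO script.

    import os
    import subprocess

    PIDFILE = "/run/virtqemud.pid"          # assumed location
    UNIT = "tripleo_nova_virtqemud.service"  # assumed unit name

    def pid_alive(pid):
        try:
            os.kill(pid, 0)   # signal 0: existence check only, no delivery
            return True
        except ProcessLookupError:
            return False
        except PermissionError:
            return True       # process exists but is owned by another user

    def check_and_recover():
        with open(PIDFILE) as fh:
            pid = int(fh.read().strip())
        print(pid)            # the unit above logs the PID the same way
        if not pid_alive(pid):
            subprocess.run(["systemctl", "restart", UNIT], check=False)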
Dec 05 08:46:05 np0005546420.localdomain podman[90552]: 2025-12-05 08:46:05.51476319 +0000 UTC m=+0.092612279 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true)
Dec 05 08:46:05 np0005546420.localdomain podman[90552]: 2025-12-05 08:46:05.710538574 +0000 UTC m=+0.288387633 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1)
Dec 05 08:46:05 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:46:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:46:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:46:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:46:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:46:15 np0005546420.localdomain podman[90582]: 2025-12-05 08:46:15.527523295 +0000 UTC m=+0.099704849 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:46:15 np0005546420.localdomain podman[90582]: 2025-12-05 08:46:15.563403077 +0000 UTC m=+0.135584601 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:49:32Z, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 05 08:46:15 np0005546420.localdomain systemd[1]: tmp-crun.sIBpAG.mount: Deactivated successfully.
Dec 05 08:46:15 np0005546420.localdomain podman[90583]: 2025-12-05 08:46:15.586484941 +0000 UTC m=+0.154623269 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:12:45Z, architecture=x86_64, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=)
Dec 05 08:46:15 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:46:15 np0005546420.localdomain podman[90583]: 2025-12-05 08:46:15.620281888 +0000 UTC m=+0.188420276 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, distribution-scope=public)
Dec 05 08:46:15 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:46:15 np0005546420.localdomain podman[90584]: 2025-12-05 08:46:15.634327203 +0000 UTC m=+0.197277210 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, distribution-scope=public, container_name=nova_compute, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:46:15 np0005546420.localdomain podman[90584]: 2025-12-05 08:46:15.671323659 +0000 UTC m=+0.234273626 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 08:46:15 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:46:15 np0005546420.localdomain podman[90587]: 2025-12-05 08:46:15.697594123 +0000 UTC m=+0.257657971 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public)
Dec 05 08:46:15 np0005546420.localdomain podman[90587]: 2025-12-05 08:46:15.732489023 +0000 UTC m=+0.292552881 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute)
Dec 05 08:46:15 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
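[annotation] Every health_status event in this batch (logrotate_crond, ceilometer_agent_ipmi, nova_compute, ceilometer_agent_compute) reports healthy. A sketch tallying health_status per container name from exported journal text; the field layout follows the `name=..., health_status=...` pairs visible in the event lines above.

    import re
    from collections import Counter

    EVENT_RE = re.compile(
        r"container health_status \S+ \(image=[^,]+, name=(?P<name>[^,]+),"
        r" health_status=(?P<status>[^,)]+)"
    )

    def tally(path):
        """Count (container_name, health_status) pairs in a journal dump."""
        counts = Counter()
        with open(path) as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if m:
                    counts[(m.group("name"), m.group("status"))] += 1
        return counts

    # tally("journal.log") -> Counter({('collectd', 'healthy'): 2, ...})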
Dec 05 08:46:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:46:18 np0005546420.localdomain sshd[90692]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:46:18 np0005546420.localdomain systemd[1]: tmp-crun.0rOPoe.mount: Deactivated successfully.
Dec 05 08:46:18 np0005546420.localdomain podman[90681]: 2025-12-05 08:46:18.522348928 +0000 UTC m=+0.097011557 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12)
Dec 05 08:46:18 np0005546420.localdomain podman[90681]: 2025-12-05 08:46:18.898654103 +0000 UTC m=+0.473316741 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 05 08:46:18 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:46:19 np0005546420.localdomain sshd[90692]: Received disconnect from 195.250.72.168 port 57556:11: Bye Bye [preauth]
Dec 05 08:46:19 np0005546420.localdomain sshd[90692]: Disconnected from authenticating user root 195.250.72.168 port 57556 [preauth]
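[annotation] The sshd[90692] lines record an unauthenticated root login attempt from 195.250.72.168 that never reached authentication: ssh-rsa was refused as a disabled algorithm and the peer disconnected preauth. A sketch counting such preauth disconnects per source address and username, matching the exact message format logged above:

    import re
    from collections import Counter

    PREAUTH_RE = re.compile(
        r"sshd\[\d+\]: Disconnected from authenticating user "
        r"(?P<user>\S+) (?P<ip>\S+) port \d+ \[preauth\]"
    )

    def preauth_sources(path):
        """Rank (source ip, attempted user) pairs by rejected attempts."""
        hits = Counter()
        with open(path) as fh:
            for line in fh:
                m = PREAUTH_RE.search(line)
                if m:
                    hits[(m.group("ip"), m.group("user"))] += 1
        return hits.most_common()

    # preauth_sources("journal.log") -> [(('195.250.72.168', 'root'), 1), ...]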
Dec 05 08:46:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:46:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:46:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:46:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:46:23 np0005546420.localdomain systemd[1]: tmp-crun.Y72fOx.mount: Deactivated successfully.
Dec 05 08:46:23 np0005546420.localdomain podman[90705]: 2025-12-05 08:46:23.532987534 +0000 UTC m=+0.104656994 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:51:28Z, container_name=collectd, release=1761123044, vendor=Red Hat, Inc., vcs-type=git)
Dec 05 08:46:23 np0005546420.localdomain podman[90705]: 2025-12-05 08:46:23.576946674 +0000 UTC m=+0.148616114 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, config_id=tripleo_step3, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:46:23 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:46:23 np0005546420.localdomain podman[90707]: 2025-12-05 08:46:23.624677343 +0000 UTC m=+0.188571772 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:46:23 np0005546420.localdomain podman[90707]: 2025-12-05 08:46:23.674267119 +0000 UTC m=+0.238161588 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=)
Dec 05 08:46:23 np0005546420.localdomain podman[90706]: 2025-12-05 08:46:23.685796396 +0000 UTC m=+0.251652776 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, build-date=2025-11-18T23:44:13Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:46:23 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:46:23 np0005546420.localdomain podman[90706]: 2025-12-05 08:46:23.699284144 +0000 UTC m=+0.265140524 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044)
Dec 05 08:46:23 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:46:23 np0005546420.localdomain podman[90704]: 2025-12-05 08:46:23.57679739 +0000 UTC m=+0.148746549 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:46:23 np0005546420.localdomain podman[90704]: 2025-12-05 08:46:23.763378999 +0000 UTC m=+0.335328178 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, release=1761123044, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc.)
Dec 05 08:46:23 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
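The lines above are one complete healthcheck sweep: for each container, systemd starts a transient service named after the 64-hex container ID that runs /usr/bin/podman healthcheck run <ID>; podman then emits a health_status event carrying the container's full label set, followed by an exec_died event when the check's exec session exits, after which the transient unit deactivates. The tmp-crun.*.mount lines are crun's short-lived temporary mounts being cleaned up. A minimal sketch of reading the same stored health state directly, assuming a host with podman available and using container names taken from the events above:

    #!/usr/bin/env python3
    # Minimal sketch: read the health state that "podman healthcheck run"
    # records, via "podman inspect". Container names are examples from the
    # events above; adjust for your own host.
    import json
    import subprocess

    def health_status(container: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", container],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # Podman releases have used both key spellings for this block.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    if __name__ == "__main__":
        for name in ("collectd", "iscsid", "ovn_controller", "ovn_metadata_agent"):
            print(name, health_status(name))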
Dec 05 08:46:31 np0005546420.localdomain sshd[90788]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:46:32 np0005546420.localdomain sshd[90788]: Received disconnect from 93.157.248.178 port 54352:11: Bye Bye [preauth]
Dec 05 08:46:32 np0005546420.localdomain sshd[90788]: Disconnected from authenticating user root 93.157.248.178 port 54352 [preauth]
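These three sshd lines record a failed root login attempt from 93.157.248.178 that ended before authentication completed ([preauth]); the "ssh-rsa algorithm is disabled" note reflects RHEL 9's default crypto policy rejecting SHA-1-signed ssh-rsa keys. A minimal sketch for tallying such attempts offline from a saved journal dump (e.g. journalctl -u sshd > sshd.log), assuming the standard OpenSSH wording seen above:

    # Count pre-auth disconnects per (user, source IP) from journal text on stdin.
    import re
    import sys
    from collections import Counter

    PAT = re.compile(
        r"Disconnected from authenticating user (?P<user>\S+) "
        r"(?P<ip>[0-9a-fA-F.:]+) port \d+ \[preauth\]"
    )

    counts = Counter()
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            counts[(m["user"], m["ip"])] += 1

    for (user, ip), n in counts.most_common():
        print(f"{n:5d}  {user}@{ip}")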
Dec 05 08:46:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:46:36 np0005546420.localdomain podman[90790]: 2025-12-05 08:46:36.512333097 +0000 UTC m=+0.090975400 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1)
Dec 05 08:46:36 np0005546420.localdomain podman[90790]: 2025-12-05 08:46:36.705328484 +0000 UTC m=+0.283970767 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, container_name=metrics_qdr, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true)
Dec 05 08:46:36 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
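Each podman process stamps its events with both a wall-clock time and a per-process monotonic offset (m=+<seconds>), so the gap between a container's health_status and exec_died events approximates the runtime of its healthcheck command: for metrics_qdr above, 0.284 - 0.091 is roughly 0.193 s. A minimal sketch that pairs the two events by container ID, assuming journal text like the above on stdin:

    # Pair health_status/exec_died events per container and print the gap.
    import re
    import sys

    EVT = re.compile(
        r"m=\+(?P<mono>\d+\.\d+) container (?P<event>health_status|exec_died) "
        r"(?P<cid>[0-9a-f]{64})"
    )

    started = {}
    for line in sys.stdin:
        m = EVT.search(line)
        if not m:
            continue
        if m["event"] == "health_status":
            started[m["cid"]] = float(m["mono"])
        elif m["cid"] in started:
            gap = float(m["mono"]) - started.pop(m["cid"])
            print(f"{m['cid'][:12]}  healthcheck took ~{gap:.3f}s")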
Dec 05 08:46:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:46:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:46:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:46:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:46:46 np0005546420.localdomain podman[90820]: 2025-12-05 08:46:46.519363684 +0000 UTC m=+0.092269119 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:46:46 np0005546420.localdomain systemd[1]: tmp-crun.cKJjZc.mount: Deactivated successfully.
Dec 05 08:46:46 np0005546420.localdomain podman[90822]: 2025-12-05 08:46:46.57799829 +0000 UTC m=+0.143791124 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, release=1761123044, container_name=nova_compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Dec 05 08:46:46 np0005546420.localdomain podman[90820]: 2025-12-05 08:46:46.582273552 +0000 UTC m=+0.155179007 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, io.openshift.expose-services=, distribution-scope=public, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64)
Dec 05 08:46:46 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:46:46 np0005546420.localdomain podman[90822]: 2025-12-05 08:46:46.662191147 +0000 UTC m=+0.227983971 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public)
Dec 05 08:46:46 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:46:46 np0005546420.localdomain podman[90821]: 2025-12-05 08:46:46.680174024 +0000 UTC m=+0.250413156 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12)
Dec 05 08:46:46 np0005546420.localdomain podman[90825]: 2025-12-05 08:46:46.648857084 +0000 UTC m=+0.209274832 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12)
Dec 05 08:46:46 np0005546420.localdomain podman[90825]: 2025-12-05 08:46:46.731442202 +0000 UTC m=+0.291859980 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_compute, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:46:46 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:46:46 np0005546420.localdomain podman[90821]: 2025-12-05 08:46:46.786646351 +0000 UTC m=+0.356885453 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1)
Dec 05 08:46:46 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
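The config_data=... label on every event embeds the full TripleO container definition (image, net/pid namespaces, healthcheck test, volumes) as a Python-style dict literal, so it can be recovered mechanically instead of read by eye. A minimal sketch, assuming only that the braces in the label are balanced (they are in these lines, since no string value contains one):

    # Slice the balanced {...} span after "config_data=" and parse it.
    import ast
    import sys

    def extract_config_data(line: str):
        start = line.find("config_data=")
        if start == -1:
            return None
        i = line.index("{", start)
        depth = 0
        for j in range(i, len(line)):
            if line[j] == "{":
                depth += 1
            elif line[j] == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[i:j + 1])
        return None

    for line in sys.stdin:
        cfg = extract_config_data(line)
        if cfg:
            print(cfg["image"], cfg.get("healthcheck", {}).get("test"))

For the ceilometer_agent_ipmi events above, this prints the image reference and its '/openstack/healthcheck' test command.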
Dec 05 08:46:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:46:49 np0005546420.localdomain podman[90920]: 2025-12-05 08:46:49.508230672 +0000 UTC m=+0.087478131 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public)
Dec 05 08:46:49 np0005546420.localdomain podman[90920]: 2025-12-05 08:46:49.908375135 +0000 UTC m=+0.487622554 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:46:49 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
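Note that nova_compute and nova_migration_target are distinct containers built from the same openstack-nova-compute image; only the name=/container_name labels and their config_data tell them apart. A minimal sketch grouping the events by image, assuming journal text like the above on stdin:

    # Map each image reference to the set of container names using it.
    import re
    import sys
    from collections import defaultdict

    EVT = re.compile(
        r"container \w+ [0-9a-f]{64} \(image=(?P<img>[^,]+), name=(?P<name>[^,]+),"
    )

    by_image = defaultdict(set)
    for line in sys.stdin:
        m = EVT.search(line)
        if m:
            by_image[m["img"]].add(m["name"])

    for img, names in sorted(by_image.items()):
        print(img, "->", ", ".join(sorted(names)))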
Dec 05 08:46:51 np0005546420.localdomain sudo[90943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:46:51 np0005546420.localdomain sudo[90943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:46:51 np0005546420.localdomain sudo[90943]: pam_unix(sudo:session): session closed for user root
Dec 05 08:46:51 np0005546420.localdomain sudo[90958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:46:51 np0005546420.localdomain sudo[90958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:46:52 np0005546420.localdomain sudo[90958]: pam_unix(sudo:session): session closed for user root
Dec 05 08:46:52 np0005546420.localdomain sudo[91006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:46:52 np0005546420.localdomain sudo[91006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:46:52 np0005546420.localdomain sudo[91006]: pam_unix(sudo:session): session closed for user root
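The sudo lines show the cephadm orchestrator for cluster 79feddb1-4bfc-557f-83b9-0d57c9f66c1b probing this host as the ceph-admin user: locating python3, running gather-facts through the staged cephadm binary, and listing /etc/sysctl.d, each command bracketed by a pam_unix session open/close pair. A minimal sketch for auditing what was escalated, assuming the standard sudo log format seen above:

    # Print every command ceph-admin ran through sudo, from journal text on stdin.
    import re
    import sys

    SUDO = re.compile(
        r"sudo\[\d+\]: (?P<user>\S+) : PWD=(?P<pwd>\S+) ; "
        r"USER=(?P<target>\S+) ; COMMAND=(?P<cmd>.+)$"
    )

    for line in sys.stdin:
        m = SUDO.search(line)
        if m and m["user"] == "ceph-admin":
            print(f"as {m['target']} in {m['pwd']}: {m['cmd']}")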
Dec 05 08:46:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:46:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:46:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:46:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:46:54 np0005546420.localdomain systemd[1]: tmp-crun.EHn1cu.mount: Deactivated successfully.
Dec 05 08:46:54 np0005546420.localdomain podman[91024]: 2025-12-05 08:46:54.509683122 +0000 UTC m=+0.079422001 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=)
Dec 05 08:46:54 np0005546420.localdomain podman[91022]: 2025-12-05 08:46:54.565704067 +0000 UTC m=+0.135878779 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:46:54 np0005546420.localdomain podman[91022]: 2025-12-05 08:46:54.59970275 +0000 UTC m=+0.169877502 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:46:54 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:46:54 np0005546420.localdomain podman[91023]: 2025-12-05 08:46:54.614740506 +0000 UTC m=+0.184296059 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, distribution-scope=public, container_name=iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:46:54 np0005546420.localdomain podman[91023]: 2025-12-05 08:46:54.654450066 +0000 UTC m=+0.224005589 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, distribution-scope=public)
Dec 05 08:46:54 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:46:54 np0005546420.localdomain podman[91021]: 2025-12-05 08:46:54.678001215 +0000 UTC m=+0.250414377 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:46:54 np0005546420.localdomain podman[91024]: 2025-12-05 08:46:54.689298915 +0000 UTC m=+0.259037844 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:46:54 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:46:54 np0005546420.localdomain podman[91021]: 2025-12-05 08:46:54.734501475 +0000 UTC m=+0.306914597 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, version=17.1.12, release=1761123044, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller)
Dec 05 08:46:54 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:47:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:47:07 np0005546420.localdomain podman[91106]: 2025-12-05 08:47:07.507845279 +0000 UTC m=+0.085902451 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step1, container_name=metrics_qdr)
Dec 05 08:47:07 np0005546420.localdomain podman[91106]: 2025-12-05 08:47:07.68964443 +0000 UTC m=+0.267701582 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-qdrouterd)
Dec 05 08:47:07 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:47:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:47:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:47:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:47:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:47:17 np0005546420.localdomain systemd[1]: tmp-crun.G2NDsB.mount: Deactivated successfully.
Dec 05 08:47:17 np0005546420.localdomain podman[91135]: 2025-12-05 08:47:17.537770448 +0000 UTC m=+0.107024297 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-19T00:12:45Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible)
Dec 05 08:47:17 np0005546420.localdomain podman[91136]: 2025-12-05 08:47:17.58146653 +0000 UTC m=+0.145373923 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z)
Dec 05 08:47:17 np0005546420.localdomain podman[91135]: 2025-12-05 08:47:17.623393949 +0000 UTC m=+0.192647778 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:47:17 np0005546420.localdomain podman[91136]: 2025-12-05 08:47:17.639458167 +0000 UTC m=+0.203365590 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, tcib_managed=true, release=1761123044, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:47:17 np0005546420.localdomain podman[91134]: 2025-12-05 08:47:17.638465285 +0000 UTC m=+0.207127825 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, container_name=logrotate_crond, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com)
Dec 05 08:47:17 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:47:17 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:47:17 np0005546420.localdomain podman[91138]: 2025-12-05 08:47:17.723164259 +0000 UTC m=+0.283174501 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, vcs-type=git, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team)
Dec 05 08:47:17 np0005546420.localdomain podman[91134]: 2025-12-05 08:47:17.776019546 +0000 UTC m=+0.344682086 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, container_name=logrotate_crond, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, url=https://www.redhat.com)
Dec 05 08:47:17 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:47:17 np0005546420.localdomain podman[91138]: 2025-12-05 08:47:17.831820724 +0000 UTC m=+0.391830966 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 05 08:47:17 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:47:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:47:20 np0005546420.localdomain systemd[1]: tmp-crun.VyVP87.mount: Deactivated successfully.
Dec 05 08:47:20 np0005546420.localdomain podman[91231]: 2025-12-05 08:47:20.507534164 +0000 UTC m=+0.089365489 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, tcib_managed=true)
Dec 05 08:47:20 np0005546420.localdomain podman[91231]: 2025-12-05 08:47:20.881389582 +0000 UTC m=+0.463220827 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=nova_migration_target, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 08:47:20 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:47:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:47:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:47:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:47:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:47:25 np0005546420.localdomain systemd[1]: tmp-crun.o9v9eq.mount: Deactivated successfully.
Dec 05 08:47:25 np0005546420.localdomain podman[91255]: 2025-12-05 08:47:25.518978184 +0000 UTC m=+0.092887779 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-type=git, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:47:25 np0005546420.localdomain systemd[1]: tmp-crun.oWZ50n.mount: Deactivated successfully.
Dec 05 08:47:25 np0005546420.localdomain podman[91258]: 2025-12-05 08:47:25.576253087 +0000 UTC m=+0.138371826 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12)
Dec 05 08:47:25 np0005546420.localdomain podman[91258]: 2025-12-05 08:47:25.617257307 +0000 UTC m=+0.179375986 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:47:25 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:47:25 np0005546420.localdomain podman[91256]: 2025-12-05 08:47:25.628151844 +0000 UTC m=+0.194716601 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step3, container_name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, vcs-type=git)
Dec 05 08:47:25 np0005546420.localdomain podman[91255]: 2025-12-05 08:47:25.648902547 +0000 UTC m=+0.222812172 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, release=1761123044, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:47:25 np0005546420.localdomain podman[91257]: 2025-12-05 08:47:25.684297504 +0000 UTC m=+0.248351883 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64)
Dec 05 08:47:25 np0005546420.localdomain podman[91257]: 2025-12-05 08:47:25.70033741 +0000 UTC m=+0.264391759 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, container_name=iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 05 08:47:25 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:47:25 np0005546420.localdomain podman[91256]: 2025-12-05 08:47:25.716848831 +0000 UTC m=+0.283413618 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, architecture=x86_64)
Dec 05 08:47:25 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:47:25 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:47:27 np0005546420.localdomain sshd[91341]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:47:28 np0005546420.localdomain sshd[91341]: Received disconnect from 195.250.72.168 port 44018:11: Bye Bye [preauth]
Dec 05 08:47:28 np0005546420.localdomain sshd[91341]: Disconnected from authenticating user root 195.250.72.168 port 44018 [preauth]
Dec 05 08:47:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:47:38 np0005546420.localdomain podman[91343]: 2025-12-05 08:47:38.515134517 +0000 UTC m=+0.086855751 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, release=1761123044, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:47:38 np0005546420.localdomain podman[91343]: 2025-12-05 08:47:38.708519577 +0000 UTC m=+0.280240771 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, release=1761123044, managed_by=tripleo_ansible)
Dec 05 08:47:38 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:47:48 np0005546420.localdomain recover_tripleo_nova_virtqemud[91397]: 62579
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:47:48 np0005546420.localdomain podman[91378]: 2025-12-05 08:47:48.529265217 +0000 UTC m=+0.094839748 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ceilometer_agent_compute, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:47:48 np0005546420.localdomain podman[91378]: 2025-12-05 08:47:48.560918018 +0000 UTC m=+0.126492549 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, architecture=x86_64)
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: tmp-crun.lgkTiT.mount: Deactivated successfully.
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:47:48 np0005546420.localdomain podman[91377]: 2025-12-05 08:47:48.576753218 +0000 UTC m=+0.147030765 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044)
Dec 05 08:47:48 np0005546420.localdomain podman[91376]: 2025-12-05 08:47:48.625337873 +0000 UTC m=+0.195789385 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1)
Dec 05 08:47:48 np0005546420.localdomain podman[91377]: 2025-12-05 08:47:48.63072018 +0000 UTC m=+0.200997747 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, config_id=tripleo_step5, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:47:48 np0005546420.localdomain podman[91375]: 2025-12-05 08:47:48.670464871 +0000 UTC m=+0.247194847 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:47:48 np0005546420.localdomain podman[91376]: 2025-12-05 08:47:48.677069835 +0000 UTC m=+0.247521347 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, release=1761123044, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:47:48 np0005546420.localdomain podman[91375]: 2025-12-05 08:47:48.701889654 +0000 UTC m=+0.278619640 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12, url=https://www.redhat.com, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 05 08:47:48 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:47:50 np0005546420.localdomain sshd[91474]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:47:51 np0005546420.localdomain sshd[91474]: Received disconnect from 93.157.248.178 port 42410:11: Bye Bye [preauth]
Dec 05 08:47:51 np0005546420.localdomain sshd[91474]: Disconnected from authenticating user root 93.157.248.178 port 42410 [preauth]
Dec 05 08:47:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:47:51 np0005546420.localdomain systemd[1]: tmp-crun.uTnqUy.mount: Deactivated successfully.
Dec 05 08:47:51 np0005546420.localdomain podman[91476]: 2025-12-05 08:47:51.497522887 +0000 UTC m=+0.085163498 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1761123044, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1)
Dec 05 08:47:51 np0005546420.localdomain podman[91476]: 2025-12-05 08:47:51.860068185 +0000 UTC m=+0.447708776 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4)
Dec 05 08:47:51 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:47:52 np0005546420.localdomain sudo[91500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:47:52 np0005546420.localdomain sudo[91500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:47:52 np0005546420.localdomain sudo[91500]: pam_unix(sudo:session): session closed for user root
Dec 05 08:47:53 np0005546420.localdomain sudo[91515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:47:53 np0005546420.localdomain sudo[91515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:47:53 np0005546420.localdomain sudo[91515]: pam_unix(sudo:session): session closed for user root
Dec 05 08:47:54 np0005546420.localdomain sudo[91562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:47:54 np0005546420.localdomain sudo[91562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:47:54 np0005546420.localdomain sudo[91562]: pam_unix(sudo:session): session closed for user root
Dec 05 08:47:54 np0005546420.localdomain sshd[91577]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:47:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:47:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:47:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:47:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:47:56 np0005546420.localdomain podman[91581]: 2025-12-05 08:47:56.529039788 +0000 UTC m=+0.095196889 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:47:56 np0005546420.localdomain podman[91580]: 2025-12-05 08:47:56.562168315 +0000 UTC m=+0.130994609 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 08:47:56 np0005546420.localdomain podman[91581]: 2025-12-05 08:47:56.582312278 +0000 UTC m=+0.148469369 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4)
Dec 05 08:47:56 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:47:56 np0005546420.localdomain podman[91580]: 2025-12-05 08:47:56.60432808 +0000 UTC m=+0.173154394 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:47:56 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:47:56 np0005546420.localdomain podman[91578]: 2025-12-05 08:47:56.672994906 +0000 UTC m=+0.241153679 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:47:56 np0005546420.localdomain podman[91578]: 2025-12-05 08:47:56.724325297 +0000 UTC m=+0.292484090 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 05 08:47:56 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:47:56 np0005546420.localdomain podman[91579]: 2025-12-05 08:47:56.727789444 +0000 UTC m=+0.295948207 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, distribution-scope=public, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 05 08:47:56 np0005546420.localdomain podman[91579]: 2025-12-05 08:47:56.81224626 +0000 UTC m=+0.380405003 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-collectd, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:47:56 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:48:05 np0005546420.localdomain sshd[91577]: error: kex_exchange_identification: read: Connection timed out
Dec 05 08:48:05 np0005546420.localdomain sshd[91577]: banner exchange: Connection from 180.184.182.87 port 43996: Connection timed out
Dec 05 08:48:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:48:09 np0005546420.localdomain systemd[1]: tmp-crun.Xt4f4V.mount: Deactivated successfully.
Dec 05 08:48:09 np0005546420.localdomain podman[91665]: 2025-12-05 08:48:09.52027012 +0000 UTC m=+0.094659382 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vcs-type=git)
Dec 05 08:48:09 np0005546420.localdomain podman[91665]: 2025-12-05 08:48:09.766461525 +0000 UTC m=+0.340850737 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:48:09 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:48:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:48:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:48:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:48:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:48:19 np0005546420.localdomain podman[91694]: 2025-12-05 08:48:19.5085256 +0000 UTC m=+0.087373517 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=logrotate_crond, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:48:19 np0005546420.localdomain podman[91694]: 2025-12-05 08:48:19.521405188 +0000 UTC m=+0.100253135 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true)
Dec 05 08:48:19 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:48:19 np0005546420.localdomain podman[91696]: 2025-12-05 08:48:19.564300707 +0000 UTC m=+0.134889958 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 05 08:48:19 np0005546420.localdomain systemd[1]: tmp-crun.TRNOZx.mount: Deactivated successfully.
Dec 05 08:48:19 np0005546420.localdomain podman[91696]: 2025-12-05 08:48:19.616607037 +0000 UTC m=+0.187196268 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public)
Dec 05 08:48:19 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:48:19 np0005546420.localdomain podman[91699]: 2025-12-05 08:48:19.636870555 +0000 UTC m=+0.203048830 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=)
Dec 05 08:48:19 np0005546420.localdomain podman[91695]: 2025-12-05 08:48:19.609479197 +0000 UTC m=+0.185927910 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.12, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 05 08:48:19 np0005546420.localdomain podman[91699]: 2025-12-05 08:48:19.66351255 +0000 UTC m=+0.229690775 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_compute, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z)
Dec 05 08:48:19 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:48:19 np0005546420.localdomain podman[91695]: 2025-12-05 08:48:19.68838512 +0000 UTC m=+0.264833913 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:48:19 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:48:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:48:22 np0005546420.localdomain systemd[1]: tmp-crun.aGlA7l.mount: Deactivated successfully.
Dec 05 08:48:22 np0005546420.localdomain podman[91791]: 2025-12-05 08:48:22.518097319 +0000 UTC m=+0.097730078 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.buildah.version=1.41.4)
Dec 05 08:48:22 np0005546420.localdomain podman[91791]: 2025-12-05 08:48:22.901407181 +0000 UTC m=+0.481039900 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 05 08:48:22 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:48:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:48:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:48:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:48:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:48:27 np0005546420.localdomain podman[91818]: 2025-12-05 08:48:27.512188052 +0000 UTC m=+0.077996597 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, container_name=ovn_metadata_agent, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:48:27 np0005546420.localdomain podman[91815]: 2025-12-05 08:48:27.580851268 +0000 UTC m=+0.150085299 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=)
Dec 05 08:48:27 np0005546420.localdomain podman[91818]: 2025-12-05 08:48:27.609924659 +0000 UTC m=+0.175733214 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4)
Dec 05 08:48:27 np0005546420.localdomain podman[91815]: 2025-12-05 08:48:27.609541167 +0000 UTC m=+0.178775208 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ovn_controller, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-ovn-controller-container)
Dec 05 08:48:27 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:48:27 np0005546420.localdomain podman[91816]: 2025-12-05 08:48:27.642317672 +0000 UTC m=+0.208756586 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, name=rhosp17/openstack-collectd, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_id=tripleo_step3)
Dec 05 08:48:27 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:48:27 np0005546420.localdomain podman[91817]: 2025-12-05 08:48:27.685474269 +0000 UTC m=+0.251438249 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true)
Dec 05 08:48:27 np0005546420.localdomain podman[91817]: 2025-12-05 08:48:27.699251515 +0000 UTC m=+0.265215485 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, container_name=iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 05 08:48:27 np0005546420.localdomain podman[91816]: 2025-12-05 08:48:27.706723286 +0000 UTC m=+0.273162200 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, container_name=collectd, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container)
Dec 05 08:48:27 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:48:27 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:48:28 np0005546420.localdomain systemd[1]: tmp-crun.iSedUQ.mount: Deactivated successfully.
Dec 05 08:48:33 np0005546420.localdomain sshd[91900]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:48:34 np0005546420.localdomain sshd[91900]: Received disconnect from 195.250.72.168 port 35358:11: Bye Bye [preauth]
Dec 05 08:48:34 np0005546420.localdomain sshd[91900]: Disconnected from authenticating user root 195.250.72.168 port 35358 [preauth]
Dec 05 08:48:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:48:40 np0005546420.localdomain podman[91902]: 2025-12-05 08:48:40.518115538 +0000 UTC m=+0.091602118 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.buildah.version=1.41.4, config_id=tripleo_step1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, architecture=x86_64, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:48:40 np0005546420.localdomain podman[91902]: 2025-12-05 08:48:40.729447003 +0000 UTC m=+0.302933623 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, architecture=x86_64, vcs-type=git, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., container_name=metrics_qdr, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:48:40 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:48:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:48:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:48:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:48:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:48:50 np0005546420.localdomain systemd[1]: tmp-crun.DFcdj5.mount: Deactivated successfully.
Dec 05 08:48:50 np0005546420.localdomain podman[91931]: 2025-12-05 08:48:50.535089566 +0000 UTC m=+0.106935513 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git)
Dec 05 08:48:50 np0005546420.localdomain podman[91931]: 2025-12-05 08:48:50.566455927 +0000 UTC m=+0.138301864 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:48:50 np0005546420.localdomain podman[91932]: 2025-12-05 08:48:50.581717781 +0000 UTC m=+0.151549045 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:48:50 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:48:50 np0005546420.localdomain podman[91933]: 2025-12-05 08:48:50.631177353 +0000 UTC m=+0.195777275 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, release=1761123044, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z)
Dec 05 08:48:50 np0005546420.localdomain podman[91930]: 2025-12-05 08:48:50.668065385 +0000 UTC m=+0.241701067 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, container_name=logrotate_crond, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:48:50 np0005546420.localdomain podman[91933]: 2025-12-05 08:48:50.687758925 +0000 UTC m=+0.252358777 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:48:50 np0005546420.localdomain podman[91930]: 2025-12-05 08:48:50.702997036 +0000 UTC m=+0.276632698 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:48:50 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:48:50 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:48:50 np0005546420.localdomain podman[91932]: 2025-12-05 08:48:50.73993757 +0000 UTC m=+0.309768834 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:48:50 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:48:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:48:53 np0005546420.localdomain podman[92025]: 2025-12-05 08:48:53.512129038 +0000 UTC m=+0.085633863 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=nova_migration_target, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=)
Dec 05 08:48:53 np0005546420.localdomain podman[92025]: 2025-12-05 08:48:53.885643046 +0000 UTC m=+0.459147881 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:48:53 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:48:54 np0005546420.localdomain sudo[92048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:48:54 np0005546420.localdomain sudo[92048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:48:54 np0005546420.localdomain sudo[92048]: pam_unix(sudo:session): session closed for user root
Dec 05 08:48:54 np0005546420.localdomain sudo[92063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:48:54 np0005546420.localdomain sudo[92063]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:48:55 np0005546420.localdomain sudo[92063]: pam_unix(sudo:session): session closed for user root
Dec 05 08:48:55 np0005546420.localdomain sudo[92111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:48:55 np0005546420.localdomain sudo[92111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:48:55 np0005546420.localdomain sudo[92111]: pam_unix(sudo:session): session closed for user root
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: tmp-crun.jeL5Ug.mount: Deactivated successfully.
Dec 05 08:48:58 np0005546420.localdomain podman[92126]: 2025-12-05 08:48:58.536430335 +0000 UTC m=+0.105234300 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible)
Dec 05 08:48:58 np0005546420.localdomain podman[92127]: 2025-12-05 08:48:58.572670498 +0000 UTC m=+0.140982877 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z)
Dec 05 08:48:58 np0005546420.localdomain podman[92127]: 2025-12-05 08:48:58.584386712 +0000 UTC m=+0.152699111 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:48:58 np0005546420.localdomain podman[92126]: 2025-12-05 08:48:58.671361175 +0000 UTC m=+0.240165109 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12)
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:48:58 np0005546420.localdomain podman[92129]: 2025-12-05 08:48:58.73515516 +0000 UTC m=+0.296417750 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:48:58 np0005546420.localdomain podman[92128]: 2025-12-05 08:48:58.640775308 +0000 UTC m=+0.203507734 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, container_name=iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, release=1761123044, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container)
Dec 05 08:48:58 np0005546420.localdomain podman[92128]: 2025-12-05 08:48:58.776501391 +0000 UTC m=+0.339233747 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 08:48:58 np0005546420.localdomain podman[92129]: 2025-12-05 08:48:58.78843525 +0000 UTC m=+0.349697810 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4)
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:48:58 np0005546420.localdomain recover_tripleo_nova_virtqemud[92213]: 62579
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:48:58 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:48:59 np0005546420.localdomain systemd[1]: tmp-crun.UcXmCd.mount: Deactivated successfully.
Dec 05 08:49:09 np0005546420.localdomain sshd[92214]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:49:10 np0005546420.localdomain sshd[92214]: Received disconnect from 93.157.248.178 port 43272:11: Bye Bye [preauth]
Dec 05 08:49:10 np0005546420.localdomain sshd[92214]: Disconnected from authenticating user root 93.157.248.178 port 43272 [preauth]
Dec 05 08:49:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:49:11 np0005546420.localdomain podman[92216]: 2025-12-05 08:49:11.043664478 +0000 UTC m=+0.095273762 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=metrics_qdr, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd)
Dec 05 08:49:11 np0005546420.localdomain podman[92216]: 2025-12-05 08:49:11.276689255 +0000 UTC m=+0.328298209 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible)
Dec 05 08:49:11 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:49:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:49:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:49:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:49:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:49:21 np0005546420.localdomain podman[92247]: 2025-12-05 08:49:21.52054482 +0000 UTC m=+0.091879407 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64)
Dec 05 08:49:21 np0005546420.localdomain podman[92245]: 2025-12-05 08:49:21.568268568 +0000 UTC m=+0.144820827 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:49:21 np0005546420.localdomain podman[92247]: 2025-12-05 08:49:21.600444795 +0000 UTC m=+0.171779372 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:49:21 np0005546420.localdomain podman[92245]: 2025-12-05 08:49:21.606584395 +0000 UTC m=+0.183136664 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 05 08:49:21 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:49:21 np0005546420.localdomain systemd[1]: tmp-crun.zO3eTZ.mount: Deactivated successfully.
Dec 05 08:49:21 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:49:21 np0005546420.localdomain podman[92246]: 2025-12-05 08:49:21.63615659 +0000 UTC m=+0.208017723 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:49:21 np0005546420.localdomain podman[92246]: 2025-12-05 08:49:21.668328737 +0000 UTC m=+0.240189860 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:49:21 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:49:21 np0005546420.localdomain podman[92248]: 2025-12-05 08:49:21.681805705 +0000 UTC m=+0.247257480 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:49:21 np0005546420.localdomain podman[92248]: 2025-12-05 08:49:21.741441471 +0000 UTC m=+0.306893216 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z)
Dec 05 08:49:21 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:49:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:49:24 np0005546420.localdomain podman[92343]: 2025-12-05 08:49:24.521865943 +0000 UTC m=+0.097436108 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com)
Dec 05 08:49:24 np0005546420.localdomain podman[92343]: 2025-12-05 08:49:24.890376457 +0000 UTC m=+0.465946702 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:49:24 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:49:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:49:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:49:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:49:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:49:29 np0005546420.localdomain podman[92369]: 2025-12-05 08:49:29.51871865 +0000 UTC m=+0.088901585 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:49:29 np0005546420.localdomain systemd[1]: tmp-crun.efM9jn.mount: Deactivated successfully.
Dec 05 08:49:29 np0005546420.localdomain podman[92366]: 2025-12-05 08:49:29.58363619 +0000 UTC m=+0.158757747 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, architecture=x86_64)
Dec 05 08:49:29 np0005546420.localdomain podman[92366]: 2025-12-05 08:49:29.60946798 +0000 UTC m=+0.184589577 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 05 08:49:29 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:49:29 np0005546420.localdomain podman[92367]: 2025-12-05 08:49:29.630046838 +0000 UTC m=+0.203835174 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com)
Dec 05 08:49:29 np0005546420.localdomain podman[92369]: 2025-12-05 08:49:29.634775314 +0000 UTC m=+0.204958249 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 05 08:49:29 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:49:29 np0005546420.localdomain podman[92368]: 2025-12-05 08:49:29.684410552 +0000 UTC m=+0.256658830 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 05 08:49:29 np0005546420.localdomain podman[92367]: 2025-12-05 08:49:29.691344077 +0000 UTC m=+0.265132393 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64)
Dec 05 08:49:29 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:49:29 np0005546420.localdomain podman[92368]: 2025-12-05 08:49:29.720574882 +0000 UTC m=+0.292823120 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, architecture=x86_64, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:49:29 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:49:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:49:41 np0005546420.localdomain podman[92455]: 2025-12-05 08:49:41.512357884 +0000 UTC m=+0.092274009 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:49:41 np0005546420.localdomain podman[92455]: 2025-12-05 08:49:41.704581938 +0000 UTC m=+0.284498063 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=metrics_qdr, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:49:41 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:49:44 np0005546420.localdomain sshd[92486]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:49:45 np0005546420.localdomain sshd[92486]: Received disconnect from 195.250.72.168 port 46314:11: Bye Bye [preauth]
Dec 05 08:49:45 np0005546420.localdomain sshd[92486]: Disconnected from authenticating user root 195.250.72.168 port 46314 [preauth]
Dec 05 08:49:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:49:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:49:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:49:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:49:52 np0005546420.localdomain systemd[1]: tmp-crun.szwFpp.mount: Deactivated successfully.
Dec 05 08:49:52 np0005546420.localdomain podman[92491]: 2025-12-05 08:49:52.513945635 +0000 UTC m=+0.077625755 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4)
Dec 05 08:49:52 np0005546420.localdomain podman[92489]: 2025-12-05 08:49:52.567793003 +0000 UTC m=+0.134201828 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:49:52 np0005546420.localdomain podman[92491]: 2025-12-05 08:49:52.571354643 +0000 UTC m=+0.135034763 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, tcib_managed=true, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4)
Dec 05 08:49:52 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:49:52 np0005546420.localdomain podman[92489]: 2025-12-05 08:49:52.601423974 +0000 UTC m=+0.167832859 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:49:52 np0005546420.localdomain podman[92488]: 2025-12-05 08:49:52.622516347 +0000 UTC m=+0.193089311 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4)
Dec 05 08:49:52 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:49:52 np0005546420.localdomain podman[92488]: 2025-12-05 08:49:52.62906408 +0000 UTC m=+0.199637034 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:49:52 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:49:52 np0005546420.localdomain podman[92490]: 2025-12-05 08:49:52.700334797 +0000 UTC m=+0.267924219 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container)
Dec 05 08:49:52 np0005546420.localdomain podman[92490]: 2025-12-05 08:49:52.747403685 +0000 UTC m=+0.314993097 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, release=1761123044)
Dec 05 08:49:52 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:49:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:49:55 np0005546420.localdomain podman[92588]: 2025-12-05 08:49:55.492556895 +0000 UTC m=+0.074901871 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:49:55 np0005546420.localdomain podman[92588]: 2025-12-05 08:49:55.855870708 +0000 UTC m=+0.438215694 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:49:55 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:49:56 np0005546420.localdomain sudo[92611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:49:56 np0005546420.localdomain sudo[92611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:49:56 np0005546420.localdomain sudo[92611]: pam_unix(sudo:session): session closed for user root
Dec 05 08:49:56 np0005546420.localdomain sudo[92626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:49:56 np0005546420.localdomain sudo[92626]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:49:56 np0005546420.localdomain sudo[92626]: pam_unix(sudo:session): session closed for user root
Dec 05 08:49:58 np0005546420.localdomain sudo[92674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:49:58 np0005546420.localdomain sudo[92674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:49:58 np0005546420.localdomain sudo[92674]: pam_unix(sudo:session): session closed for user root
Dec 05 08:50:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:50:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:50:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:50:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:50:00 np0005546420.localdomain podman[92691]: 2025-12-05 08:50:00.517022267 +0000 UTC m=+0.086916613 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, config_id=tripleo_step3, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:50:00 np0005546420.localdomain podman[92691]: 2025-12-05 08:50:00.557379457 +0000 UTC m=+0.127273753 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T23:44:13Z)
Dec 05 08:50:00 np0005546420.localdomain systemd[1]: tmp-crun.lej81c.mount: Deactivated successfully.
Dec 05 08:50:00 np0005546420.localdomain podman[92690]: 2025-12-05 08:50:00.570917667 +0000 UTC m=+0.145361373 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=collectd)
Dec 05 08:50:00 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:50:00 np0005546420.localdomain podman[92690]: 2025-12-05 08:50:00.611474422 +0000 UTC m=+0.185918148 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-collectd)
Dec 05 08:50:00 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:50:00 np0005546420.localdomain podman[92689]: 2025-12-05 08:50:00.674210505 +0000 UTC m=+0.249337593 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public)
Dec 05 08:50:00 np0005546420.localdomain podman[92689]: 2025-12-05 08:50:00.703324747 +0000 UTC m=+0.278451795 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public)
Dec 05 08:50:00 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:50:00 np0005546420.localdomain podman[92692]: 2025-12-05 08:50:00.624839666 +0000 UTC m=+0.192652127 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z)
Dec 05 08:50:00 np0005546420.localdomain podman[92692]: 2025-12-05 08:50:00.758396333 +0000 UTC m=+0.326208804 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 05 08:50:00 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:50:01 np0005546420.localdomain systemd[1]: tmp-crun.y4D8gr.mount: Deactivated successfully.
Dec 05 08:50:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:50:12 np0005546420.localdomain systemd[1]: tmp-crun.Nh7dN2.mount: Deactivated successfully.
Dec 05 08:50:12 np0005546420.localdomain podman[92773]: 2025-12-05 08:50:12.504722459 +0000 UTC m=+0.084627123 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git)
Dec 05 08:50:12 np0005546420.localdomain podman[92773]: 2025-12-05 08:50:12.695798877 +0000 UTC m=+0.275703531 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.openshift.expose-services=, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64)
Dec 05 08:50:13 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:50:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:50:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:50:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:50:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:50:23 np0005546420.localdomain podman[92803]: 2025-12-05 08:50:23.51211129 +0000 UTC m=+0.088298006 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com)
Dec 05 08:50:23 np0005546420.localdomain podman[92803]: 2025-12-05 08:50:23.567044521 +0000 UTC m=+0.143231197 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12)
Dec 05 08:50:23 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:50:23 np0005546420.localdomain podman[92802]: 2025-12-05 08:50:23.569076744 +0000 UTC m=+0.146882450 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, vcs-type=git, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:50:23 np0005546420.localdomain podman[92805]: 2025-12-05 08:50:23.628556096 +0000 UTC m=+0.198467327 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:50:23 np0005546420.localdomain podman[92804]: 2025-12-05 08:50:23.679133712 +0000 UTC m=+0.251919662 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:50:23 np0005546420.localdomain podman[92805]: 2025-12-05 08:50:23.688486492 +0000 UTC m=+0.258397753 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:50:23 np0005546420.localdomain podman[92802]: 2025-12-05 08:50:23.698723479 +0000 UTC m=+0.276529165 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=logrotate_crond, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z)
Dec 05 08:50:23 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:50:23 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:50:23 np0005546420.localdomain podman[92804]: 2025-12-05 08:50:23.734544148 +0000 UTC m=+0.307330088 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step5, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:50:23 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:50:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:50:26 np0005546420.localdomain podman[92901]: 2025-12-05 08:50:26.518555042 +0000 UTC m=+0.092752553 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:50:26 np0005546420.localdomain podman[92901]: 2025-12-05 08:50:26.912466142 +0000 UTC m=+0.486663643 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, container_name=nova_migration_target, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:50:26 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:50:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:50:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:50:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:50:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:50:31 np0005546420.localdomain podman[92932]: 2025-12-05 08:50:31.515802992 +0000 UTC m=+0.082409193 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 08:50:31 np0005546420.localdomain podman[92932]: 2025-12-05 08:50:31.555348487 +0000 UTC m=+0.121954768 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044)
Dec 05 08:50:31 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:50:31 np0005546420.localdomain podman[92924]: 2025-12-05 08:50:31.573938793 +0000 UTC m=+0.154455305 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, version=17.1.12)
Dec 05 08:50:31 np0005546420.localdomain podman[92925]: 2025-12-05 08:50:31.619268397 +0000 UTC m=+0.194634500 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044)
Dec 05 08:50:31 np0005546420.localdomain podman[92925]: 2025-12-05 08:50:31.63033441 +0000 UTC m=+0.205700603 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, release=1761123044, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public)
Dec 05 08:50:31 np0005546420.localdomain podman[92924]: 2025-12-05 08:50:31.630651829 +0000 UTC m=+0.211168391 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-ovn-controller-container)
Dec 05 08:50:31 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:50:31 np0005546420.localdomain podman[92926]: 2025-12-05 08:50:31.679536783 +0000 UTC m=+0.251313264 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.component=openstack-iscsid-container, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:50:31 np0005546420.localdomain podman[92926]: 2025-12-05 08:50:31.693623399 +0000 UTC m=+0.265399860 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, container_name=iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, name=rhosp17/openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible)
Dec 05 08:50:31 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Deactivated successfully.
Dec 05 08:50:31 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:50:32 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:50:32 np0005546420.localdomain recover_tripleo_nova_virtqemud[93008]: 62579
Dec 05 08:50:32 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:50:32 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:50:33 np0005546420.localdomain sshd[93009]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:50:34 np0005546420.localdomain sshd[93009]: Received disconnect from 93.157.248.178 port 38944:11: Bye Bye [preauth]
Dec 05 08:50:34 np0005546420.localdomain sshd[93009]: Disconnected from authenticating user root 93.157.248.178 port 38944 [preauth]
Dec 05 08:50:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:50:43 np0005546420.localdomain podman[93011]: 2025-12-05 08:50:43.527326943 +0000 UTC m=+0.094674753 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, distribution-scope=public)
Dec 05 08:50:43 np0005546420.localdomain podman[93011]: 2025-12-05 08:50:43.73450188 +0000 UTC m=+0.301849730 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-18T22:49:46Z, release=1761123044, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:50:43 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:50:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:50:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:50:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:50:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:50:54 np0005546420.localdomain podman[93042]: 2025-12-05 08:50:54.525393535 +0000 UTC m=+0.095279291 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:50:54 np0005546420.localdomain podman[93042]: 2025-12-05 08:50:54.540418191 +0000 UTC m=+0.110303947 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:32Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:50:54 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:50:54 np0005546420.localdomain podman[93050]: 2025-12-05 08:50:54.635023142 +0000 UTC m=+0.193107872 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible)
Dec 05 08:50:54 np0005546420.localdomain podman[93050]: 2025-12-05 08:50:54.677640221 +0000 UTC m=+0.235724961 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-type=git, release=1761123044, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:50:54 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:50:54 np0005546420.localdomain podman[93043]: 2025-12-05 08:50:54.683487832 +0000 UTC m=+0.247612739 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git)
Dec 05 08:50:54 np0005546420.localdomain podman[93043]: 2025-12-05 08:50:54.766596426 +0000 UTC m=+0.330721313 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc.)
Dec 05 08:50:54 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:50:54 np0005546420.localdomain podman[93044]: 2025-12-05 08:50:54.734064588 +0000 UTC m=+0.294303115 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:50:54 np0005546420.localdomain podman[93044]: 2025-12-05 08:50:54.81613889 +0000 UTC m=+0.376377407 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, container_name=nova_compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044)
Dec 05 08:50:54 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:50:55 np0005546420.localdomain systemd[1]: tmp-crun.tOeqSI.mount: Deactivated successfully.
Dec 05 08:50:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:50:57 np0005546420.localdomain systemd[1]: tmp-crun.MhLX5q.mount: Deactivated successfully.
Dec 05 08:50:57 np0005546420.localdomain podman[93139]: 2025-12-05 08:50:57.525279744 +0000 UTC m=+0.101075098 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, release=1761123044)
Dec 05 08:50:57 np0005546420.localdomain podman[93139]: 2025-12-05 08:50:57.89929149 +0000 UTC m=+0.475086864 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:50:57 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:50:58 np0005546420.localdomain sudo[93162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:50:58 np0005546420.localdomain sudo[93162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:50:58 np0005546420.localdomain sudo[93162]: pam_unix(sudo:session): session closed for user root
Dec 05 08:50:58 np0005546420.localdomain sudo[93177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:50:58 np0005546420.localdomain sudo[93177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:50:58 np0005546420.localdomain sshd[93209]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:50:58 np0005546420.localdomain sudo[93177]: pam_unix(sudo:session): session closed for user root
Dec 05 08:50:59 np0005546420.localdomain sudo[93225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:50:59 np0005546420.localdomain sudo[93225]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:50:59 np0005546420.localdomain sudo[93225]: pam_unix(sudo:session): session closed for user root
Dec 05 08:50:59 np0005546420.localdomain sudo[93240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 08:50:59 np0005546420.localdomain sudo[93240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:50:59 np0005546420.localdomain sudo[93240]: pam_unix(sudo:session): session closed for user root
Dec 05 08:50:59 np0005546420.localdomain sshd[93209]: Received disconnect from 195.250.72.168 port 54494:11: Bye Bye [preauth]
Dec 05 08:50:59 np0005546420.localdomain sshd[93209]: Disconnected from authenticating user root 195.250.72.168 port 54494 [preauth]
Dec 05 08:51:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:51:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:51:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:51:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:51:02 np0005546420.localdomain podman[93276]: 2025-12-05 08:51:02.536901081 +0000 UTC m=+0.102229793 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:51:02 np0005546420.localdomain podman[93276]: 2025-12-05 08:51:02.571755333 +0000 UTC m=+0.137084025 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp17/openstack-collectd)
Dec 05 08:51:02 np0005546420.localdomain systemd[1]: tmp-crun.Kf1o4s.mount: Deactivated successfully.
Dec 05 08:51:02 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:51:02 np0005546420.localdomain podman[93275]: 2025-12-05 08:51:02.635851872 +0000 UTC m=+0.200991418 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Dec 05 08:51:02 np0005546420.localdomain podman[93278]: 2025-12-05 08:51:02.691267251 +0000 UTC m=+0.250914067 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 08:51:02 np0005546420.localdomain podman[93277]: 2025-12-05 08:51:02.606221912 +0000 UTC m=+0.168678685 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public)
Dec 05 08:51:02 np0005546420.localdomain podman[93275]: 2025-12-05 08:51:02.711048295 +0000 UTC m=+0.276187611 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:51:02 np0005546420.localdomain podman[93275]: unhealthy
Dec 05 08:51:02 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:51:02 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:51:02 np0005546420.localdomain podman[93277]: 2025-12-05 08:51:02.740579922 +0000 UTC m=+0.303036775 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 05 08:51:02 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:51:02 np0005546420.localdomain podman[93278]: 2025-12-05 08:51:02.79981178 +0000 UTC m=+0.359458606 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1)
Dec 05 08:51:02 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:51:02 np0005546420.localdomain sudo[93365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:51:02 np0005546420.localdomain sudo[93365]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:51:02 np0005546420.localdomain sudo[93365]: pam_unix(sudo:session): session closed for user root
Dec 05 08:51:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:51:14 np0005546420.localdomain podman[93380]: 2025-12-05 08:51:14.5574168 +0000 UTC m=+0.104062720 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1)
Dec 05 08:51:14 np0005546420.localdomain podman[93380]: 2025-12-05 08:51:14.757692765 +0000 UTC m=+0.304338655 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr)
Dec 05 08:51:14 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:51:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:51:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:51:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:51:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:51:25 np0005546420.localdomain systemd[1]: tmp-crun.NNyMsw.mount: Deactivated successfully.
Dec 05 08:51:25 np0005546420.localdomain podman[93412]: 2025-12-05 08:51:25.522556759 +0000 UTC m=+0.097090763 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 05 08:51:25 np0005546420.localdomain podman[93412]: 2025-12-05 08:51:25.552380965 +0000 UTC m=+0.126914969 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, distribution-scope=public, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:51:25 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:51:25 np0005546420.localdomain podman[93409]: 2025-12-05 08:51:25.606747852 +0000 UTC m=+0.185466386 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:51:25 np0005546420.localdomain podman[93409]: 2025-12-05 08:51:25.61826546 +0000 UTC m=+0.196983944 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, name=rhosp17/openstack-cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 05 08:51:25 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:51:25 np0005546420.localdomain podman[93411]: 2025-12-05 08:51:25.706535879 +0000 UTC m=+0.277935606 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:51:25 np0005546420.localdomain podman[93411]: 2025-12-05 08:51:25.756880851 +0000 UTC m=+0.328280578 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, config_id=tripleo_step5, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute)
Dec 05 08:51:25 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:51:25 np0005546420.localdomain podman[93410]: 2025-12-05 08:51:25.769882355 +0000 UTC m=+0.345969768 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible)
Dec 05 08:51:25 np0005546420.localdomain podman[93410]: 2025-12-05 08:51:25.802400533 +0000 UTC m=+0.378487866 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team)
Dec 05 08:51:25 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:51:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:51:28 np0005546420.localdomain podman[93510]: 2025-12-05 08:51:28.513270184 +0000 UTC m=+0.087990921 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1)
Dec 05 08:51:28 np0005546420.localdomain podman[93510]: 2025-12-05 08:51:28.87953163 +0000 UTC m=+0.454252407 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_migration_target)
Dec 05 08:51:28 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:51:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:51:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:51:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:51:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:51:33 np0005546420.localdomain podman[93534]: 2025-12-05 08:51:33.522950341 +0000 UTC m=+0.098626192 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12)
Dec 05 08:51:33 np0005546420.localdomain podman[93537]: 2025-12-05 08:51:33.573145218 +0000 UTC m=+0.140253403 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044)
Dec 05 08:51:33 np0005546420.localdomain podman[93536]: 2025-12-05 08:51:33.623398918 +0000 UTC m=+0.192446433 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 05 08:51:33 np0005546420.localdomain podman[93535]: 2025-12-05 08:51:33.681753739 +0000 UTC m=+0.254812728 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, container_name=collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:51:33 np0005546420.localdomain podman[93537]: 2025-12-05 08:51:33.694160754 +0000 UTC m=+0.261268949 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:51:33 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Deactivated successfully.
Dec 05 08:51:33 np0005546420.localdomain podman[93536]: 2025-12-05 08:51:33.708613323 +0000 UTC m=+0.277660878 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., container_name=iscsid, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.12)
Dec 05 08:51:33 np0005546420.localdomain podman[93535]: 2025-12-05 08:51:33.720509242 +0000 UTC m=+0.293568231 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:51:33 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:51:33 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:51:33 np0005546420.localdomain podman[93534]: 2025-12-05 08:51:33.748401037 +0000 UTC m=+0.324076918 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Dec 05 08:51:33 np0005546420.localdomain podman[93534]: unhealthy
Dec 05 08:51:33 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:51:33 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:51:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:51:45 np0005546420.localdomain podman[93623]: 2025-12-05 08:51:45.520979994 +0000 UTC m=+0.101172801 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, container_name=metrics_qdr, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 05 08:51:45 np0005546420.localdomain podman[93623]: 2025-12-05 08:51:45.769538837 +0000 UTC m=+0.349731644 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:51:45 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:51:56 np0005546420.localdomain sshd[93653]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:51:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:51:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:51:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:51:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:51:56 np0005546420.localdomain podman[93656]: 2025-12-05 08:51:56.68767707 +0000 UTC m=+0.260452114 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, version=17.1.12, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Dec 05 08:51:56 np0005546420.localdomain podman[93656]: 2025-12-05 08:51:56.714287926 +0000 UTC m=+0.287063000 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.12)
Dec 05 08:51:56 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:51:56 np0005546420.localdomain podman[93655]: 2025-12-05 08:51:56.885703115 +0000 UTC m=+0.458649614 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 05 08:51:56 np0005546420.localdomain podman[93655]: 2025-12-05 08:51:56.924411656 +0000 UTC m=+0.497358125 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.41.4)
Dec 05 08:51:56 np0005546420.localdomain podman[93658]: 2025-12-05 08:51:56.9326036 +0000 UTC m=+0.500851333 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, architecture=x86_64)
Dec 05 08:51:56 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:51:56 np0005546420.localdomain podman[93657]: 2025-12-05 08:51:56.542127333 +0000 UTC m=+0.113018168 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, config_id=tripleo_step5, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12)
Dec 05 08:51:56 np0005546420.localdomain podman[93657]: 2025-12-05 08:51:56.97932796 +0000 UTC m=+0.550218835 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:51:56 np0005546420.localdomain podman[93658]: 2025-12-05 08:51:56.988579717 +0000 UTC m=+0.556827460 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team)
Dec 05 08:51:56 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:51:57 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:51:57 np0005546420.localdomain sshd[93653]: Received disconnect from 93.157.248.178 port 59938:11: Bye Bye [preauth]
Dec 05 08:51:57 np0005546420.localdomain sshd[93653]: Disconnected from authenticating user root 93.157.248.178 port 59938 [preauth]
Dec 05 08:51:57 np0005546420.localdomain systemd[1]: tmp-crun.LS4TJL.mount: Deactivated successfully.
Dec 05 08:51:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:51:59 np0005546420.localdomain podman[93752]: 2025-12-05 08:51:59.500381551 +0000 UTC m=+0.080597042 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:51:59 np0005546420.localdomain podman[93752]: 2025-12-05 08:51:59.845583703 +0000 UTC m=+0.425799244 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public)
Dec 05 08:51:59 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:52:03 np0005546420.localdomain sudo[93775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:52:03 np0005546420.localdomain sudo[93775]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:52:03 np0005546420.localdomain sudo[93775]: pam_unix(sudo:session): session closed for user root
Dec 05 08:52:03 np0005546420.localdomain sudo[93790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:52:03 np0005546420.localdomain sudo[93790]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:52:03 np0005546420.localdomain sudo[93790]: pam_unix(sudo:session): session closed for user root
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:52:04 np0005546420.localdomain podman[93839]: 2025-12-05 08:52:04.523380531 +0000 UTC m=+0.095161234 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, container_name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible)
Dec 05 08:52:04 np0005546420.localdomain podman[93839]: 2025-12-05 08:52:04.562237137 +0000 UTC m=+0.134017820 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, distribution-scope=public, version=17.1.12, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: tmp-crun.8Q2spR.mount: Deactivated successfully.
Dec 05 08:52:04 np0005546420.localdomain podman[93838]: 2025-12-05 08:52:04.572183056 +0000 UTC m=+0.145514456 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, distribution-scope=public, version=17.1.12, vcs-type=git, config_id=tripleo_step3, container_name=collectd, tcib_managed=true, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=)
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:52:04 np0005546420.localdomain podman[93838]: 2025-12-05 08:52:04.5855467 +0000 UTC m=+0.158878180 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, vcs-type=git, release=1761123044, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:52:04 np0005546420.localdomain podman[93840]: 2025-12-05 08:52:04.63838139 +0000 UTC m=+0.203300189 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z)
Dec 05 08:52:04 np0005546420.localdomain podman[93840]: 2025-12-05 08:52:04.655430699 +0000 UTC m=+0.220349468 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:52:04 np0005546420.localdomain podman[93840]: unhealthy
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:52:04 np0005546420.localdomain podman[93837]: 2025-12-05 08:52:04.720105546 +0000 UTC m=+0.292691064 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:52:04 np0005546420.localdomain podman[93837]: 2025-12-05 08:52:04.737653561 +0000 UTC m=+0.310239089 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:52:04 np0005546420.localdomain podman[93837]: unhealthy
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:52:04 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
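The four entries above show the failure mode for this healthcheck batch: the `health_status` label payload still reads `healthy`, but the `podman healthcheck run` process itself prints `unhealthy` and exits 1, so systemd records the transient `<id>.service` unit as failed. As a minimal sketch (not part of the captured journal), one way to query a container's current health state from Python by shelling out to podman; the container name `ovn_controller` comes from the log, and the fallback across the `Health`/`Healthcheck` keys is an assumption covering differing podman versions:

```python
import json
import subprocess

def container_health(name: str) -> str:
    """Return the health status podman reports for a container.

    Runs `podman inspect <name>` and reads the health section of the
    State object; depending on the podman version the key is `Health`
    or `Healthcheck`, so both are tried.
    """
    out = subprocess.run(
        ["podman", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

if __name__ == "__main__":
    # ovn_controller is the container whose check exited 1 above.
    print(container_health("ovn_controller"))
```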
Dec 05 08:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3600.1 total, 600.0 interval
                                                          Cumulative writes: 5715 writes, 25K keys, 5715 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5715 writes, 734 syncs, 7.79 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
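Each ceph-osd process dumps a RocksDB `DB Stats` block like the one above every 600 seconds (the `600.0 interval` in the Uptime line). A hedged sketch of how the `Cumulative writes` line could be parsed for trending; the regex is derived only from the format shown in this journal, and the output field names are assumptions:

```python
import re

# Matches the "Cumulative writes" line exactly as it appears in the dump above.
WRITES_RE = re.compile(
    r"Cumulative writes: (\d+) writes, (\S+) keys, (\d+) commit groups, "
    r"([\d.]+) writes per commit group, ingest: ([\d.]+) GB, ([\d.]+) MB/s"
)

def parse_cumulative_writes(line: str) -> dict:
    m = WRITES_RE.search(line)
    if not m:
        raise ValueError(f"not a cumulative-writes line: {line!r}")
    writes, keys, groups, per_group, ingest_gb, rate = m.groups()
    return {
        "writes": int(writes),
        "keys": keys,                # rocksdb prints e.g. "25K"; left as-is
        "commit_groups": int(groups),
        "writes_per_group": float(per_group),
        "ingest_gb": float(ingest_gb),
        "mb_per_s": float(rate),
    }

sample = ("Cumulative writes: 5715 writes, 25K keys, 5715 commit groups, "
          "1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s")
print(parse_cumulative_writes(sample))
```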
Dec 05 08:52:09 np0005546420.localdomain sudo[93914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:52:09 np0005546420.localdomain sudo[93914]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:52:09 np0005546420.localdomain sudo[93914]: pam_unix(sudo:session): session closed for user root
Dec 05 08:52:10 np0005546420.localdomain sshd[93929]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 08:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3600.1 total, 600.0 interval
                                                          Cumulative writes: 4690 writes, 21K keys, 4690 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4690 writes, 584 syncs, 8.03 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 08:52:11 np0005546420.localdomain sshd[93929]: Received disconnect from 195.250.72.168 port 38630:11: Bye Bye [preauth]
Dec 05 08:52:11 np0005546420.localdomain sshd[93929]: Disconnected from authenticating user root 195.250.72.168 port 38630 [preauth]
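The two sshd entries above are a rejected pre-authentication login for `root` from 195.250.72.168, the usual signature of internet-wide SSH scanning. A small sketch for tallying such attempts per user and source address from a saved capture; `journal.log` is a placeholder name, and the pattern is taken from the disconnect line above:

```python
import re
from collections import Counter

PREAUTH_RE = re.compile(
    r"Disconnected from authenticating user (\S+) "
    r"(\d+\.\d+\.\d+\.\d+) port \d+ \[preauth\]"
)

def preauth_attempts(log_path: str) -> Counter:
    """Count rejected pre-auth SSH attempts per (user, source IP)."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if (m := PREAUTH_RE.search(line)):
                hits[(m.group(1), m.group(2))] += 1
    return hits

# "journal.log" stands in for a saved `journalctl` capture of this log.
for (user, ip), n in preauth_attempts("journal.log").most_common():
    print(f"{user}@{ip}: {n} attempt(s)")
```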
Dec 05 08:52:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:52:16 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:52:16 np0005546420.localdomain recover_tripleo_nova_virtqemud[93933]: 62579
Dec 05 08:52:16 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:52:16 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:52:16 np0005546420.localdomain podman[93931]: 2025-12-05 08:52:16.517207414 +0000 UTC m=+0.090021454 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr)
Dec 05 08:52:16 np0005546420.localdomain podman[93931]: 2025-12-05 08:52:16.7465084 +0000 UTC m=+0.319322460 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr, release=1761123044, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 05 08:52:16 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:52:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:52:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:52:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:52:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:52:27 np0005546420.localdomain systemd[1]: tmp-crun.qywN4c.mount: Deactivated successfully.
Dec 05 08:52:27 np0005546420.localdomain podman[93963]: 2025-12-05 08:52:27.526423001 +0000 UTC m=+0.107849338 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:52:27 np0005546420.localdomain podman[93963]: 2025-12-05 08:52:27.538815445 +0000 UTC m=+0.120241742 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:52:27 np0005546420.localdomain systemd[1]: tmp-crun.ygQF6J.mount: Deactivated successfully.
Dec 05 08:52:27 np0005546420.localdomain podman[93965]: 2025-12-05 08:52:27.553858162 +0000 UTC m=+0.124913857 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:52:27 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:52:27 np0005546420.localdomain podman[93964]: 2025-12-05 08:52:27.621591504 +0000 UTC m=+0.196407516 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:12:45Z, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044)
Dec 05 08:52:27 np0005546420.localdomain podman[93965]: 2025-12-05 08:52:27.634764683 +0000 UTC m=+0.205820388 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container)
Dec 05 08:52:27 np0005546420.localdomain podman[93971]: 2025-12-05 08:52:27.645271719 +0000 UTC m=+0.211122872 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:52:27 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:52:27 np0005546420.localdomain podman[93964]: 2025-12-05 08:52:27.653261957 +0000 UTC m=+0.228077999 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, build-date=2025-11-19T00:12:45Z)
Dec 05 08:52:27 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:52:27 np0005546420.localdomain podman[93971]: 2025-12-05 08:52:27.709687958 +0000 UTC m=+0.275539121 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:52:27 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
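Every healthcheck in this capture follows the same lifecycle: systemd starts a transient `/usr/bin/podman healthcheck run <id>` unit, podman emits a `health_status` event and then an `exec_died` event, and the unit ends with either `Deactivated successfully` or, as with `ovn_controller` and `ovn_metadata_agent` at 08:52:04, `Failed with result 'exit-code'`. A minimal sketch, assuming a saved capture named `journal.log` (placeholder), that pairs those lines per container ID using only the line shapes seen in this journal:

```python
import re

# Health events carry the 64-hex container ID followed by its labels,
# where name= is the second field podman prints.
HEALTH_RE = re.compile(r"container health_status ([0-9a-f]{64}) .*?name=([\w-]+)")
# Transient unit outcomes use the same 64-hex ID as the unit name.
RESULT_RE = re.compile(
    r"([0-9a-f]{64})\.service: "
    r"(Deactivated successfully|Failed with result 'exit-code')"
)

def summarize(log_path: str) -> dict:
    """Map container ID -> (container name, last healthcheck unit result)."""
    names, results = {}, {}
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if (m := HEALTH_RE.search(line)):
                names[m.group(1)] = m.group(2)
            elif (m := RESULT_RE.search(line)):
                results[m.group(1)] = m.group(2)
    return {cid: (names.get(cid, "?"), res) for cid, res in results.items()}

for cid, (name, res) in summarize("journal.log").items():
    print(cid[:12], name, "->", res)
```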
Dec 05 08:52:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:52:30 np0005546420.localdomain systemd[1]: tmp-crun.SyQdIc.mount: Deactivated successfully.
Dec 05 08:52:30 np0005546420.localdomain podman[94059]: 2025-12-05 08:52:30.528661333 +0000 UTC m=+0.104366659 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, tcib_managed=true, vcs-type=git, container_name=nova_migration_target, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:52:30 np0005546420.localdomain podman[94059]: 2025-12-05 08:52:30.936662934 +0000 UTC m=+0.512368230 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 08:52:30 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: tmp-crun.RWS97Y.mount: Deactivated successfully.
Dec 05 08:52:35 np0005546420.localdomain podman[94082]: 2025-12-05 08:52:35.526377029 +0000 UTC m=+0.101837981 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:52:35 np0005546420.localdomain podman[94084]: 2025-12-05 08:52:35.568061463 +0000 UTC m=+0.132154542 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044)
Dec 05 08:52:35 np0005546420.localdomain podman[94082]: 2025-12-05 08:52:35.569996923 +0000 UTC m=+0.145457835 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git)
Dec 05 08:52:35 np0005546420.localdomain podman[94082]: unhealthy
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:52:35 np0005546420.localdomain podman[94085]: 2025-12-05 08:52:35.6301808 +0000 UTC m=+0.195264889 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, release=1761123044, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 05 08:52:35 np0005546420.localdomain podman[94083]: 2025-12-05 08:52:35.681041808 +0000 UTC m=+0.250516674 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1)
Dec 05 08:52:35 np0005546420.localdomain podman[94083]: 2025-12-05 08:52:35.69010517 +0000 UTC m=+0.259580086 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z)
Dec 05 08:52:35 np0005546420.localdomain podman[94085]: 2025-12-05 08:52:35.698606714 +0000 UTC m=+0.263690823 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4)
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:52:35 np0005546420.localdomain podman[94085]: unhealthy
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:52:35 np0005546420.localdomain podman[94084]: 2025-12-05 08:52:35.753369723 +0000 UTC m=+0.317462802 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:52:35 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:52:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:52:47 np0005546420.localdomain podman[94158]: 2025-12-05 08:52:47.511241779 +0000 UTC m=+0.087982221 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:52:47 np0005546420.localdomain podman[94158]: 2025-12-05 08:52:47.756726307 +0000 UTC m=+0.333466739 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=)
Dec 05 08:52:47 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:52:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:52:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:52:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:52:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:52:58 np0005546420.localdomain systemd[1]: tmp-crun.3gNOaj.mount: Deactivated successfully.
Dec 05 08:52:58 np0005546420.localdomain podman[94188]: 2025-12-05 08:52:58.488501995 +0000 UTC m=+0.068188227 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 05 08:52:58 np0005546420.localdomain podman[94187]: 2025-12-05 08:52:58.559925081 +0000 UTC m=+0.137813348 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=logrotate_crond, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-cron, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, vcs-type=git)
Dec 05 08:52:58 np0005546420.localdomain podman[94187]: 2025-12-05 08:52:58.56829318 +0000 UTC m=+0.146181487 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, config_id=tripleo_step4)
Dec 05 08:52:58 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:52:58 np0005546420.localdomain podman[94190]: 2025-12-05 08:52:58.531919172 +0000 UTC m=+0.101495061 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:52:58 np0005546420.localdomain podman[94190]: 2025-12-05 08:52:58.612680368 +0000 UTC m=+0.182256217 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:52:58 np0005546420.localdomain podman[94188]: 2025-12-05 08:52:58.624503264 +0000 UTC m=+0.204189456 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true)
Dec 05 08:52:58 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:52:58 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:52:58 np0005546420.localdomain podman[94189]: 2025-12-05 08:52:58.676045754 +0000 UTC m=+0.248364388 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:52:58 np0005546420.localdomain podman[94189]: 2025-12-05 08:52:58.735471288 +0000 UTC m=+0.307789922 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 05 08:52:58 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:53:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:53:01 np0005546420.localdomain systemd[1]: tmp-crun.KdYuiw.mount: Deactivated successfully.
Dec 05 08:53:01 np0005546420.localdomain podman[94285]: 2025-12-05 08:53:01.52081408 +0000 UTC m=+0.097888808 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 05 08:53:01 np0005546420.localdomain podman[94285]: 2025-12-05 08:53:01.874081342 +0000 UTC m=+0.451156120 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, container_name=nova_migration_target)
Dec 05 08:53:01 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
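The entries above show podman's healthcheck lifecycle under systemd: each check runs as a transient unit named after the container ID ("Started /usr/bin/podman healthcheck run <id>"), podman logs a health_status event followed by exec_died when the check process exits, and systemd then reports the unit "Deactivated successfully" on exit 0. A minimal sketch for confirming that schedule on a host like this one, assuming a standard podman-on-systemd setup and using a container ID taken from the log lines above:

    # List the transient timers podman created for container healthchecks
    systemctl list-timers --all | grep -E '[0-9a-f]{64}\.timer'

    # Show the most recent result for one container's check unit
    systemctl status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service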
Dec 05 08:53:02 np0005546420.localdomain sshd[94308]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:53:05 np0005546420.localdomain sshd[94308]: Received disconnect from 180.184.182.87 port 23896:11: Bye Bye [preauth]
Dec 05 08:53:05 np0005546420.localdomain sshd[94308]: Disconnected from authenticating user root 180.184.182.87 port 23896 [preauth]
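These sshd lines record a failed root login from 180.184.182.87 that disconnected before authentication completed ([preauth]); the "ssh-rsa algorithm is disabled" message means the client offered the deprecated SHA-1-based ssh-rsa algorithm, which RHEL 9's default crypto policy rejects. Similar probes from other addresses appear later in this section. A hedged one-liner for tallying such attempts from the journal (the match string is copied from the log above; the awk field index assumes journalctl's default line layout shown here):

    # Count preauth root-login attempts per source IP
    journalctl -t sshd | grep 'Disconnected from authenticating user root' \
      | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn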
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:53:06 np0005546420.localdomain podman[94310]: 2025-12-05 08:53:06.534648995 +0000 UTC m=+0.103567695 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:53:06 np0005546420.localdomain podman[94310]: 2025-12-05 08:53:06.573640875 +0000 UTC m=+0.142559565 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., container_name=ovn_controller, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12)
Dec 05 08:53:06 np0005546420.localdomain podman[94310]: unhealthy
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: tmp-crun.eOJaea.mount: Deactivated successfully.
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:53:06 np0005546420.localdomain podman[94311]: 2025-12-05 08:53:06.584391299 +0000 UTC m=+0.148764788 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64)
Dec 05 08:53:06 np0005546420.localdomain podman[94311]: 2025-12-05 08:53:06.597259237 +0000 UTC m=+0.161632726 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, release=1761123044, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd)
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: tmp-crun.pjEnxf.mount: Deactivated successfully.
Dec 05 08:53:06 np0005546420.localdomain podman[94313]: 2025-12-05 08:53:06.646146175 +0000 UTC m=+0.202499345 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64)
Dec 05 08:53:06 np0005546420.localdomain podman[94313]: 2025-12-05 08:53:06.683655369 +0000 UTC m=+0.240008509 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 05 08:53:06 np0005546420.localdomain podman[94313]: unhealthy
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
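Two checks in this batch fail: ovn_controller and ovn_metadata_agent both log health_status=unhealthy, podman prints "unhealthy" on stdout, and systemd marks each transient unit failed with status=1/FAILURE. Per the config_data embedded in the events, the configured tests are /openstack/healthcheck 6642 for ovn_controller (6642 is the OVN southbound database port) and /openstack/healthcheck for ovn_metadata_agent. A minimal triage sketch, assuming podman CLI access on the host and the container names shown above:

    # Re-run the configured healthcheck and capture its exit code
    podman healthcheck run ovn_controller; echo "exit=$?"

    # Execute the check scripts directly inside the containers to see their output
    podman exec ovn_controller /openstack/healthcheck 6642
    podman exec ovn_metadata_agent /openstack/healthcheck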
Dec 05 08:53:06 np0005546420.localdomain podman[94312]: 2025-12-05 08:53:06.69755903 +0000 UTC m=+0.257520382 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:53:06 np0005546420.localdomain podman[94312]: 2025-12-05 08:53:06.732535736 +0000 UTC m=+0.292497088 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, container_name=iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:53:06 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:53:09 np0005546420.localdomain sudo[94389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:53:09 np0005546420.localdomain sudo[94389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:53:09 np0005546420.localdomain sudo[94389]: pam_unix(sudo:session): session closed for user root
Dec 05 08:53:09 np0005546420.localdomain sudo[94404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:53:09 np0005546420.localdomain sudo[94404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:53:10 np0005546420.localdomain sudo[94404]: pam_unix(sudo:session): session closed for user root
Dec 05 08:53:10 np0005546420.localdomain sudo[94450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:53:10 np0005546420.localdomain sudo[94450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:53:10 np0005546420.localdomain sudo[94450]: pam_unix(sudo:session): session closed for user root
Dec 05 08:53:10 np0005546420.localdomain sudo[94465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- inventory --format=json-pretty --filter-for-batch
Dec 05 08:53:10 np0005546420.localdomain sudo[94465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:53:11 np0005546420.localdomain podman[94521]: 2025-12-05 08:53:11.049874679 +0000 UTC m=+0.085924338 container create 33569b5fc0d84c033bcc7b4bc4450da487aaabb0d06ced626c4dc4607f8bd7c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_proskuriakova, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=1763362218, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_CLEAN=True)
Dec 05 08:53:11 np0005546420.localdomain systemd[1]: Started libpod-conmon-33569b5fc0d84c033bcc7b4bc4450da487aaabb0d06ced626c4dc4607f8bd7c2.scope.
Dec 05 08:53:11 np0005546420.localdomain podman[94521]: 2025-12-05 08:53:11.015390859 +0000 UTC m=+0.051440558 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 08:53:11 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:53:11 np0005546420.localdomain podman[94521]: 2025-12-05 08:53:11.141790281 +0000 UTC m=+0.177839940 container init 33569b5fc0d84c033bcc7b4bc4450da487aaabb0d06ced626c4dc4607f8bd7c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_proskuriakova, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, version=7, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 05 08:53:11 np0005546420.localdomain podman[94521]: 2025-12-05 08:53:11.150697927 +0000 UTC m=+0.186747556 container start 33569b5fc0d84c033bcc7b4bc4450da487aaabb0d06ced626c4dc4607f8bd7c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_proskuriakova, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, version=7, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_CLEAN=True, vcs-type=git, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7)
Dec 05 08:53:11 np0005546420.localdomain podman[94521]: 2025-12-05 08:53:11.150941905 +0000 UTC m=+0.186991574 container attach 33569b5fc0d84c033bcc7b4bc4450da487aaabb0d06ced626c4dc4607f8bd7c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_proskuriakova, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_BRANCH=main, distribution-scope=public, architecture=x86_64)
Dec 05 08:53:11 np0005546420.localdomain determined_proskuriakova[94535]: 167 167
Dec 05 08:53:11 np0005546420.localdomain systemd[1]: libpod-33569b5fc0d84c033bcc7b4bc4450da487aaabb0d06ced626c4dc4607f8bd7c2.scope: Deactivated successfully.
Dec 05 08:53:11 np0005546420.localdomain podman[94521]: 2025-12-05 08:53:11.156950611 +0000 UTC m=+0.193000270 container died 33569b5fc0d84c033bcc7b4bc4450da487aaabb0d06ced626c4dc4607f8bd7c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_proskuriakova, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, ceph=True, build-date=2025-11-26T19:44:28Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=)
Dec 05 08:53:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-95ba196d78d1540984ea1e586f854feb03649f0f41ed85385cede88dc472bd32-merged.mount: Deactivated successfully.
Dec 05 08:53:11 np0005546420.localdomain podman[94540]: 2025-12-05 08:53:11.267775571 +0000 UTC m=+0.096511827 container remove 33569b5fc0d84c033bcc7b4bc4450da487aaabb0d06ced626c4dc4607f8bd7c2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_proskuriakova, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Dec 05 08:53:11 np0005546420.localdomain systemd[1]: libpod-conmon-33569b5fc0d84c033bcc7b4bc4450da487aaabb0d06ced626c4dc4607f8bd7c2.scope: Deactivated successfully.
Dec 05 08:53:11 np0005546420.localdomain podman[94561]: 2025-12-05 08:53:11.512992429 +0000 UTC m=+0.089476537 container create b43397adf0790d74306c569e0b4afb4e1126c8d6788e1309471f5f3321aef7cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_beaver, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph)
Dec 05 08:53:11 np0005546420.localdomain systemd[1]: Started libpod-conmon-b43397adf0790d74306c569e0b4afb4e1126c8d6788e1309471f5f3321aef7cb.scope.
Dec 05 08:53:11 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 08:53:11 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd09c892a18f5efbba39c1a6b9eacb804cf3726fb75cc081cb6c32e80986face/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 08:53:11 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd09c892a18f5efbba39c1a6b9eacb804cf3726fb75cc081cb6c32e80986face/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 08:53:11 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd09c892a18f5efbba39c1a6b9eacb804cf3726fb75cc081cb6c32e80986face/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 08:53:11 np0005546420.localdomain podman[94561]: 2025-12-05 08:53:11.478894181 +0000 UTC m=+0.055378309 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 08:53:11 np0005546420.localdomain podman[94561]: 2025-12-05 08:53:11.580736481 +0000 UTC m=+0.157220549 container init b43397adf0790d74306c569e0b4afb4e1126c8d6788e1309471f5f3321aef7cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_beaver, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, distribution-scope=public, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 08:53:11 np0005546420.localdomain podman[94561]: 2025-12-05 08:53:11.592720494 +0000 UTC m=+0.169204592 container start b43397adf0790d74306c569e0b4afb4e1126c8d6788e1309471f5f3321aef7cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_beaver, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7)
Dec 05 08:53:11 np0005546420.localdomain podman[94561]: 2025-12-05 08:53:11.593060954 +0000 UTC m=+0.169545012 container attach b43397adf0790d74306c569e0b4afb4e1126c8d6788e1309471f5f3321aef7cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_beaver, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=)
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]: [
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:     {
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:         "available": false,
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:         "ceph_device": false,
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:         "lsm_data": {},
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:         "lvs": [],
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:         "path": "/dev/sr0",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:         "rejected_reasons": [
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "Has a FileSystem",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "Insufficient space (<5GB)"
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:         ],
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:         "sys_api": {
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "actuators": null,
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "device_nodes": "sr0",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "human_readable_size": "482.00 KB",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "id_bus": "ata",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "model": "QEMU DVD-ROM",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "nr_requests": "2",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "partitions": {},
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "path": "/dev/sr0",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "removable": "1",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "rev": "2.5+",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "ro": "0",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "rotational": "1",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "sas_address": "",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "sas_device_handle": "",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "scheduler_mode": "mq-deadline",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "sectors": 0,
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "sectorsize": "2048",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "size": 493568.0,
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "support_discard": "0",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "type": "disk",
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:             "vendor": "QEMU"
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:         }
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]:     }
Dec 05 08:53:12 np0005546420.localdomain gifted_beaver[94576]: ]
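The JSON above is cephadm's ceph-volume inventory for this node, emitted by the short-lived rhceph-7-rhel9 container (gifted_beaver): the only device reported is the QEMU DVD-ROM at /dev/sr0, marked "available": false because it already has a filesystem and is under the 5 GB minimum, so this host offers no OSD-eligible disks. A hedged sketch of reproducing and filtering that report, assuming cephadm is installed and reusing the fsid from the sudo command logged above:

    # Re-run the inventory the orchestrator just ran (mirrors the logged command)
    sudo cephadm ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- \
        inventory --format=json-pretty --filter-for-batch

    # Summarize rejection reasons per device with jq
    sudo cephadm ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- \
        inventory --format=json-pretty | jq -r '.[] | "\(.path): \(.rejected_reasons)"'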
Dec 05 08:53:12 np0005546420.localdomain systemd[1]: libpod-b43397adf0790d74306c569e0b4afb4e1126c8d6788e1309471f5f3321aef7cb.scope: Deactivated successfully.
Dec 05 08:53:12 np0005546420.localdomain systemd[1]: libpod-b43397adf0790d74306c569e0b4afb4e1126c8d6788e1309471f5f3321aef7cb.scope: Consumed 1.158s CPU time.
Dec 05 08:53:12 np0005546420.localdomain podman[94561]: 2025-12-05 08:53:12.6937406 +0000 UTC m=+1.270224708 container died b43397adf0790d74306c569e0b4afb4e1126c8d6788e1309471f5f3321aef7cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_beaver, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, release=1763362218, name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, architecture=x86_64)
Dec 05 08:53:12 np0005546420.localdomain systemd[1]: tmp-crun.JxJEgx.mount: Deactivated successfully.
Dec 05 08:53:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cd09c892a18f5efbba39c1a6b9eacb804cf3726fb75cc081cb6c32e80986face-merged.mount: Deactivated successfully.
Dec 05 08:53:12 np0005546420.localdomain podman[96530]: 2025-12-05 08:53:12.808838991 +0000 UTC m=+0.101079867 container remove b43397adf0790d74306c569e0b4afb4e1126c8d6788e1309471f5f3321aef7cb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_beaver, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, version=7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, release=1763362218, io.buildah.version=1.41.4)
Dec 05 08:53:12 np0005546420.localdomain systemd[1]: libpod-conmon-b43397adf0790d74306c569e0b4afb4e1126c8d6788e1309471f5f3321aef7cb.scope: Deactivated successfully.
Dec 05 08:53:12 np0005546420.localdomain sudo[94465]: pam_unix(sudo:session): session closed for user root
Dec 05 08:53:13 np0005546420.localdomain sudo[96545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:53:13 np0005546420.localdomain sudo[96545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:53:13 np0005546420.localdomain sudo[96545]: pam_unix(sudo:session): session closed for user root
Dec 05 08:53:15 np0005546420.localdomain sshd[96560]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:53:16 np0005546420.localdomain sshd[96560]: Received disconnect from 93.157.248.178 port 48298:11: Bye Bye [preauth]
Dec 05 08:53:16 np0005546420.localdomain sshd[96560]: Disconnected from authenticating user root 93.157.248.178 port 48298 [preauth]
Dec 05 08:53:17 np0005546420.localdomain sshd[96562]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:53:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:53:18 np0005546420.localdomain podman[96564]: 2025-12-05 08:53:18.535455935 +0000 UTC m=+0.102888414 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, container_name=metrics_qdr, version=17.1.12, architecture=x86_64, release=1761123044, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:53:18 np0005546420.localdomain podman[96564]: 2025-12-05 08:53:18.733654355 +0000 UTC m=+0.301086834 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, distribution-scope=public)
Dec 05 08:53:18 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:53:18 np0005546420.localdomain sshd[96562]: Received disconnect from 195.250.72.168 port 42020:11: Bye Bye [preauth]
Dec 05 08:53:18 np0005546420.localdomain sshd[96562]: Disconnected from authenticating user root 195.250.72.168 port 42020 [preauth]
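The repeated "Disconnected from authenticating user root ... [preauth]" entries (93.157.248.178 and 195.250.72.168 in this window, each preceded by sshd noting that the legacy ssh-rsa algorithm is disabled under the system crypto policy) fit the pattern of routine Internet-wide SSH scanning against the node's public address. A minimal sketch for quantifying such attempts from an exported copy of this journal; the input path journal.txt is a placeholder, not a file referenced by this log:

    # Tally failed pre-auth SSH disconnects by (user, source IP) from an
    # exported journal, e.g. `journalctl > journal.txt`.
    import re
    from collections import Counter

    PREAUTH = re.compile(
        r"sshd\[\d+\]: Disconnected from authenticating user "
        r"(?P<user>\S+) (?P<ip>[0-9.]+) port \d+ \[preauth\]"
    )

    def tally(path: str) -> Counter:
        hits = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = PREAUTH.search(line)
                if m:
                    hits[(m["user"], m["ip"])] += 1
        return hits

    if __name__ == "__main__":
        for (user, ip), n in tally("journal.txt").most_common():
            print(f"{n:4d}  user={user}  src={ip}")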
Dec 05 08:53:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:53:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:53:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:53:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:53:29 np0005546420.localdomain systemd[1]: tmp-crun.qAldOO.mount: Deactivated successfully.
Dec 05 08:53:29 np0005546420.localdomain podman[96595]: 2025-12-05 08:53:29.50138491 +0000 UTC m=+0.080107596 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 05 08:53:29 np0005546420.localdomain systemd[1]: tmp-crun.HBtVUB.mount: Deactivated successfully.
Dec 05 08:53:29 np0005546420.localdomain podman[96594]: 2025-12-05 08:53:29.539033789 +0000 UTC m=+0.121216662 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, name=rhosp17/openstack-cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, version=17.1.12, build-date=2025-11-18T22:49:32Z, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible)
Dec 05 08:53:29 np0005546420.localdomain podman[96602]: 2025-12-05 08:53:29.551030971 +0000 UTC m=+0.119288622 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=)
Dec 05 08:53:29 np0005546420.localdomain podman[96595]: 2025-12-05 08:53:29.555307524 +0000 UTC m=+0.134030230 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:12:45Z, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:53:29 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:53:29 np0005546420.localdomain podman[96596]: 2025-12-05 08:53:29.59901922 +0000 UTC m=+0.172275556 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, config_id=tripleo_step5, container_name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.)
Dec 05 08:53:29 np0005546420.localdomain podman[96602]: 2025-12-05 08:53:29.610325781 +0000 UTC m=+0.178583472 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:53:29 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:53:29 np0005546420.localdomain podman[96594]: 2025-12-05 08:53:29.631464407 +0000 UTC m=+0.213647260 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Dec 05 08:53:29 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:53:29 np0005546420.localdomain podman[96596]: 2025-12-05 08:53:29.684689599 +0000 UTC m=+0.257945955 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container)
Dec 05 08:53:29 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
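Every healthcheck in this window follows the same lifecycle: systemd starts a transient unit named after the 64-hex container ID, podman emits a health_status event and then an exec_died event for that container, and the unit either deactivates cleanly or (as at 08:53:37 below) fails when the check exits nonzero. A minimal sketch that tallies those unit outcomes per container ID from an exported journal; journal.txt is again a placeholder:

    # Count clean vs. failed completions of the transient healthcheck units,
    # keyed by the container ID systemd uses as the unit name.
    import re
    from collections import Counter, defaultdict

    UNIT = re.compile(
        r"systemd\[1\]: (?P<cid>[0-9a-f]{64})\.service: "
        r"(?P<outcome>Deactivated successfully|Failed with result 'exit-code')"
    )

    def outcomes(path: str):
        table = defaultdict(Counter)
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = UNIT.search(line)
                if m:
                    table[m["cid"]][m["outcome"]] += 1
        return table

    if __name__ == "__main__":
        for cid, counts in outcomes("journal.txt").items():
            ok = counts["Deactivated successfully"]
            failed = counts["Failed with result 'exit-code'"]
            flag = "  <-- investigate" if failed else ""
            print(f"{cid[:12]}  ok={ok}  failed={failed}{flag}")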
Dec 05 08:53:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:53:32 np0005546420.localdomain podman[96691]: 2025-12-05 08:53:32.511825738 +0000 UTC m=+0.090964594 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, container_name=nova_migration_target, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, tcib_managed=true)
Dec 05 08:53:32 np0005546420.localdomain podman[96691]: 2025-12-05 08:53:32.90859937 +0000 UTC m=+0.487738206 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:53:32 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:53:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:53:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:53:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:53:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:53:37 np0005546420.localdomain podman[96716]: 2025-12-05 08:53:37.51398369 +0000 UTC m=+0.091683547 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, version=17.1.12, distribution-scope=public, build-date=2025-11-18T23:34:05Z)
Dec 05 08:53:37 np0005546420.localdomain podman[96717]: 2025-12-05 08:53:37.577285984 +0000 UTC m=+0.150468860 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:53:37 np0005546420.localdomain podman[96716]: 2025-12-05 08:53:37.583060413 +0000 UTC m=+0.160760270 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4)
Dec 05 08:53:37 np0005546420.localdomain podman[96716]: unhealthy
Dec 05 08:53:37 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:53:37 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:53:37 np0005546420.localdomain podman[96717]: 2025-12-05 08:53:37.636640376 +0000 UTC m=+0.209823282 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, build-date=2025-11-18T22:51:28Z, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true)
Dec 05 08:53:37 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:53:37 np0005546420.localdomain podman[96718]: 2025-12-05 08:53:37.728202777 +0000 UTC m=+0.298133042 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-iscsid-container)
Dec 05 08:53:37 np0005546420.localdomain podman[96718]: 2025-12-05 08:53:37.763197683 +0000 UTC m=+0.333127948 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Dec 05 08:53:37 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:53:37 np0005546420.localdomain podman[96719]: 2025-12-05 08:53:37.775983059 +0000 UTC m=+0.344952295 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:53:37 np0005546420.localdomain podman[96719]: 2025-12-05 08:53:37.812299516 +0000 UTC m=+0.381268682 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, vcs-type=git, container_name=ovn_metadata_agent, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true)
Dec 05 08:53:37 np0005546420.localdomain podman[96719]: unhealthy
Dec 05 08:53:37 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:53:37 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
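Both failures above come from the healthcheck itself: podman prints "unhealthy" and exits 1, so systemd records the transient unit as failed. Per the config_data in their events, ovn_controller's check is '/openstack/healthcheck 6642' and ovn_metadata_agent's is '/openstack/healthcheck'. Since the units simply wrap `podman healthcheck run <container>` (see the Started lines above), the same check can be repeated by hand to see whether the condition persists. A sketch, assuming root access to these podman containers on the host:

    # Re-run the same healthchecks the transient units invoke and report the
    # exit code; `podman healthcheck run` exits 0 when healthy, nonzero otherwise.
    import subprocess

    CONTAINERS = ["ovn_controller", "ovn_metadata_agent"]  # unhealthy in this window

    for name in CONTAINERS:
        proc = subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True, text=True,
        )
        verdict = "healthy" if proc.returncode == 0 else "unhealthy"
        print(f"{name}: exit={proc.returncode} ({verdict})")
        if proc.stdout.strip():
            print("  output:", proc.stdout.strip())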
Dec 05 08:53:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:53:49 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:53:49 np0005546420.localdomain recover_tripleo_nova_virtqemud[96803]: 62579
Dec 05 08:53:49 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:53:49 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:53:49 np0005546420.localdomain podman[96796]: 2025-12-05 08:53:49.515331767 +0000 UTC m=+0.089224930 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible)
Dec 05 08:53:49 np0005546420.localdomain podman[96796]: 2025-12-05 08:53:49.710036628 +0000 UTC m=+0.283929791 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:53:49 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:54:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:54:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:54:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:54:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:54:00 np0005546420.localdomain podman[96828]: 2025-12-05 08:54:00.52920074 +0000 UTC m=+0.092153851 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com)
Dec 05 08:54:00 np0005546420.localdomain systemd[1]: tmp-crun.Y7vEvp.mount: Deactivated successfully.
Dec 05 08:54:00 np0005546420.localdomain podman[96827]: 2025-12-05 08:54:00.582636418 +0000 UTC m=+0.147426615 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4)
Dec 05 08:54:00 np0005546420.localdomain podman[96827]: 2025-12-05 08:54:00.631501015 +0000 UTC m=+0.196291242 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, distribution-scope=public, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:54:00 np0005546420.localdomain podman[96831]: 2025-12-05 08:54:00.63940433 +0000 UTC m=+0.197957774 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1)
Dec 05 08:54:00 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:54:00 np0005546420.localdomain podman[96828]: 2025-12-05 08:54:00.686273824 +0000 UTC m=+0.249226935 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:54:00 np0005546420.localdomain podman[96831]: 2025-12-05 08:54:00.701636931 +0000 UTC m=+0.260190375 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:54:00 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:54:00 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:54:00 np0005546420.localdomain podman[96826]: 2025-12-05 08:54:00.688334358 +0000 UTC m=+0.260127773 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, url=https://www.redhat.com)
Dec 05 08:54:00 np0005546420.localdomain podman[96826]: 2025-12-05 08:54:00.768680752 +0000 UTC m=+0.340474167 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:54:00 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:54:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:54:03 np0005546420.localdomain systemd[1]: tmp-crun.hfN4VN.mount: Deactivated successfully.
Dec 05 08:54:03 np0005546420.localdomain podman[96927]: 2025-12-05 08:54:03.521020829 +0000 UTC m=+0.090761487 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=nova_migration_target, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:54:03 np0005546420.localdomain podman[96927]: 2025-12-05 08:54:03.917474112 +0000 UTC m=+0.487214770 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 08:54:03 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:54:08 np0005546420.localdomain podman[96953]: 2025-12-05 08:54:08.544936578 +0000 UTC m=+0.088284071 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, version=17.1.12, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 08:54:08 np0005546420.localdomain podman[96953]: 2025-12-05 08:54:08.555504745 +0000 UTC m=+0.098852198 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, container_name=iscsid, version=17.1.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: tmp-crun.8H7xBE.mount: Deactivated successfully.
Dec 05 08:54:08 np0005546420.localdomain podman[96952]: 2025-12-05 08:54:08.602013989 +0000 UTC m=+0.142714760 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, com.redhat.component=openstack-collectd-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z)
Dec 05 08:54:08 np0005546420.localdomain podman[96951]: 2025-12-05 08:54:08.638264944 +0000 UTC m=+0.183079212 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 05 08:54:08 np0005546420.localdomain podman[96952]: 2025-12-05 08:54:08.643228318 +0000 UTC m=+0.183929099 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:54:08 np0005546420.localdomain podman[96954]: 2025-12-05 08:54:08.71578718 +0000 UTC m=+0.250566997 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, url=https://www.redhat.com, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Dec 05 08:54:08 np0005546420.localdomain podman[96951]: 2025-12-05 08:54:08.728888366 +0000 UTC m=+0.273702644 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:54:08 np0005546420.localdomain podman[96951]: unhealthy
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:54:08 np0005546420.localdomain podman[96954]: 2025-12-05 08:54:08.761581101 +0000 UTC m=+0.296360888 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.expose-services=, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:54:08 np0005546420.localdomain podman[96954]: unhealthy
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:54:08 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:54:13 np0005546420.localdomain sudo[97029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:54:13 np0005546420.localdomain sudo[97029]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:54:13 np0005546420.localdomain sudo[97029]: pam_unix(sudo:session): session closed for user root
Dec 05 08:54:13 np0005546420.localdomain sudo[97044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 08:54:13 np0005546420.localdomain sudo[97044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:54:14 np0005546420.localdomain sudo[97044]: pam_unix(sudo:session): session closed for user root
Dec 05 08:54:14 np0005546420.localdomain sudo[97080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:54:14 np0005546420.localdomain sudo[97080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:54:14 np0005546420.localdomain sudo[97080]: pam_unix(sudo:session): session closed for user root
Dec 05 08:54:14 np0005546420.localdomain sudo[97095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:54:14 np0005546420.localdomain sudo[97095]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:54:14 np0005546420.localdomain sudo[97095]: pam_unix(sudo:session): session closed for user root
Dec 05 08:54:15 np0005546420.localdomain sudo[97141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:54:15 np0005546420.localdomain sudo[97141]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:54:15 np0005546420.localdomain sudo[97141]: pam_unix(sudo:session): session closed for user root
Dec 05 08:54:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:54:20 np0005546420.localdomain podman[97156]: 2025-12-05 08:54:20.521098863 +0000 UTC m=+0.092160781 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible)
Dec 05 08:54:20 np0005546420.localdomain podman[97156]: 2025-12-05 08:54:20.715187406 +0000 UTC m=+0.286249374 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1)
Dec 05 08:54:20 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:54:25 np0005546420.localdomain sshd[97183]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:54:26 np0005546420.localdomain sshd[97183]: Received disconnect from 195.250.72.168 port 40034:11: Bye Bye [preauth]
Dec 05 08:54:26 np0005546420.localdomain sshd[97183]: Disconnected from authenticating user root 195.250.72.168 port 40034 [preauth]
Dec 05 08:54:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:54:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:54:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:54:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:54:31 np0005546420.localdomain podman[97186]: 2025-12-05 08:54:31.526141782 +0000 UTC m=+0.096380722 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 05 08:54:31 np0005546420.localdomain podman[97186]: 2025-12-05 08:54:31.561257742 +0000 UTC m=+0.131496682 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4)
Dec 05 08:54:31 np0005546420.localdomain podman[97187]: 2025-12-05 08:54:31.571336504 +0000 UTC m=+0.136664012 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Dec 05 08:54:31 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:54:31 np0005546420.localdomain podman[97188]: 2025-12-05 08:54:31.633894596 +0000 UTC m=+0.196435278 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:54:31 np0005546420.localdomain podman[97188]: 2025-12-05 08:54:31.6685347 +0000 UTC m=+0.231075422 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, release=1761123044, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:54:31 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:54:31 np0005546420.localdomain podman[97185]: 2025-12-05 08:54:31.683188435 +0000 UTC m=+0.252288710 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, config_id=tripleo_step4, container_name=logrotate_crond, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, release=1761123044)
Dec 05 08:54:31 np0005546420.localdomain podman[97185]: 2025-12-05 08:54:31.696419476 +0000 UTC m=+0.265519751 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 05 08:54:31 np0005546420.localdomain podman[97187]: 2025-12-05 08:54:31.707529211 +0000 UTC m=+0.272856689 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-type=git, version=17.1.12, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:54:31 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:54:31 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:54:32 np0005546420.localdomain sshd[97283]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:54:32 np0005546420.localdomain systemd[1]: tmp-crun.OqAu9h.mount: Deactivated successfully.
Dec 05 08:54:33 np0005546420.localdomain sshd[97283]: Received disconnect from 93.157.248.178 port 60120:11: Bye Bye [preauth]
Dec 05 08:54:33 np0005546420.localdomain sshd[97283]: Disconnected from authenticating user root 93.157.248.178 port 60120 [preauth]
Dec 05 08:54:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:54:34 np0005546420.localdomain podman[97285]: 2025-12-05 08:54:34.511664706 +0000 UTC m=+0.088952611 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:54:34 np0005546420.localdomain podman[97285]: 2025-12-05 08:54:34.883496415 +0000 UTC m=+0.460784360 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:54:34 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:54:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:54:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:54:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:54:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:54:39 np0005546420.localdomain podman[97306]: 2025-12-05 08:54:39.537184904 +0000 UTC m=+0.104463013 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller)
Dec 05 08:54:39 np0005546420.localdomain podman[97309]: 2025-12-05 08:54:39.586233096 +0000 UTC m=+0.147184588 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, tcib_managed=true, version=17.1.12, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent)
Dec 05 08:54:39 np0005546420.localdomain podman[97309]: 2025-12-05 08:54:39.604283296 +0000 UTC m=+0.165234728 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:54:39 np0005546420.localdomain podman[97309]: unhealthy
Dec 05 08:54:39 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:54:39 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:54:39 np0005546420.localdomain podman[97306]: 2025-12-05 08:54:39.657731775 +0000 UTC m=+0.225009864 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, tcib_managed=true, release=1761123044, vcs-type=git, build-date=2025-11-18T23:34:05Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:54:39 np0005546420.localdomain podman[97306]: unhealthy
Dec 05 08:54:39 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:54:39 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:54:39 np0005546420.localdomain podman[97308]: 2025-12-05 08:54:39.674815905 +0000 UTC m=+0.238950116 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.buildah.version=1.41.4, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 08:54:39 np0005546420.localdomain podman[97308]: 2025-12-05 08:54:39.690351557 +0000 UTC m=+0.254485798 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, vcs-type=git)
Dec 05 08:54:39 np0005546420.localdomain podman[97307]: 2025-12-05 08:54:39.505225232 +0000 UTC m=+0.074472411 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=collectd, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, architecture=x86_64)
Dec 05 08:54:39 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:54:39 np0005546420.localdomain podman[97307]: 2025-12-05 08:54:39.737530281 +0000 UTC m=+0.306777490 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step3, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:54:39 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:54:40 np0005546420.localdomain systemd[1]: tmp-crun.XWBNux.mount: Deactivated successfully.
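The collectd sequence above is one complete healthcheck cycle (interleaved with the tail of an earlier cycle for container a13af8b4...): systemd starts a transient <container-id>.service wrapping "/usr/bin/podman healthcheck run", podman logs a "container health_status" event carrying the full image-label set and config_data, the probe process exits ("container exec_died"), and the unit deactivates cleanly. Every healthy cycle below repeats this shape, so the events can be summarized mechanically rather than read by eye. A minimal sketch, assuming the journal has been exported to a plain-text file (the name journal.log is an assumption, not something from this log):

    #!/usr/bin/env python3
    # Summarize podman "container health_status" events from an exported
    # journal dump. The regex matches the event format seen in this log.
    import re
    import sys

    EVENT = re.compile(
        r"container health_status ([0-9a-f]{64}) "
        r"\(image=([^,]+), name=([^,]+), health_status=([a-z]+)"
    )

    def main(path):
        for line in open(path, encoding="utf-8", errors="replace"):
            m = EVENT.search(line)
            if m:
                cid, image, name, status = m.groups()
                print(f"{name:<24} {status:<10} {cid[:12]}  {image}")

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "journal.log")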
Dec 05 08:54:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:54:51 np0005546420.localdomain podman[97384]: 2025-12-05 08:54:51.505852064 +0000 UTC m=+0.085391500 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z)
Dec 05 08:54:51 np0005546420.localdomain podman[97384]: 2025-12-05 08:54:51.725391497 +0000 UTC m=+0.304930893 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044)
Dec 05 08:54:51 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:55:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:55:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:55:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:55:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:55:02 np0005546420.localdomain systemd[1]: tmp-crun.X02zfY.mount: Deactivated successfully.
Dec 05 08:55:02 np0005546420.localdomain podman[97414]: 2025-12-05 08:55:02.561726451 +0000 UTC m=+0.132178182 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64)
Dec 05 08:55:02 np0005546420.localdomain podman[97414]: 2025-12-05 08:55:02.614352235 +0000 UTC m=+0.184804016 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible)
Dec 05 08:55:02 np0005546420.localdomain podman[97415]: 2025-12-05 08:55:02.624406006 +0000 UTC m=+0.191475412 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step5, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:55:02 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:55:02 np0005546420.localdomain podman[97413]: 2025-12-05 08:55:02.537120417 +0000 UTC m=+0.111478160 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1)
Dec 05 08:55:02 np0005546420.localdomain podman[97413]: 2025-12-05 08:55:02.670426235 +0000 UTC m=+0.244783968 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, name=rhosp17/openstack-cron, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 05 08:55:02 np0005546420.localdomain podman[97415]: 2025-12-05 08:55:02.682331184 +0000 UTC m=+0.249400580 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:55:02 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:55:02 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:55:02 np0005546420.localdomain podman[97416]: 2025-12-05 08:55:02.731388716 +0000 UTC m=+0.297682209 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z)
Dec 05 08:55:02 np0005546420.localdomain podman[97416]: 2025-12-05 08:55:02.789348654 +0000 UTC m=+0.355642127 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:55:02 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
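The config_data blob attached to each event is the TripleO-rendered container definition printed as a Python literal (single-quoted strings, True/False), so it parses directly with ast.literal_eval and needs no JSON fix-ups. That is the easiest way to recover fields such as healthcheck.test ("/openstack/healthcheck", with a port argument for some services) or the volume list from the journal. A short sketch over an abridged copy of the ceilometer_agent_compute dict above; the abridgement is for space only, the full blob parses identically:

    import ast

    # Abridged from the ceilometer_agent_compute config_data above.
    raw = ("{'depends_on': ['tripleo_nova_libvirt.target'], "
           "'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
           "'healthcheck': {'test': '/openstack/healthcheck'}, "
           "'net': 'host', 'privileged': False, 'restart': 'always'}")

    cfg = ast.literal_eval(raw)        # parses literals only; no eval()
    print(cfg['healthcheck']['test'])  # -> /openstack/healthcheck
    print(cfg['privileged'])           # -> False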
Dec 05 08:55:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:55:05 np0005546420.localdomain podman[97514]: 2025-12-05 08:55:05.516332886 +0000 UTC m=+0.091053376 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:55:05 np0005546420.localdomain podman[97514]: 2025-12-05 08:55:05.905724289 +0000 UTC m=+0.480444739 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:55:05 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:55:06 np0005546420.localdomain sshd[97538]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:55:10 np0005546420.localdomain podman[97540]: 2025-12-05 08:55:10.514348321 +0000 UTC m=+0.087055583 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible)
Dec 05 08:55:10 np0005546420.localdomain podman[97540]: 2025-12-05 08:55:10.535362863 +0000 UTC m=+0.108070225 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com)
Dec 05 08:55:10 np0005546420.localdomain podman[97541]: 2025-12-05 08:55:10.570986938 +0000 UTC m=+0.138629323 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible)
Dec 05 08:55:10 np0005546420.localdomain podman[97541]: 2025-12-05 08:55:10.580742591 +0000 UTC m=+0.148384976 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, version=17.1.12, container_name=collectd, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd)
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: tmp-crun.H66gAN.mount: Deactivated successfully.
Dec 05 08:55:10 np0005546420.localdomain podman[97542]: 2025-12-05 08:55:10.625170589 +0000 UTC m=+0.189758409 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 05 08:55:10 np0005546420.localdomain podman[97542]: 2025-12-05 08:55:10.636344306 +0000 UTC m=+0.200932166 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:55:10 np0005546420.localdomain podman[97543]: 2025-12-05 08:55:10.727449154 +0000 UTC m=+0.291118945 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 05 08:55:10 np0005546420.localdomain podman[97540]: unhealthy
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:55:10 np0005546420.localdomain podman[97543]: 2025-12-05 08:55:10.771459499 +0000 UTC m=+0.335129240 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=)
Dec 05 08:55:10 np0005546420.localdomain podman[97543]: unhealthy
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:55:10 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:55:11 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:55:11 np0005546420.localdomain recover_tripleo_nova_virtqemud[97619]: 62579
Dec 05 08:55:11 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:55:11 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:55:13 np0005546420.localdomain sshd[97538]: Received disconnect from 180.184.182.87 port 54888:11: Bye Bye [preauth]
Dec 05 08:55:13 np0005546420.localdomain sshd[97538]: Disconnected from authenticating user root 180.184.182.87 port 54888 [preauth]
Dec 05 08:55:15 np0005546420.localdomain sudo[97620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:55:15 np0005546420.localdomain sudo[97620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:55:15 np0005546420.localdomain sudo[97620]: pam_unix(sudo:session): session closed for user root
Dec 05 08:55:16 np0005546420.localdomain sudo[97635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:55:16 np0005546420.localdomain sudo[97635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:55:16 np0005546420.localdomain sudo[97635]: pam_unix(sudo:session): session closed for user root
Dec 05 08:55:17 np0005546420.localdomain sudo[97682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:55:17 np0005546420.localdomain sudo[97682]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:55:17 np0005546420.localdomain sudo[97682]: pam_unix(sudo:session): session closed for user root
Dec 05 08:55:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:55:22 np0005546420.localdomain systemd[1]: tmp-crun.BQf7tE.mount: Deactivated successfully.
Dec 05 08:55:22 np0005546420.localdomain podman[97697]: 2025-12-05 08:55:22.53127052 +0000 UTC m=+0.102242603 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z)
Dec 05 08:55:22 np0005546420.localdomain podman[97697]: 2025-12-05 08:55:22.751507024 +0000 UTC m=+0.322479097 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:55:22 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:55:29 np0005546420.localdomain sshd[97726]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:31 np0005546420.localdomain sshd[97726]: Invalid user admin from 45.135.232.92 port 55766
Dec 05 08:55:31 np0005546420.localdomain sshd[97728]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:31 np0005546420.localdomain sshd[97726]: Connection reset by invalid user admin 45.135.232.92 port 55766 [preauth]
Dec 05 08:55:32 np0005546420.localdomain sshd[97730]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:32 np0005546420.localdomain sshd[97728]: Received disconnect from 195.250.72.168 port 46248:11: Bye Bye [preauth]
Dec 05 08:55:32 np0005546420.localdomain sshd[97728]: Disconnected from authenticating user root 195.250.72.168 port 46248 [preauth]
Dec 05 08:55:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:55:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:55:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:55:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:55:33 np0005546420.localdomain systemd[1]: tmp-crun.02INyL.mount: Deactivated successfully.
Dec 05 08:55:33 np0005546420.localdomain podman[97733]: 2025-12-05 08:55:33.523096238 +0000 UTC m=+0.097188788 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, architecture=x86_64)
Dec 05 08:55:33 np0005546420.localdomain podman[97733]: 2025-12-05 08:55:33.559464137 +0000 UTC m=+0.133556626 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, release=1761123044)
Dec 05 08:55:33 np0005546420.localdomain podman[97732]: 2025-12-05 08:55:33.567795746 +0000 UTC m=+0.143535895 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, container_name=logrotate_crond, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:55:33 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:55:33 np0005546420.localdomain podman[97732]: 2025-12-05 08:55:33.604443793 +0000 UTC m=+0.180183942 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, container_name=logrotate_crond, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 05 08:55:33 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:55:33 np0005546420.localdomain podman[97735]: 2025-12-05 08:55:33.622014148 +0000 UTC m=+0.189697318 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true)
Dec 05 08:55:33 np0005546420.localdomain podman[97734]: 2025-12-05 08:55:33.671489383 +0000 UTC m=+0.241337179 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Dec 05 08:55:33 np0005546420.localdomain podman[97735]: 2025-12-05 08:55:33.68041072 +0000 UTC m=+0.248093940 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:55:33 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:55:33 np0005546420.localdomain podman[97734]: 2025-12-05 08:55:33.728305026 +0000 UTC m=+0.298152802 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:55:33 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:55:33 np0005546420.localdomain sshd[97730]: Connection reset by authenticating user root 45.135.232.92 port 55782 [preauth]
Dec 05 08:55:34 np0005546420.localdomain sshd[97833]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:35 np0005546420.localdomain sshd[97833]: Connection reset by authenticating user root 45.135.232.92 port 54218 [preauth]
Dec 05 08:55:36 np0005546420.localdomain sshd[97835]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:55:36 np0005546420.localdomain podman[97837]: 2025-12-05 08:55:36.500507121 +0000 UTC m=+0.080148508 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4)
Dec 05 08:55:36 np0005546420.localdomain podman[97837]: 2025-12-05 08:55:36.904353883 +0000 UTC m=+0.483995220 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z)
Dec 05 08:55:36 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:55:38 np0005546420.localdomain sshd[97835]: Connection reset by authenticating user root 45.135.232.92 port 54226 [preauth]
Dec 05 08:55:38 np0005546420.localdomain sshd[97860]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:39 np0005546420.localdomain sshd[97860]: Invalid user admin from 45.135.232.92 port 54234
Dec 05 08:55:39 np0005546420.localdomain sshd[97860]: Connection reset by invalid user admin 45.135.232.92 port 54234 [preauth]
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:55:41 np0005546420.localdomain podman[97864]: 2025-12-05 08:55:41.48841667 +0000 UTC m=+0.062214162 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z)
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: tmp-crun.LIDlTG.mount: Deactivated successfully.
Dec 05 08:55:41 np0005546420.localdomain podman[97862]: 2025-12-05 08:55:41.508898935 +0000 UTC m=+0.084245335 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public)
Dec 05 08:55:41 np0005546420.localdomain podman[97869]: 2025-12-05 08:55:41.549146354 +0000 UTC m=+0.112708858 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:55:41 np0005546420.localdomain podman[97869]: 2025-12-05 08:55:41.564378597 +0000 UTC m=+0.127941131 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:55:41 np0005546420.localdomain podman[97869]: unhealthy
Dec 05 08:55:41 np0005546420.localdomain podman[97864]: 2025-12-05 08:55:41.57477813 +0000 UTC m=+0.148575682 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, managed_by=tripleo_ansible)
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:55:41 np0005546420.localdomain podman[97863]: 2025-12-05 08:55:41.607098973 +0000 UTC m=+0.180585515 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git)
Dec 05 08:55:41 np0005546420.localdomain podman[97862]: 2025-12-05 08:55:41.626525035 +0000 UTC m=+0.201871455 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container)
Dec 05 08:55:41 np0005546420.localdomain podman[97862]: unhealthy
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:55:41 np0005546420.localdomain podman[97863]: 2025-12-05 08:55:41.64020371 +0000 UTC m=+0.213690282 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:55:41 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:55:46 np0005546420.localdomain sshd[97939]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:48 np0005546420.localdomain sshd[97939]: Connection reset by authenticating user root 45.140.17.124 port 35026 [preauth]
Dec 05 08:55:49 np0005546420.localdomain sshd[97941]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:51 np0005546420.localdomain sshd[97941]: Connection reset by authenticating user root 45.140.17.124 port 35048 [preauth]
Dec 05 08:55:51 np0005546420.localdomain sshd[97943]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:51 np0005546420.localdomain sshd[97944]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:52 np0005546420.localdomain sshd[97944]: Received disconnect from 93.157.248.178 port 32872:11: Bye Bye [preauth]
Dec 05 08:55:52 np0005546420.localdomain sshd[97944]: Disconnected from authenticating user root 93.157.248.178 port 32872 [preauth]
Dec 05 08:55:53 np0005546420.localdomain sshd[97943]: Connection reset by authenticating user root 45.140.17.124 port 30014 [preauth]
Dec 05 08:55:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:55:53 np0005546420.localdomain podman[97947]: 2025-12-05 08:55:53.313304219 +0000 UTC m=+0.087903638 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, container_name=metrics_qdr, release=1761123044, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:55:53 np0005546420.localdomain sshd[97975]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:53 np0005546420.localdomain podman[97947]: 2025-12-05 08:55:53.527521477 +0000 UTC m=+0.302120826 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 05 08:55:53 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:55:55 np0005546420.localdomain sshd[97975]: Invalid user ubuntu from 45.140.17.124 port 30022
Dec 05 08:55:55 np0005546420.localdomain sshd[97975]: Connection reset by invalid user ubuntu 45.140.17.124 port 30022 [preauth]
Dec 05 08:55:55 np0005546420.localdomain sshd[97977]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:55:58 np0005546420.localdomain sshd[97977]: Connection reset by authenticating user root 45.140.17.124 port 30026 [preauth]
Dec 05 08:56:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:56:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:56:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:56:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:56:04 np0005546420.localdomain podman[97979]: 2025-12-05 08:56:04.521570105 +0000 UTC m=+0.096038041 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container)
Dec 05 08:56:04 np0005546420.localdomain podman[97979]: 2025-12-05 08:56:04.55974543 +0000 UTC m=+0.134213356 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1761123044, version=17.1.12, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:56:04 np0005546420.localdomain podman[97981]: 2025-12-05 08:56:04.571784093 +0000 UTC m=+0.140605653 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, batch=17.1_20251118.1, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team)
Dec 05 08:56:04 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:56:04 np0005546420.localdomain podman[97980]: 2025-12-05 08:56:04.629878436 +0000 UTC m=+0.200202604 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:56:04 np0005546420.localdomain podman[97982]: 2025-12-05 08:56:04.674605434 +0000 UTC m=+0.239969707 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 08:56:04 np0005546420.localdomain podman[97981]: 2025-12-05 08:56:04.682934893 +0000 UTC m=+0.251756443 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:56:04 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:56:04 np0005546420.localdomain podman[97982]: 2025-12-05 08:56:04.706324308 +0000 UTC m=+0.271688551 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible)
Dec 05 08:56:04 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:56:04 np0005546420.localdomain podman[97980]: 2025-12-05 08:56:04.730748177 +0000 UTC m=+0.301072305 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:56:04 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:56:05 np0005546420.localdomain systemd[1]: tmp-crun.F11s69.mount: Deactivated successfully.
Dec 05 08:56:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:56:07 np0005546420.localdomain podman[98077]: 2025-12-05 08:56:07.507191283 +0000 UTC m=+0.085752242 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12)
Dec 05 08:56:07 np0005546420.localdomain podman[98077]: 2025-12-05 08:56:07.872684565 +0000 UTC m=+0.451245514 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:36:58Z, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:56:07 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:56:12 np0005546420.localdomain podman[98101]: 2025-12-05 08:56:12.518273873 +0000 UTC m=+0.090145118 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T22:51:28Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:56:12 np0005546420.localdomain podman[98100]: 2025-12-05 08:56:12.579327628 +0000 UTC m=+0.152862815 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public)
Dec 05 08:56:12 np0005546420.localdomain podman[98100]: 2025-12-05 08:56:12.622520697 +0000 UTC m=+0.196055874 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:56:12 np0005546420.localdomain podman[98100]: unhealthy
Dec 05 08:56:12 np0005546420.localdomain podman[98105]: 2025-12-05 08:56:12.633902281 +0000 UTC m=+0.196349204 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:56:12 np0005546420.localdomain podman[98102]: 2025-12-05 08:56:12.682177308 +0000 UTC m=+0.247347926 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, release=1761123044, version=17.1.12)
Dec 05 08:56:12 np0005546420.localdomain podman[98102]: 2025-12-05 08:56:12.695602935 +0000 UTC m=+0.260773593 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, config_id=tripleo_step3)
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:56:12 np0005546420.localdomain podman[98105]: 2025-12-05 08:56:12.750013013 +0000 UTC m=+0.312459896 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:56:12 np0005546420.localdomain podman[98105]: unhealthy
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:56:12 np0005546420.localdomain podman[98101]: 2025-12-05 08:56:12.80274513 +0000 UTC m=+0.374616425 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, container_name=collectd, tcib_managed=true, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64)
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:56:12 np0005546420.localdomain recover_tripleo_nova_virtqemud[98179]: 62579
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:56:12 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:56:17 np0005546420.localdomain sudo[98180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:56:17 np0005546420.localdomain sudo[98180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:56:17 np0005546420.localdomain sudo[98180]: pam_unix(sudo:session): session closed for user root
Dec 05 08:56:17 np0005546420.localdomain sudo[98195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 08:56:17 np0005546420.localdomain sudo[98195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:56:18 np0005546420.localdomain podman[98285]: 2025-12-05 08:56:18.517842336 +0000 UTC m=+0.104764591 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.41.4, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, name=rhceph)
Dec 05 08:56:18 np0005546420.localdomain podman[98285]: 2025-12-05 08:56:18.622440823 +0000 UTC m=+0.209363088 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, release=1763362218, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, RELEASE=main)
Dec 05 08:56:18 np0005546420.localdomain sudo[98195]: pam_unix(sudo:session): session closed for user root
Dec 05 08:56:19 np0005546420.localdomain sudo[98354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:56:19 np0005546420.localdomain sudo[98354]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:56:19 np0005546420.localdomain sudo[98354]: pam_unix(sudo:session): session closed for user root
Dec 05 08:56:19 np0005546420.localdomain sudo[98369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:56:19 np0005546420.localdomain sudo[98369]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:56:19 np0005546420.localdomain sudo[98369]: pam_unix(sudo:session): session closed for user root
Dec 05 08:56:20 np0005546420.localdomain sudo[98416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:56:20 np0005546420.localdomain sudo[98416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:56:20 np0005546420.localdomain sudo[98416]: pam_unix(sudo:session): session closed for user root
Dec 05 08:56:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:56:24 np0005546420.localdomain podman[98431]: 2025-12-05 08:56:24.545865872 +0000 UTC m=+0.115502675 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container)
Dec 05 08:56:24 np0005546420.localdomain podman[98431]: 2025-12-05 08:56:24.772370641 +0000 UTC m=+0.342007464 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:56:24 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:56:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:56:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:56:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:56:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:56:35 np0005546420.localdomain podman[98462]: 2025-12-05 08:56:35.56279543 +0000 UTC m=+0.133422392 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:56:35 np0005546420.localdomain podman[98461]: 2025-12-05 08:56:35.618404636 +0000 UTC m=+0.193968541 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, container_name=logrotate_crond, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64)
Dec 05 08:56:35 np0005546420.localdomain podman[98461]: 2025-12-05 08:56:35.630282164 +0000 UTC m=+0.205846099 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z)
Dec 05 08:56:35 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:56:35 np0005546420.localdomain podman[98462]: 2025-12-05 08:56:35.644567607 +0000 UTC m=+0.215194629 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:56:35 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:56:35 np0005546420.localdomain podman[98469]: 2025-12-05 08:56:35.675754995 +0000 UTC m=+0.241366801 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git)
Dec 05 08:56:35 np0005546420.localdomain podman[98463]: 2025-12-05 08:56:35.528562338 +0000 UTC m=+0.094052620 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:56:35 np0005546420.localdomain podman[98469]: 2025-12-05 08:56:35.709391299 +0000 UTC m=+0.275003095 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z)
Dec 05 08:56:35 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:56:35 np0005546420.localdomain podman[98463]: 2025-12-05 08:56:35.759908856 +0000 UTC m=+0.325399148 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, architecture=x86_64, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:56:35 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:56:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:56:38 np0005546420.localdomain podman[98558]: 2025-12-05 08:56:38.51401008 +0000 UTC m=+0.090831749 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z)
Dec 05 08:56:38 np0005546420.localdomain podman[98558]: 2025-12-05 08:56:38.878015005 +0000 UTC m=+0.454836644 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=nova_migration_target, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:56:38 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:56:41 np0005546420.localdomain sshd[98581]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:56:42 np0005546420.localdomain sshd[98581]: Received disconnect from 195.250.72.168 port 50064:11: Bye Bye [preauth]
Dec 05 08:56:42 np0005546420.localdomain sshd[98581]: Disconnected from authenticating user root 195.250.72.168 port 50064 [preauth]
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: tmp-crun.JsslVO.mount: Deactivated successfully.
Dec 05 08:56:43 np0005546420.localdomain podman[98586]: 2025-12-05 08:56:43.529377533 +0000 UTC m=+0.096856697 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Dec 05 08:56:43 np0005546420.localdomain podman[98583]: 2025-12-05 08:56:43.568829438 +0000 UTC m=+0.142503474 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, build-date=2025-11-18T23:34:05Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_controller, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:56:43 np0005546420.localdomain podman[98586]: 2025-12-05 08:56:43.570358034 +0000 UTC m=+0.137837229 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:56:43 np0005546420.localdomain podman[98586]: unhealthy
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:56:43 np0005546420.localdomain podman[98584]: 2025-12-05 08:56:43.637100756 +0000 UTC m=+0.207903503 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public)
Dec 05 08:56:43 np0005546420.localdomain podman[98584]: 2025-12-05 08:56:43.678395967 +0000 UTC m=+0.249198714 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, version=17.1.12, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git)
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:56:43 np0005546420.localdomain podman[98583]: 2025-12-05 08:56:43.70395059 +0000 UTC m=+0.277624626 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1)
Dec 05 08:56:43 np0005546420.localdomain podman[98583]: unhealthy
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:56:43 np0005546420.localdomain podman[98585]: 2025-12-05 08:56:43.679622185 +0000 UTC m=+0.246000665 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step3)
Dec 05 08:56:43 np0005546420.localdomain podman[98585]: 2025-12-05 08:56:43.75937514 +0000 UTC m=+0.325753620 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, container_name=iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 08:56:43 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:56:44 np0005546420.localdomain systemd[1]: tmp-crun.mS3d7J.mount: Deactivated successfully.
Dec 05 08:56:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:56:55 np0005546420.localdomain podman[98658]: 2025-12-05 08:56:55.522019048 +0000 UTC m=+0.099017774 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com)
Dec 05 08:56:55 np0005546420.localdomain podman[98658]: 2025-12-05 08:56:55.718461284 +0000 UTC m=+0.295460060 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:56:55 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:57:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:57:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:57:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:57:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:57:06 np0005546420.localdomain podman[98693]: 2025-12-05 08:57:06.50458703 +0000 UTC m=+0.073022516 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:57:06 np0005546420.localdomain podman[98687]: 2025-12-05 08:57:06.560872017 +0000 UTC m=+0.139171260 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:57:06 np0005546420.localdomain podman[98687]: 2025-12-05 08:57:06.573184599 +0000 UTC m=+0.151483922 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044)
Dec 05 08:57:06 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:57:06 np0005546420.localdomain podman[98693]: 2025-12-05 08:57:06.611645902 +0000 UTC m=+0.180081388 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, container_name=ceilometer_agent_compute, architecture=x86_64, tcib_managed=true, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 08:57:06 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:57:06 np0005546420.localdomain podman[98688]: 2025-12-05 08:57:06.617189634 +0000 UTC m=+0.191668808 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z)
Dec 05 08:57:06 np0005546420.localdomain podman[98689]: 2025-12-05 08:57:06.671304603 +0000 UTC m=+0.241143964 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, container_name=nova_compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:57:06 np0005546420.localdomain podman[98688]: 2025-12-05 08:57:06.702344577 +0000 UTC m=+0.276823701 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4)
Dec 05 08:57:06 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:57:06 np0005546420.localdomain podman[98689]: 2025-12-05 08:57:06.754394522 +0000 UTC m=+0.324233823 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, version=17.1.12, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:57:06 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:57:07 np0005546420.localdomain systemd[1]: tmp-crun.2MjrSt.mount: Deactivated successfully.
Dec 05 08:57:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:57:09 np0005546420.localdomain systemd[1]: tmp-crun.J2smS1.mount: Deactivated successfully.
Dec 05 08:57:09 np0005546420.localdomain podman[98788]: 2025-12-05 08:57:09.513271633 +0000 UTC m=+0.089775626 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, container_name=nova_migration_target, version=17.1.12, managed_by=tripleo_ansible)
Dec 05 08:57:09 np0005546420.localdomain podman[98788]: 2025-12-05 08:57:09.955202907 +0000 UTC m=+0.531706950 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=nova_migration_target, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, distribution-scope=public, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:57:09 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:57:11 np0005546420.localdomain sshd[98812]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:57:12 np0005546420.localdomain sshd[98812]: Received disconnect from 93.157.248.178 port 39762:11: Bye Bye [preauth]
Dec 05 08:57:12 np0005546420.localdomain sshd[98812]: Disconnected from authenticating user root 93.157.248.178 port 39762 [preauth]
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: tmp-crun.J0eeCT.mount: Deactivated successfully.
Dec 05 08:57:14 np0005546420.localdomain podman[98814]: 2025-12-05 08:57:14.525647292 +0000 UTC m=+0.100244792 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12)
Dec 05 08:57:14 np0005546420.localdomain podman[98815]: 2025-12-05 08:57:14.566977775 +0000 UTC m=+0.140523821 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-collectd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, container_name=collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 05 08:57:14 np0005546420.localdomain podman[98815]: 2025-12-05 08:57:14.580479124 +0000 UTC m=+0.154025110 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, container_name=collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, architecture=x86_64)
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:57:14 np0005546420.localdomain podman[98814]: 2025-12-05 08:57:14.592547588 +0000 UTC m=+0.167145098 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, container_name=ovn_controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container)
Dec 05 08:57:14 np0005546420.localdomain podman[98814]: unhealthy
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:57:14 np0005546420.localdomain podman[98817]: 2025-12-05 08:57:14.673423168 +0000 UTC m=+0.239530444 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 08:57:14 np0005546420.localdomain podman[98817]: 2025-12-05 08:57:14.721456709 +0000 UTC m=+0.287564015 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:57:14 np0005546420.localdomain podman[98817]: unhealthy
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
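[Editor's note] The entries above repeat a fixed pattern: podman logs a container `health_status` event (here with `health_status=unhealthy`), then an `exec_died` event, prints a bare `unhealthy` line, and the transient per-container systemd unit exits with status=1/FAILURE. A minimal sketch for pulling those unhealthy events out of a journal dump like this one is below; the regex and field handling are illustrative assumptions of mine, not part of any podman or TripleO tooling.

```python
#!/usr/bin/env python3
"""Sketch: extract unhealthy podman health_status events from a
journalctl text dump (read from stdin). Illustrative only."""
import re
import sys

# Matches e.g. "container health_status <64-hex-id> (image=..., name=ovn_controller, ... health_status=unhealthy, ...)"
EVENT = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(.*?name=(?P<name>[\w.-]+),.*?health_status=(?P<status>\w+)"
)

for line in sys.stdin:
    m = EVENT.search(line)
    if m and m.group("status") == "unhealthy":
        # The first 12 hex chars match the short ID systemd uses in the unit name.
        print(f"{m.group('name')} ({m.group('cid')[:12]}) reported unhealthy")
```

Fed the section above (e.g. `journalctl --no-pager | python3 scan.py`), this would flag ovn_controller and ovn_metadata_agent, matching the status=1/FAILURE unit results that follow each event.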
Dec 05 08:57:14 np0005546420.localdomain podman[98816]: 2025-12-05 08:57:14.743315067 +0000 UTC m=+0.311470346 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, distribution-scope=public, container_name=iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:57:14 np0005546420.localdomain podman[98816]: 2025-12-05 08:57:14.778413606 +0000 UTC m=+0.346568915 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container)
Dec 05 08:57:14 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:57:20 np0005546420.localdomain sudo[98891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:57:20 np0005546420.localdomain sudo[98891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:57:20 np0005546420.localdomain sudo[98891]: pam_unix(sudo:session): session closed for user root
Dec 05 08:57:20 np0005546420.localdomain sudo[98906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:57:20 np0005546420.localdomain sudo[98906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:57:21 np0005546420.localdomain sudo[98906]: pam_unix(sudo:session): session closed for user root
Dec 05 08:57:21 np0005546420.localdomain sudo[98952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:57:21 np0005546420.localdomain sudo[98952]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:57:21 np0005546420.localdomain sudo[98952]: pam_unix(sudo:session): session closed for user root
Dec 05 08:57:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:57:26 np0005546420.localdomain podman[98967]: 2025-12-05 08:57:26.523166219 +0000 UTC m=+0.097619581 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Dec 05 08:57:26 np0005546420.localdomain podman[98967]: 2025-12-05 08:57:26.727765718 +0000 UTC m=+0.302219050 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, vendor=Red Hat, Inc., container_name=metrics_qdr, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 05 08:57:26 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:57:37 np0005546420.localdomain recover_tripleo_nova_virtqemud[99021]: 62579
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
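[Editor's note] The tripleo_nova_virtqemud_recover unit above logs a bare number (62579) before deactivating cleanly, which reads like a PID liveness check. The actual recover script ships with tripleo-ansible and is not shown in this log; the sketch below is only a stand-in showing the usual signal-0 idiom such a checker could use.

```python
#!/usr/bin/env python3
"""Illustrative stand-in for a PID liveness check; not the real
tripleo_nova_virtqemud recover script."""
import os
import sys

def pid_alive(pid: int) -> bool:
    try:
        os.kill(pid, 0)      # signal 0: existence/permission check, sends nothing
    except ProcessLookupError:
        return False
    except PermissionError:
        return True          # process exists but belongs to another user
    return True

if __name__ == "__main__":
    pid = int(sys.argv[1])   # e.g. 62579 from the log line above
    print(f"pid {pid} alive: {pid_alive(pid)}")
```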
Dec 05 08:57:37 np0005546420.localdomain podman[98995]: 2025-12-05 08:57:37.52408248 +0000 UTC m=+0.102033588 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64)
Dec 05 08:57:37 np0005546420.localdomain podman[98995]: 2025-12-05 08:57:37.537180607 +0000 UTC m=+0.115131745 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: tmp-crun.MfaTsU.mount: Deactivated successfully.
Dec 05 08:57:37 np0005546420.localdomain podman[98998]: 2025-12-05 08:57:37.629895754 +0000 UTC m=+0.200534865 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:11:48Z)
Dec 05 08:57:37 np0005546420.localdomain podman[98998]: 2025-12-05 08:57:37.665598531 +0000 UTC m=+0.236237642 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=)
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:57:37 np0005546420.localdomain podman[98996]: 2025-12-05 08:57:37.729022059 +0000 UTC m=+0.306992938 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:57:37 np0005546420.localdomain podman[98996]: 2025-12-05 08:57:37.765481711 +0000 UTC m=+0.343452620 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:57:37 np0005546420.localdomain podman[98997]: 2025-12-05 08:57:37.679020427 +0000 UTC m=+0.250247226 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true)
Dec 05 08:57:37 np0005546420.localdomain podman[98997]: 2025-12-05 08:57:37.809500397 +0000 UTC m=+0.380727266 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step5, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:57:37 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:57:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:57:40 np0005546420.localdomain systemd[1]: tmp-crun.GmHQi8.mount: Deactivated successfully.
Dec 05 08:57:40 np0005546420.localdomain podman[99093]: 2025-12-05 08:57:40.515836927 +0000 UTC m=+0.090775498 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute)
Dec 05 08:57:40 np0005546420.localdomain podman[99093]: 2025-12-05 08:57:40.893589619 +0000 UTC m=+0.468528220 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:57:40 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
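[Editor's note] Each "Started /usr/bin/podman healthcheck run <id>" line above is systemd invoking podman's own healthcheck runner; exit 0 maps to "Deactivated successfully" and a non-zero exit to the status=1/FAILURE results seen earlier. The same command can be replayed by hand. The sketch below drives it from Python; the container names are taken from this log, and running it requires root on the host since these are root containers.

```python
#!/usr/bin/env python3
"""Sketch: re-run the healthchecks systemd triggers above
(`podman healthcheck run <container>`) and report the exit codes."""
import subprocess

CONTAINERS = ["ovn_controller", "ovn_metadata_agent", "iscsid", "nova_compute"]

for name in CONTAINERS:
    # Exit 0 -> healthy; non-zero -> the transient unit logs status=1/FAILURE.
    rc = subprocess.run(["podman", "healthcheck", "run", name],
                        capture_output=True, text=True).returncode
    print(f"{name}: {'healthy' if rc == 0 else f'unhealthy (rc={rc})'}")
```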
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: tmp-crun.40PUIi.mount: Deactivated successfully.
Dec 05 08:57:45 np0005546420.localdomain podman[99120]: 2025-12-05 08:57:45.535285977 +0000 UTC m=+0.089471438 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc.)
Dec 05 08:57:45 np0005546420.localdomain podman[99118]: 2025-12-05 08:57:45.571739518 +0000 UTC m=+0.138902522 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true)
Dec 05 08:57:45 np0005546420.localdomain podman[99117]: 2025-12-05 08:57:45.637677864 +0000 UTC m=+0.202266308 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc.)
Dec 05 08:57:45 np0005546420.localdomain podman[99119]: 2025-12-05 08:57:45.684318821 +0000 UTC m=+0.241405972 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vendor=Red Hat, Inc., container_name=iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 05 08:57:45 np0005546420.localdomain podman[99119]: 2025-12-05 08:57:45.692818055 +0000 UTC m=+0.249905186 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid)
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:57:45 np0005546420.localdomain podman[99120]: 2025-12-05 08:57:45.704169287 +0000 UTC m=+0.258354788 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:57:45 np0005546420.localdomain podman[99120]: unhealthy
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:57:45 np0005546420.localdomain podman[99118]: 2025-12-05 08:57:45.758053949 +0000 UTC m=+0.325216943 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, version=17.1.12, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:57:45 np0005546420.localdomain podman[99117]: 2025-12-05 08:57:45.808772913 +0000 UTC m=+0.373361397 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller)
Dec 05 08:57:45 np0005546420.localdomain podman[99117]: unhealthy
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:57:45 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:57:50 np0005546420.localdomain sshd[99196]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:57:51 np0005546420.localdomain sshd[99196]: Received disconnect from 195.250.72.168 port 59166:11: Bye Bye [preauth]
Dec 05 08:57:51 np0005546420.localdomain sshd[99196]: Disconnected from authenticating user root 195.250.72.168 port 59166 [preauth]
Dec 05 08:57:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:57:57 np0005546420.localdomain podman[99198]: 2025-12-05 08:57:57.512883034 +0000 UTC m=+0.087076823 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:57:57 np0005546420.localdomain podman[99198]: 2025-12-05 08:57:57.710715774 +0000 UTC m=+0.284909503 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, config_id=tripleo_step1, io.buildah.version=1.41.4)
Dec 05 08:57:57 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:58:06 np0005546420.localdomain sshd[99229]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:58:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:58:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:58:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:58:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:58:08 np0005546420.localdomain podman[99231]: 2025-12-05 08:58:08.51842426 +0000 UTC m=+0.093656248 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, version=17.1.12, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container)
Dec 05 08:58:08 np0005546420.localdomain systemd[1]: tmp-crun.yW0zs7.mount: Deactivated successfully.
Dec 05 08:58:08 np0005546420.localdomain podman[99239]: 2025-12-05 08:58:08.585901163 +0000 UTC m=+0.148378365 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, release=1761123044, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc.)
Dec 05 08:58:08 np0005546420.localdomain podman[99233]: 2025-12-05 08:58:08.618066101 +0000 UTC m=+0.185620771 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z)
Dec 05 08:58:08 np0005546420.localdomain podman[99239]: 2025-12-05 08:58:08.671944274 +0000 UTC m=+0.234421446 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=tripleo_step4, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:58:08 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:58:08 np0005546420.localdomain podman[99232]: 2025-12-05 08:58:08.683721179 +0000 UTC m=+0.255576322 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:58:08 np0005546420.localdomain podman[99231]: 2025-12-05 08:58:08.704151803 +0000 UTC m=+0.279383771 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, tcib_managed=true, build-date=2025-11-18T22:49:32Z, version=17.1.12, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, managed_by=tripleo_ansible, container_name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 05 08:58:08 np0005546420.localdomain podman[99232]: 2025-12-05 08:58:08.711546862 +0000 UTC m=+0.283402045 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible)
Dec 05 08:58:08 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:58:08 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:58:08 np0005546420.localdomain podman[99233]: 2025-12-05 08:58:08.757757226 +0000 UTC m=+0.325311876 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_id=tripleo_step5, io.openshift.expose-services=, container_name=nova_compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:58:08 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:58:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:58:11 np0005546420.localdomain systemd[1]: tmp-crun.HkNjmP.mount: Deactivated successfully.
Dec 05 08:58:11 np0005546420.localdomain podman[99330]: 2025-12-05 08:58:11.505229143 +0000 UTC m=+0.085690740 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:58:11 np0005546420.localdomain podman[99330]: 2025-12-05 08:58:11.842346724 +0000 UTC m=+0.422808291 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 08:58:11 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: tmp-crun.5MpPrR.mount: Deactivated successfully.
Dec 05 08:58:16 np0005546420.localdomain podman[99354]: 2025-12-05 08:58:16.570932538 +0000 UTC m=+0.143700420 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, name=rhosp17/openstack-collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:58:16 np0005546420.localdomain podman[99354]: 2025-12-05 08:58:16.581511866 +0000 UTC m=+0.154279678 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, container_name=collectd, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:58:16 np0005546420.localdomain podman[99355]: 2025-12-05 08:58:16.640575489 +0000 UTC m=+0.209778631 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, container_name=iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid)
Dec 05 08:58:16 np0005546420.localdomain podman[99353]: 2025-12-05 08:58:16.540693319 +0000 UTC m=+0.116432843 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:58:16 np0005546420.localdomain podman[99353]: 2025-12-05 08:58:16.675444212 +0000 UTC m=+0.251183786 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 05 08:58:16 np0005546420.localdomain podman[99353]: unhealthy
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
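
The five entries above show the full shape of one failed podman healthcheck under systemd: podman emits a health_status event with health_status=unhealthy when the container's configured test (here '/openstack/healthcheck 6642' inside ovn_controller) exits nonzero, then an exec_died event as the exec session ends, prints "unhealthy", and the transient per-container unit that systemd launched for the check exits 1, which systemd records as status=1/FAILURE. A minimal sketch for tallying these events from a saved excerpt, assuming the journal text has been written to a local file named journal.txt (a hypothetical path, not taken from the log):

    import re
    from collections import Counter

    # Matches podman container health_status events like the ones above:
    #   container health_status <id> (image=..., name=<ctr>, health_status=<state>, ...)
    EVENT = re.compile(
        r"container health_status \w+ "
        r"\(image=[^,]+, name=([\w.-]+), health_status=(\w+)")

    counts = Counter()
    with open("journal.txt") as fh:   # hypothetical saved excerpt
        for line in fh:
            m = EVENT.search(line)
            if m:
                counts[m.groups()] += 1

    for (container, status), n in sorted(counts.items()):
        print(f"{container}: {status} x{n}")
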
Dec 05 08:58:16 np0005546420.localdomain podman[99356]: 2025-12-05 08:58:16.687275908 +0000 UTC m=+0.250670520 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 05 08:58:16 np0005546420.localdomain podman[99356]: 2025-12-05 08:58:16.707799235 +0000 UTC m=+0.271193837 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z)
Dec 05 08:58:16 np0005546420.localdomain podman[99356]: unhealthy
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
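
The same unhealthy / exec_died / status=1/FAILURE chain repeats here for ovn_metadata_agent. The check that systemd keeps launching ('/usr/bin/podman healthcheck run <id>') can also be invoked by hand to reproduce the result; a sketch, assuming the container name shown in the events above and that podman is on PATH:

    import subprocess

    # Runs the container's own configured healthcheck, i.e. the same thing
    # the transient systemd unit invokes; exit status 0 means healthy,
    # nonzero means the test failed (podman also prints "unhealthy").
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True)
    print(result.returncode, (result.stdout or result.stderr).strip())
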
Dec 05 08:58:16 np0005546420.localdomain podman[99355]: 2025-12-05 08:58:16.775431874 +0000 UTC m=+0.344634976 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, container_name=iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, tcib_managed=true)
Dec 05 08:58:16 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:58:18 np0005546420.localdomain sshd[99229]: ssh_dispatch_run_fatal: Connection from 180.184.182.87 port 65374: Connection timed out [preauth]
Dec 05 08:58:21 np0005546420.localdomain sudo[99432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:58:21 np0005546420.localdomain sudo[99432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:58:21 np0005546420.localdomain sudo[99432]: pam_unix(sudo:session): session closed for user root
Dec 05 08:58:22 np0005546420.localdomain sudo[99447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:58:22 np0005546420.localdomain sudo[99447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:58:22 np0005546420.localdomain sudo[99447]: pam_unix(sudo:session): session closed for user root
Dec 05 08:58:23 np0005546420.localdomain sudo[99493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:58:23 np0005546420.localdomain sudo[99493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:58:23 np0005546420.localdomain sudo[99493]: pam_unix(sudo:session): session closed for user root
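
The three sudo sessions above are a cephadm management pass: the ceph-admin account escalates to root to locate python3, run the copied cephadm binary with gather-facts under an 895-second timeout, and list /etc/sysctl.d. To audit what such an automation account executed, the COMMAND= fields can be pulled out of the excerpt; a sketch, again assuming the hypothetical journal.txt:

    import re

    # Captures the sudo invoker and the command it ran; the
    # pam_unix session open/close lines carry no COMMAND= and are skipped.
    SUDO = re.compile(r"sudo\[\d+\]: (\S+) : .*COMMAND=(.+)$")

    with open("journal.txt") as fh:   # hypothetical saved excerpt
        for line in fh:
            m = SUDO.search(line)
            if m:
                print(f"{m.group(1)} ran: {m.group(2)}")
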
Dec 05 08:58:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:58:28 np0005546420.localdomain systemd[1]: tmp-crun.L114ac.mount: Deactivated successfully.
Dec 05 08:58:28 np0005546420.localdomain podman[99508]: 2025-12-05 08:58:28.518274507 +0000 UTC m=+0.095603238 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z)
Dec 05 08:58:28 np0005546420.localdomain podman[99508]: 2025-12-05 08:58:28.690764449 +0000 UTC m=+0.268093170 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Dec 05 08:58:28 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:58:31 np0005546420.localdomain sshd[99536]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:58:31 np0005546420.localdomain sshd[99536]: Received disconnect from 93.157.248.178 port 52164:11: Bye Bye [preauth]
Dec 05 08:58:31 np0005546420.localdomain sshd[99536]: Disconnected from authenticating user root 93.157.248.178 port 52164 [preauth]
Dec 05 08:58:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:58:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:58:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:58:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:58:39 np0005546420.localdomain systemd[1]: tmp-crun.FFAK2l.mount: Deactivated successfully.
Dec 05 08:58:39 np0005546420.localdomain podman[99542]: 2025-12-05 08:58:39.57801382 +0000 UTC m=+0.142984208 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:11:48Z, version=17.1.12)
Dec 05 08:58:39 np0005546420.localdomain podman[99539]: 2025-12-05 08:58:39.537752831 +0000 UTC m=+0.111800710 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=logrotate_crond, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:58:39 np0005546420.localdomain podman[99542]: 2025-12-05 08:58:39.613247483 +0000 UTC m=+0.178217891 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:58:39 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:58:39 np0005546420.localdomain podman[99540]: 2025-12-05 08:58:39.627907939 +0000 UTC m=+0.198838131 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 05 08:58:39 np0005546420.localdomain podman[99539]: 2025-12-05 08:58:39.66825233 +0000 UTC m=+0.242300269 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, url=https://www.redhat.com)
Dec 05 08:58:39 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:58:39 np0005546420.localdomain podman[99541]: 2025-12-05 08:58:39.680016395 +0000 UTC m=+0.247276864 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 05 08:58:39 np0005546420.localdomain podman[99540]: 2025-12-05 08:58:39.689562732 +0000 UTC m=+0.260492964 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, release=1761123044, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 08:58:39 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:58:39 np0005546420.localdomain podman[99541]: 2025-12-05 08:58:39.715450755 +0000 UTC m=+0.282711224 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, container_name=nova_compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 05 08:58:39 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
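
In contrast to ovn_controller and ovn_metadata_agent, this batch (ceilometer_agent_compute, logrotate_crond, ceilometer_agent_ipmi, nova_compute) reports health_status=healthy, so each transient unit exits 0 and systemd logs "Deactivated successfully" instead of a failure. The checks for a given container recur on a fixed schedule, which can be read off two successive events; a sketch using the timestamps of the two ovn_controller health_status events in this excerpt, truncated to microsecond precision for fromisoformat():

    from datetime import datetime

    t1 = datetime.fromisoformat("2025-12-05 08:58:16.540693")
    t2 = datetime.fromisoformat("2025-12-05 08:58:47.577486")
    print((t2 - t1).total_seconds())   # ~31.0 s between checks

The roughly 31-second gap is consistent with podman's default 30-second healthcheck interval plus scheduling jitter.
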
Dec 05 08:58:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:58:42 np0005546420.localdomain podman[99639]: 2025-12-05 08:58:42.504184183 +0000 UTC m=+0.085608637 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git)
Dec 05 08:58:42 np0005546420.localdomain podman[99639]: 2025-12-05 08:58:42.895758674 +0000 UTC m=+0.477183148 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, container_name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Dec 05 08:58:42 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: tmp-crun.JOjlVg.mount: Deactivated successfully.
Dec 05 08:58:47 np0005546420.localdomain podman[99665]: 2025-12-05 08:58:47.569867828 +0000 UTC m=+0.138785568 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1)
Dec 05 08:58:47 np0005546420.localdomain podman[99664]: 2025-12-05 08:58:47.52129468 +0000 UTC m=+0.097011090 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, com.redhat.component=openstack-collectd-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd)
Dec 05 08:58:47 np0005546420.localdomain podman[99663]: 2025-12-05 08:58:47.577486204 +0000 UTC m=+0.153924917 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:58:47 np0005546420.localdomain podman[99663]: 2025-12-05 08:58:47.61761764 +0000 UTC m=+0.194056383 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 05 08:58:47 np0005546420.localdomain podman[99663]: unhealthy
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:58:47 np0005546420.localdomain podman[99667]: 2025-12-05 08:58:47.629156548 +0000 UTC m=+0.194475416 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true)
Dec 05 08:58:47 np0005546420.localdomain podman[99665]: 2025-12-05 08:58:47.650669646 +0000 UTC m=+0.219587346 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.buildah.version=1.41.4, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:58:47 np0005546420.localdomain podman[99667]: 2025-12-05 08:58:47.671480441 +0000 UTC m=+0.236799259 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, release=1761123044, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1)
Dec 05 08:58:47 np0005546420.localdomain podman[99667]: unhealthy
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:58:47 np0005546420.localdomain podman[99664]: 2025-12-05 08:58:47.756480848 +0000 UTC m=+0.332197268 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4)
Dec 05 08:58:47 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:58:48 np0005546420.localdomain systemd[1]: tmp-crun.11Vxay.mount: Deactivated successfully.
Dec 05 08:58:56 np0005546420.localdomain sshd[36374]: Received disconnect from 192.168.122.100 port 38146:11: disconnected by user
Dec 05 08:58:56 np0005546420.localdomain sshd[36374]: Disconnected from user tripleo-admin 192.168.122.100 port 38146
Dec 05 08:58:56 np0005546420.localdomain sshd[36354]: pam_unix(sshd:session): session closed for user tripleo-admin
Dec 05 08:58:56 np0005546420.localdomain systemd[1]: session-28.scope: Deactivated successfully.
Dec 05 08:58:56 np0005546420.localdomain systemd[1]: session-28.scope: Consumed 7min 16.287s CPU time.
Dec 05 08:58:56 np0005546420.localdomain systemd-logind[762]: Session 28 logged out. Waiting for processes to exit.
Dec 05 08:58:56 np0005546420.localdomain systemd-logind[762]: Removed session 28.
Dec 05 08:58:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:58:59 np0005546420.localdomain podman[99741]: 2025-12-05 08:58:59.525489824 +0000 UTC m=+0.096615679 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:58:59 np0005546420.localdomain sshd[99770]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:58:59 np0005546420.localdomain podman[99741]: 2025-12-05 08:58:59.752487387 +0000 UTC m=+0.323613292 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, url=https://www.redhat.com)
Dec 05 08:58:59 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:59:00 np0005546420.localdomain sshd[99770]: Received disconnect from 195.250.72.168 port 47860:11: Bye Bye [preauth]
Dec 05 08:59:00 np0005546420.localdomain sshd[99770]: Disconnected from authenticating user root 195.250.72.168 port 47860 [preauth]
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: Stopping User Manager for UID 1003...
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Activating special unit Exit the Session...
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Removed slice User Background Tasks Slice.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Stopped target Main User Target.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Stopped target Basic System.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Stopped target Paths.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Stopped target Sockets.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Stopped target Timers.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Closed D-Bus User Message Bus Socket.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Stopped Create User's Volatile Files and Directories.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Removed slice User Application Slice.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Reached target Shutdown.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Finished Exit the Session.
Dec 05 08:59:06 np0005546420.localdomain systemd[36358]: Reached target Exit the Session.
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: Stopped User Manager for UID 1003.
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: user@1003.service: Consumed 4.604s CPU time.
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 05 08:59:06 np0005546420.localdomain recover_tripleo_nova_virtqemud[99773]: 62579
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Dec 05 08:59:06 np0005546420.localdomain systemd[1]: user-1003.slice: Consumed 7min 20.920s CPU time.
Dec 05 08:59:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:59:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:59:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:59:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:59:10 np0005546420.localdomain podman[99777]: 2025-12-05 08:59:10.527988145 +0000 UTC m=+0.094216955 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 05 08:59:10 np0005546420.localdomain podman[99777]: 2025-12-05 08:59:10.564348893 +0000 UTC m=+0.130577653 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond)
Dec 05 08:59:10 np0005546420.localdomain podman[99778]: 2025-12-05 08:59:10.578303076 +0000 UTC m=+0.144004449 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com)
Dec 05 08:59:10 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:59:10 np0005546420.localdomain podman[99778]: 2025-12-05 08:59:10.613300111 +0000 UTC m=+0.179001474 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 08:59:10 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:59:10 np0005546420.localdomain podman[99780]: 2025-12-05 08:59:10.633892601 +0000 UTC m=+0.190327667 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute)
Dec 05 08:59:10 np0005546420.localdomain podman[99780]: 2025-12-05 08:59:10.6738363 +0000 UTC m=+0.230271406 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public)
Dec 05 08:59:10 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:59:10 np0005546420.localdomain podman[99779]: 2025-12-05 08:59:10.691505498 +0000 UTC m=+0.254105395 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 05 08:59:10 np0005546420.localdomain podman[99779]: 2025-12-05 08:59:10.726451783 +0000 UTC m=+0.289051680 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, release=1761123044, version=17.1.12, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, tcib_managed=true, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 08:59:10 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:59:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:59:13 np0005546420.localdomain podman[99871]: 2025-12-05 08:59:13.503043095 +0000 UTC m=+0.081400007 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=nova_migration_target, io.openshift.expose-services=)
Dec 05 08:59:13 np0005546420.localdomain podman[99871]: 2025-12-05 08:59:13.871420305 +0000 UTC m=+0.449777227 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 05 08:59:13 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:59:18 np0005546420.localdomain podman[99894]: 2025-12-05 08:59:18.563124444 +0000 UTC m=+0.093573274 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 08:59:18 np0005546420.localdomain podman[99896]: 2025-12-05 08:59:18.613061884 +0000 UTC m=+0.137383254 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:59:18 np0005546420.localdomain podman[99894]: 2025-12-05 08:59:18.625597884 +0000 UTC m=+0.156046714 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:59:18 np0005546420.localdomain podman[99894]: unhealthy
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: tmp-crun.kBKhu2.mount: Deactivated successfully.
Dec 05 08:59:18 np0005546420.localdomain podman[99896]: 2025-12-05 08:59:18.696506664 +0000 UTC m=+0.220828034 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-iscsid-container, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com)
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:59:18 np0005546420.localdomain podman[99895]: 2025-12-05 08:59:18.688119283 +0000 UTC m=+0.216085696 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.openshift.expose-services=, container_name=collectd, com.redhat.component=openstack-collectd-container)
Dec 05 08:59:18 np0005546420.localdomain podman[99895]: 2025-12-05 08:59:18.773347038 +0000 UTC m=+0.301313471 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z)
Dec 05 08:59:18 np0005546420.localdomain podman[99898]: 2025-12-05 08:59:18.77405557 +0000 UTC m=+0.294356976 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:59:18 np0005546420.localdomain podman[99898]: 2025-12-05 08:59:18.854056752 +0000 UTC m=+0.374358208 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64)
Dec 05 08:59:18 np0005546420.localdomain podman[99898]: unhealthy
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:59:18 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:59:23 np0005546420.localdomain sudo[99975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 08:59:23 np0005546420.localdomain sudo[99975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:59:23 np0005546420.localdomain sudo[99975]: pam_unix(sudo:session): session closed for user root
Dec 05 08:59:23 np0005546420.localdomain sudo[99990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 08:59:23 np0005546420.localdomain sudo[99990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:59:24 np0005546420.localdomain sudo[99990]: pam_unix(sudo:session): session closed for user root
Dec 05 08:59:25 np0005546420.localdomain sudo[100037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 08:59:25 np0005546420.localdomain sudo[100037]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 08:59:25 np0005546420.localdomain sudo[100037]: pam_unix(sudo:session): session closed for user root
Dec 05 08:59:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 08:59:30 np0005546420.localdomain podman[100052]: 2025-12-05 08:59:30.512763595 +0000 UTC m=+0.091522090 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=metrics_qdr, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 08:59:30 np0005546420.localdomain podman[100052]: 2025-12-05 08:59:30.719298624 +0000 UTC m=+0.298057129 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=metrics_qdr, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 08:59:30 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 08:59:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 08:59:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 08:59:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 08:59:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 08:59:41 np0005546420.localdomain podman[100083]: 2025-12-05 08:59:41.522942286 +0000 UTC m=+0.093868618 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible)
Dec 05 08:59:41 np0005546420.localdomain podman[100083]: 2025-12-05 08:59:41.555083902 +0000 UTC m=+0.126010244 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git)
Dec 05 08:59:41 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 08:59:41 np0005546420.localdomain podman[100082]: 2025-12-05 08:59:41.572552083 +0000 UTC m=+0.145322772 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 05 08:59:41 np0005546420.localdomain podman[100082]: 2025-12-05 08:59:41.606302058 +0000 UTC m=+0.179072707 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=)
Dec 05 08:59:41 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 08:59:41 np0005546420.localdomain systemd[1]: tmp-crun.8df8mP.mount: Deactivated successfully.
Dec 05 08:59:41 np0005546420.localdomain podman[100084]: 2025-12-05 08:59:41.689217895 +0000 UTC m=+0.257642790 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:59:41 np0005546420.localdomain podman[100084]: 2025-12-05 08:59:41.720832875 +0000 UTC m=+0.289257800 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step5, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 08:59:41 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 08:59:41 np0005546420.localdomain podman[100085]: 2025-12-05 08:59:41.741631589 +0000 UTC m=+0.304004576 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 08:59:41 np0005546420.localdomain podman[100085]: 2025-12-05 08:59:41.770670839 +0000 UTC m=+0.333043786 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, vcs-type=git, container_name=ceilometer_agent_compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true)
Dec 05 08:59:41 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 08:59:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 08:59:44 np0005546420.localdomain podman[100184]: 2025-12-05 08:59:44.506283682 +0000 UTC m=+0.081087813 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:59:44 np0005546420.localdomain podman[100184]: 2025-12-05 08:59:44.87592976 +0000 UTC m=+0.450733831 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, vendor=Red Hat, Inc., container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute)
Dec 05 08:59:44 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 08:59:47 np0005546420.localdomain sshd[100207]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 08:59:48 np0005546420.localdomain sshd[100207]: Received disconnect from 93.157.248.178 port 48260:11: Bye Bye [preauth]
Dec 05 08:59:48 np0005546420.localdomain sshd[100207]: Disconnected from authenticating user root 93.157.248.178 port 48260 [preauth]
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 08:59:49 np0005546420.localdomain podman[100210]: 2025-12-05 08:59:49.498149011 +0000 UTC m=+0.069134283 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step3, container_name=collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container)
Dec 05 08:59:49 np0005546420.localdomain podman[100210]: 2025-12-05 08:59:49.510180503 +0000 UTC m=+0.081165805 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true)
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: tmp-crun.uUJ5wG.mount: Deactivated successfully.
Dec 05 08:59:49 np0005546420.localdomain podman[100212]: 2025-12-05 08:59:49.555033482 +0000 UTC m=+0.122380841 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 05 08:59:49 np0005546420.localdomain podman[100212]: 2025-12-05 08:59:49.567748086 +0000 UTC m=+0.135095545 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 08:59:49 np0005546420.localdomain podman[100212]: unhealthy
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 08:59:49 np0005546420.localdomain podman[100211]: 2025-12-05 08:59:49.662100238 +0000 UTC m=+0.232890294 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, container_name=iscsid, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid)
Dec 05 08:59:49 np0005546420.localdomain podman[100209]: 2025-12-05 08:59:49.620074346 +0000 UTC m=+0.190736828 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 08:59:49 np0005546420.localdomain podman[100211]: 2025-12-05 08:59:49.676465173 +0000 UTC m=+0.247255239 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, container_name=iscsid, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team)
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 08:59:49 np0005546420.localdomain podman[100209]: 2025-12-05 08:59:49.703355726 +0000 UTC m=+0.274018168 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 08:59:49 np0005546420.localdomain podman[100209]: unhealthy
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 08:59:49 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:00:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:00:01 np0005546420.localdomain CROND[100303]: (root) CMD (sleep `expr ${RANDOM} % 90`; /usr/sbin/logrotate -s /var/lib/logrotate/logrotate-crond.status /etc/logrotate-crond.conf 2>&1|logger -t logrotate-crond)
Dec 05 09:00:01 np0005546420.localdomain podman[100291]: 2025-12-05 09:00:01.515244575 +0000 UTC m=+0.086295783 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-18T22:49:46Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, release=1761123044, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=metrics_qdr, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12)
Dec 05 09:00:01 np0005546420.localdomain podman[100291]: 2025-12-05 09:00:01.680821853 +0000 UTC m=+0.251872971 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step1, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 09:00:01 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:00:05 np0005546420.localdomain sshd[100324]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:00:06 np0005546420.localdomain sshd[100324]: Received disconnect from 195.250.72.168 port 55906:11: Bye Bye [preauth]
Dec 05 09:00:06 np0005546420.localdomain sshd[100324]: Disconnected from authenticating user root 195.250.72.168 port 55906 [preauth]
Dec 05 09:00:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:00:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:00:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:00:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:00:12 np0005546420.localdomain podman[100327]: 2025-12-05 09:00:12.74127605 +0000 UTC m=+0.298797195 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, release=1761123044, distribution-scope=public, url=https://www.redhat.com)
Dec 05 09:00:12 np0005546420.localdomain podman[100327]: 2025-12-05 09:00:12.773439066 +0000 UTC m=+0.330960251 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z)
Dec 05 09:00:12 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:00:12 np0005546420.localdomain podman[100329]: 2025-12-05 09:00:12.798061458 +0000 UTC m=+0.349892927 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 05 09:00:12 np0005546420.localdomain podman[100326]: 2025-12-05 09:00:12.838395838 +0000 UTC m=+0.399732591 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, container_name=logrotate_crond, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 05 09:00:12 np0005546420.localdomain podman[100329]: 2025-12-05 09:00:12.855408495 +0000 UTC m=+0.407239944 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 05 09:00:12 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:00:12 np0005546420.localdomain podman[100326]: 2025-12-05 09:00:12.87431938 +0000 UTC m=+0.435656143 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team)
Dec 05 09:00:12 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:00:12 np0005546420.localdomain podman[100328]: 2025-12-05 09:00:12.705007367 +0000 UTC m=+0.260328934 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, architecture=x86_64, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc.)
Dec 05 09:00:12 np0005546420.localdomain podman[100328]: 2025-12-05 09:00:12.940394607 +0000 UTC m=+0.495716224 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 09:00:12 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:00:13 np0005546420.localdomain systemd[1]: tmp-crun.Z96IWU.mount: Deactivated successfully.
Dec 05 09:00:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:00:15 np0005546420.localdomain podman[100425]: 2025-12-05 09:00:15.488756301 +0000 UTC m=+0.070228756 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z)
Dec 05 09:00:15 np0005546420.localdomain podman[100425]: 2025-12-05 09:00:15.873668752 +0000 UTC m=+0.455141167 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., container_name=nova_migration_target, tcib_managed=true)
Dec 05 09:00:15 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: tmp-crun.hkNwR7.mount: Deactivated successfully.
Dec 05 09:00:20 np0005546420.localdomain podman[100456]: 2025-12-05 09:00:20.535403237 +0000 UTC m=+0.102895649 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, container_name=ovn_metadata_agent, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 09:00:20 np0005546420.localdomain podman[100449]: 2025-12-05 09:00:20.490420604 +0000 UTC m=+0.069068622 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, container_name=collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=)
Dec 05 09:00:20 np0005546420.localdomain podman[100450]: 2025-12-05 09:00:20.549107131 +0000 UTC m=+0.123356682 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.)
Dec 05 09:00:20 np0005546420.localdomain podman[100456]: 2025-12-05 09:00:20.55713165 +0000 UTC m=+0.124624052 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4)
Dec 05 09:00:20 np0005546420.localdomain podman[100456]: unhealthy
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:00:20 np0005546420.localdomain podman[100449]: 2025-12-05 09:00:20.576625914 +0000 UTC m=+0.155273892 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, container_name=collectd, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:00:20 np0005546420.localdomain podman[100448]: 2025-12-05 09:00:20.647302052 +0000 UTC m=+0.227870679 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4)
Dec 05 09:00:20 np0005546420.localdomain podman[100450]: 2025-12-05 09:00:20.66336486 +0000 UTC m=+0.237614391 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=iscsid, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-iscsid-container, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:00:20 np0005546420.localdomain podman[100448]: 2025-12-05 09:00:20.689426467 +0000 UTC m=+0.269995074 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 09:00:20 np0005546420.localdomain podman[100448]: unhealthy
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:00:20 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:00:25 np0005546420.localdomain sudo[100527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:00:25 np0005546420.localdomain sudo[100527]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:00:25 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 09:00:25 np0005546420.localdomain sudo[100527]: pam_unix(sudo:session): session closed for user root
Dec 05 09:00:25 np0005546420.localdomain recover_tripleo_nova_virtqemud[100543]: 62579
Dec 05 09:00:25 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 09:00:25 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 09:00:25 np0005546420.localdomain sudo[100544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:00:25 np0005546420.localdomain sudo[100544]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:00:25 np0005546420.localdomain sudo[100544]: pam_unix(sudo:session): session closed for user root
Dec 05 09:00:26 np0005546420.localdomain sudo[100592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:00:26 np0005546420.localdomain sudo[100592]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:00:26 np0005546420.localdomain sudo[100592]: pam_unix(sudo:session): session closed for user root
Dec 05 09:00:28 np0005546420.localdomain CROND[100302]: (root) CMDEND (sleep `expr ${RANDOM} % 90`; /usr/sbin/logrotate -s /var/lib/logrotate/logrotate-crond.status /etc/logrotate-crond.conf 2>&1|logger -t logrotate-crond)
Dec 05 09:00:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:00:32 np0005546420.localdomain podman[100609]: 2025-12-05 09:00:32.484471123 +0000 UTC m=+0.062760386 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4)
Dec 05 09:00:32 np0005546420.localdomain podman[100609]: 2025-12-05 09:00:32.71137483 +0000 UTC m=+0.289664093 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, config_id=tripleo_step1, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Dec 05 09:00:32 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:00:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:00:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:00:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:00:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:00:43 np0005546420.localdomain systemd[1]: tmp-crun.S1CwVV.mount: Deactivated successfully.
Dec 05 09:00:43 np0005546420.localdomain systemd[1]: tmp-crun.chjgw7.mount: Deactivated successfully.
Dec 05 09:00:43 np0005546420.localdomain podman[100638]: 2025-12-05 09:00:43.619394585 +0000 UTC m=+0.194982739 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, container_name=logrotate_crond, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, release=1761123044)
Dec 05 09:00:43 np0005546420.localdomain podman[100638]: 2025-12-05 09:00:43.624944947 +0000 UTC m=+0.200533091 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12)
Dec 05 09:00:43 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:00:43 np0005546420.localdomain podman[100639]: 2025-12-05 09:00:43.591053138 +0000 UTC m=+0.160265515 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Dec 05 09:00:43 np0005546420.localdomain podman[100640]: 2025-12-05 09:00:43.668677662 +0000 UTC m=+0.237202208 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:36:58Z, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:00:43 np0005546420.localdomain podman[100640]: 2025-12-05 09:00:43.693246463 +0000 UTC m=+0.261771029 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step5)
Dec 05 09:00:43 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:00:43 np0005546420.localdomain podman[100639]: 2025-12-05 09:00:43.722161348 +0000 UTC m=+0.291373735 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Dec 05 09:00:43 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:00:43 np0005546420.localdomain podman[100642]: 2025-12-05 09:00:43.543388952 +0000 UTC m=+0.110286597 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, release=1761123044, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 09:00:43 np0005546420.localdomain podman[100642]: 2025-12-05 09:00:43.774116198 +0000 UTC m=+0.341013803 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ceilometer_agent_compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 09:00:43 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:00:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:00:46 np0005546420.localdomain systemd[1]: tmp-crun.welE6E.mount: Deactivated successfully.
Dec 05 09:00:46 np0005546420.localdomain podman[100733]: 2025-12-05 09:00:46.511037722 +0000 UTC m=+0.086851231 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container)
Dec 05 09:00:46 np0005546420.localdomain podman[100733]: 2025-12-05 09:00:46.919483831 +0000 UTC m=+0.495297340 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z)
Dec 05 09:00:46 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:00:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:00:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:00:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:00:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:00:51 np0005546420.localdomain podman[100755]: 2025-12-05 09:00:51.503231121 +0000 UTC m=+0.084386176 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 09:00:51 np0005546420.localdomain podman[100757]: 2025-12-05 09:00:51.517549724 +0000 UTC m=+0.088940286 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step3, url=https://www.redhat.com)
Dec 05 09:00:51 np0005546420.localdomain podman[100755]: 2025-12-05 09:00:51.525712457 +0000 UTC m=+0.106867552 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12)
Dec 05 09:00:51 np0005546420.localdomain podman[100757]: 2025-12-05 09:00:51.528489653 +0000 UTC m=+0.099880195 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, container_name=iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 09:00:51 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:00:51 np0005546420.localdomain podman[100755]: unhealthy
Dec 05 09:00:51 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:00:51 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
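The ovn_controller failure above is self-describing: the container's recorded config_data names its healthcheck ('test': '/openstack/healthcheck 6642'), and when that script exits non-zero podman logs health_status=unhealthy, prints the bare "unhealthy" verdict, and the transient systemd unit wrapping `podman healthcheck run` exits 1, which produces the status=1/FAILURE and 'exit-code' lines. (The ovn_metadata_agent failure a few lines below follows the same path with its plain '/openstack/healthcheck' test.) A minimal sketch for reproducing the check by hand, assuming the container name from the log and that the healthcheck script's only contract is its exit code:

    # Re-run the exact command podman's timer runs; the exit code decides healthy/unhealthy
    podman exec ovn_controller /openstack/healthcheck 6642; echo "exit=$?"
    # Or let podman run and record the result the same way the transient unit does
    podman healthcheck run ovn_controller; echo "exit=$?"
    # Last recorded state (the template field name varies across podman versions)
    podman inspect --format '{{.State.Healthcheck.Status}}' ovn_controller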
Dec 05 09:00:51 np0005546420.localdomain podman[100763]: 2025-12-05 09:00:51.579810902 +0000 UTC m=+0.145406724 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64)
Dec 05 09:00:51 np0005546420.localdomain podman[100763]: 2025-12-05 09:00:51.664411522 +0000 UTC m=+0.230007274 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:00:51 np0005546420.localdomain podman[100763]: unhealthy
Dec 05 09:00:51 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:00:51 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:00:51 np0005546420.localdomain podman[100756]: 2025-12-05 09:00:51.673179043 +0000 UTC m=+0.248333491 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12)
Dec 05 09:00:51 np0005546420.localdomain podman[100756]: 2025-12-05 09:00:51.756571256 +0000 UTC m=+0.331725774 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, tcib_managed=true)
Dec 05 09:00:51 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:01:01 np0005546420.localdomain CROND[100835]: (root) CMD (run-parts /etc/cron.hourly)
Dec 05 09:01:01 np0005546420.localdomain run-parts[100838]: (/etc/cron.hourly) starting 0anacron
Dec 05 09:01:01 np0005546420.localdomain anacron[100846]: Anacron started on 2025-12-05
Dec 05 09:01:01 np0005546420.localdomain anacron[100846]: Will run job `cron.daily' in 10 min.
Dec 05 09:01:01 np0005546420.localdomain anacron[100846]: Will run job `cron.weekly' in 30 min.
Dec 05 09:01:01 np0005546420.localdomain anacron[100846]: Will run job `cron.monthly' in 50 min.
Dec 05 09:01:01 np0005546420.localdomain anacron[100846]: Jobs will be executed sequentially
Dec 05 09:01:01 np0005546420.localdomain run-parts[100848]: (/etc/cron.hourly) finished 0anacron
Dec 05 09:01:01 np0005546420.localdomain CROND[100834]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 05 09:01:01 np0005546420.localdomain CROND[100850]: (root) CMD (run-parts /etc/cron.hourly)
Dec 05 09:01:01 np0005546420.localdomain run-parts[100853]: (/etc/cron.hourly) starting 0anacron
Dec 05 09:01:01 np0005546420.localdomain run-parts[100859]: (/etc/cron.hourly) finished 0anacron
Dec 05 09:01:01 np0005546420.localdomain CROND[100849]: (root) CMDEND (run-parts /etc/cron.hourly)
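The hourly cron pass above hands cron.daily, cron.weekly and cron.monthly off to anacron, which schedules them 10, 30 and 50 minutes out. Those offsets are each job's fixed delay from /etc/anacrontab plus a random component; the identical +5 spacing across all three jobs suggests one random offset chosen for the whole run. The stock RHEL 9 layout looks like the sketch below (values are distribution defaults, assumed rather than read from this host):

    # /etc/anacrontab -- period  delay(min)  job-id  command  (RHEL defaults, an assumption)
    RANDOM_DELAY=45
    START_HOURS_RANGE=3-22
    1        5   cron.daily    nice run-parts /etc/cron.daily
    7        25  cron.weekly   nice run-parts /etc/cron.weekly
    @monthly 45  cron.monthly  nice run-parts /etc/cron.monthly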
Dec 05 09:01:03 np0005546420.localdomain sshd[100860]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:01:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:01:03 np0005546420.localdomain podman[100862]: 2025-12-05 09:01:03.525519055 +0000 UTC m=+0.097859812 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 09:01:03 np0005546420.localdomain podman[100862]: 2025-12-05 09:01:03.730473732 +0000 UTC m=+0.302814539 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, distribution-scope=public)
Dec 05 09:01:03 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:01:03 np0005546420.localdomain sshd[100860]: Received disconnect from 93.157.248.178 port 52740:11: Bye Bye [preauth]
Dec 05 09:01:03 np0005546420.localdomain sshd[100860]: Disconnected from authenticating user root 93.157.248.178 port 52740 [preauth]
Dec 05 09:01:09 np0005546420.localdomain sshd[100891]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:01:10 np0005546420.localdomain sshd[100892]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:01:11 np0005546420.localdomain sshd[100892]: Received disconnect from 195.250.72.168 port 54670:11: Bye Bye [preauth]
Dec 05 09:01:11 np0005546420.localdomain sshd[100892]: Disconnected from authenticating user root 195.250.72.168 port 54670 [preauth]
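The recurring "main: sshd: ssh-rsa algorithm is disabled" notice is RHEL 9's system-wide crypto policy refusing SHA-1 ssh-rsa signatures, and the immediately rejected root logins from 93.157.248.178 and 195.250.72.168 (like the banner-exchange timeout from 180.184.182.87 later in the log) are routine Internet scanner noise rather than local clients. If a legacy client genuinely required ssh-rsa, it could be re-enabled for sshd alone instead of dropping the whole host to the LEGACY policy; a hypothetical drop-in, sketched here with an invented file name and at a deliberate security cost:

    # Show the active policy; the RHEL 9 DEFAULT policy disallows SHA-1 ssh-rsa
    update-crypto-policies --show
    # /etc/ssh/sshd_config.d/99-legacy-rsa.conf  (hypothetical; weakens security)
    PubkeyAcceptedAlgorithms +ssh-rsa
    HostKeyAlgorithms +ssh-rsa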
Dec 05 09:01:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:01:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:01:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:01:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:01:14 np0005546420.localdomain systemd[1]: tmp-crun.ZciQri.mount: Deactivated successfully.
Dec 05 09:01:14 np0005546420.localdomain podman[100896]: 2025-12-05 09:01:14.521188895 +0000 UTC m=+0.093669881 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:01:14 np0005546420.localdomain podman[100895]: 2025-12-05 09:01:14.532754913 +0000 UTC m=+0.103536447 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true)
Dec 05 09:01:14 np0005546420.localdomain podman[100896]: 2025-12-05 09:01:14.610694157 +0000 UTC m=+0.183175223 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, release=1761123044, vcs-type=git, config_id=tripleo_step5, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:36:58Z, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 05 09:01:14 np0005546420.localdomain podman[100894]: 2025-12-05 09:01:14.61015428 +0000 UTC m=+0.184443583 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=logrotate_crond, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:01:14 np0005546420.localdomain podman[100895]: 2025-12-05 09:01:14.617460807 +0000 UTC m=+0.188242371 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 09:01:14 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:01:14 np0005546420.localdomain podman[100897]: 2025-12-05 09:01:14.592234906 +0000 UTC m=+0.158945484 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com)
Dec 05 09:01:14 np0005546420.localdomain podman[100897]: 2025-12-05 09:01:14.675406291 +0000 UTC m=+0.242116849 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, architecture=x86_64, container_name=ceilometer_agent_compute, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4)
Dec 05 09:01:14 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:01:14 np0005546420.localdomain podman[100894]: 2025-12-05 09:01:14.695543404 +0000 UTC m=+0.269832727 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, container_name=logrotate_crond, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true)
Dec 05 09:01:14 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:01:14 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
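Each "Started /usr/bin/podman healthcheck run <id>" line above is a transient systemd service, named after the 64-hex-digit container ID, that podman registers (with a matching timer) to drive that container's healthcheck interval; "Deactivated successfully" marks a check that exited 0, while the earlier FAILURE/'exit-code' results mark unhealthy ones. The units have no files on disk, but they can still be inspected while registered, e.g. for the nova_compute container seen above:

    # Timer driving the periodic check (ID prefix taken from this log)
    systemctl list-timers --all | grep ac5838814f71
    # Full history of one container's checks, via the transient unit's journal
    journalctl -u ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service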
Dec 05 09:01:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:01:17 np0005546420.localdomain systemd[1]: tmp-crun.mH06xp.mount: Deactivated successfully.
Dec 05 09:01:17 np0005546420.localdomain podman[100994]: 2025-12-05 09:01:17.509335108 +0000 UTC m=+0.084645902 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:01:17 np0005546420.localdomain podman[100994]: 2025-12-05 09:01:17.878370408 +0000 UTC m=+0.453681202 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4)
Dec 05 09:01:17 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:01:20 np0005546420.localdomain sshd[100891]: error: kex_exchange_identification: read: Connection timed out
Dec 05 09:01:20 np0005546420.localdomain sshd[100891]: banner exchange: Connection from 180.184.182.87 port 27592: Connection timed out
Dec 05 09:01:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:01:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:01:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:01:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:01:22 np0005546420.localdomain podman[101019]: 2025-12-05 09:01:22.506875934 +0000 UTC m=+0.072293201 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_id=tripleo_step3, container_name=iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc.)
Dec 05 09:01:22 np0005546420.localdomain podman[101017]: 2025-12-05 09:01:22.562848517 +0000 UTC m=+0.135928271 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 09:01:22 np0005546420.localdomain podman[101017]: 2025-12-05 09:01:22.580584936 +0000 UTC m=+0.153664700 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T23:34:05Z, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 05 09:01:22 np0005546420.localdomain podman[101017]: unhealthy
Dec 05 09:01:22 np0005546420.localdomain podman[101019]: 2025-12-05 09:01:22.593036192 +0000 UTC m=+0.158453459 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step3, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Dec 05 09:01:22 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:01:22 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:01:22 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:01:22 np0005546420.localdomain podman[101018]: 2025-12-05 09:01:22.683953088 +0000 UTC m=+0.250766968 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.component=openstack-collectd-container, container_name=collectd, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 09:01:22 np0005546420.localdomain podman[101023]: 2025-12-05 09:01:22.690920833 +0000 UTC m=+0.250068146 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public)
Dec 05 09:01:22 np0005546420.localdomain podman[101018]: 2025-12-05 09:01:22.697475286 +0000 UTC m=+0.264289136 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vendor=Red Hat, Inc.)
Dec 05 09:01:22 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:01:22 np0005546420.localdomain podman[101023]: 2025-12-05 09:01:22.713299706 +0000 UTC m=+0.272447039 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:01:22 np0005546420.localdomain podman[101023]: unhealthy
Dec 05 09:01:22 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:01:22 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
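The cluster above shows the moving parts of these container healthchecks: systemd starts a transient <container-id>.service wrapping "podman healthcheck run", podman executes the test command recorded under 'healthcheck' in the container's config_data (plain /openstack/healthcheck for iscsid and collectd, "/openstack/healthcheck 6642" for ovn_controller), and a nonzero exit surfaces three ways at once — health_status=unhealthy on the event line, a bare "unhealthy" on stdout, and the unit ending with "Failed with result 'exit-code'" — whereas a passing check ends with "Deactivated successfully". A minimal Python sketch, against the same hypothetical node.log export, that recovers each container's configured test command and last observed health state from these event lines:

    import re

    # health_status event lines carry "name=<container>, health_status=<state>" plus the
    # config_data blob, which embeds 'healthcheck': {'test': '<command>'}.
    EVENT = re.compile(r"container health_status \S+ \(image=[^,]+, name=([\w-]+), health_status=(\w+)")
    TEST = re.compile(r"'healthcheck': \{'test': '([^']+)'\}")

    def last_health(path="node.log"):
        state = {}
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                ev = EVENT.search(line)
                if not ev:
                    continue
                name, status = ev.groups()
                test = TEST.search(line)
                state[name] = (status, test.group(1) if test else "?")
        return state  # last observed status and test command per container

    if __name__ == "__main__":
        for name, (status, test) in sorted(last_health().items()):
            print(f"{name:24} {status:10} {test}")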
Dec 05 09:01:23 np0005546420.localdomain systemd[1]: tmp-crun.azm5VB.mount: Deactivated successfully.
Dec 05 09:01:26 np0005546420.localdomain sudo[101093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:01:26 np0005546420.localdomain sudo[101093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:01:26 np0005546420.localdomain sudo[101093]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:26 np0005546420.localdomain sudo[101108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:01:26 np0005546420.localdomain sudo[101108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:01:27 np0005546420.localdomain sudo[101108]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:28 np0005546420.localdomain sudo[101154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:01:28 np0005546420.localdomain sudo[101154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:01:28 np0005546420.localdomain sudo[101154]: pam_unix(sudo:session): session closed for user root
Dec 05 09:01:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:01:34 np0005546420.localdomain podman[101169]: 2025-12-05 09:01:34.537186526 +0000 UTC m=+0.113351352 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T22:49:46Z, release=1761123044, version=17.1.12, architecture=x86_64, tcib_managed=true)
Dec 05 09:01:34 np0005546420.localdomain podman[101169]: 2025-12-05 09:01:34.779451389 +0000 UTC m=+0.355616245 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1)
Dec 05 09:01:34 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:01:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:01:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:01:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:01:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:01:45 np0005546420.localdomain podman[101199]: 2025-12-05 09:01:45.520648417 +0000 UTC m=+0.089644267 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, release=1761123044, maintainer=OpenStack TripleO Team)
Dec 05 09:01:45 np0005546420.localdomain podman[101200]: 2025-12-05 09:01:45.578316592 +0000 UTC m=+0.144760073 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, config_id=tripleo_step5, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute)
Dec 05 09:01:45 np0005546420.localdomain podman[101200]: 2025-12-05 09:01:45.611319425 +0000 UTC m=+0.177762876 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, release=1761123044, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute)
Dec 05 09:01:45 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:01:45 np0005546420.localdomain podman[101198]: 2025-12-05 09:01:45.685822073 +0000 UTC m=+0.254475273 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=logrotate_crond, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 05 09:01:45 np0005546420.localdomain podman[101201]: 2025-12-05 09:01:45.648699383 +0000 UTC m=+0.210913574 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4)
Dec 05 09:01:45 np0005546420.localdomain podman[101198]: 2025-12-05 09:01:45.722412696 +0000 UTC m=+0.291065916 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 09:01:45 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:01:45 np0005546420.localdomain podman[101199]: 2025-12-05 09:01:45.751408803 +0000 UTC m=+0.320404653 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 05 09:01:45 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:01:45 np0005546420.localdomain podman[101201]: 2025-12-05 09:01:45.779939347 +0000 UTC m=+0.342153488 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=)
Dec 05 09:01:45 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:01:46 np0005546420.localdomain systemd[1]: tmp-crun.wunivl.mount: Deactivated successfully.
Dec 05 09:01:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:01:48 np0005546420.localdomain podman[101299]: 2025-12-05 09:01:48.50355553 +0000 UTC m=+0.081125504 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, version=17.1.12, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute)
Dec 05 09:01:48 np0005546420.localdomain podman[101299]: 2025-12-05 09:01:48.848369519 +0000 UTC m=+0.425939473 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:01:48 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
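Each podman event line above carries both a wall-clock timestamp and a monotonic offset (m=+0.081125504, m=+0.425939473) measured from the start of that podman process, so pairing the health_status event with the exec_died event from the same PID gives the spacing between podman's two bookkeeping events for one check invocation; for the nova_migration_target run just above that is roughly 0.426 - 0.081 ≈ 0.34 s. A minimal Python sketch of that pairing, again against the hypothetical node.log export:

    import re

    # podman[PID]: <date> <time> +0000 UTC m=+<offset> container <event> <id> (image=..., name=<name>, ...)
    LINE = re.compile(
        r"podman\[(\d+)\]: \S+ \S+ \+0000 UTC m=\+(\d+\.\d+) "
        r"container (health_status|exec_died) \S+ \(image=[^,]+, name=([\w-]+)"
    )

    def check_spacings(path="node.log"):
        started = {}  # pid -> (container name, offset of its health_status event)
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = LINE.search(line)
                if not m:
                    continue
                pid, offset, event, name = m.group(1), float(m.group(2)), m.group(3), m.group(4)
                if event == "health_status":
                    started[pid] = (name, offset)
                elif pid in started:
                    cname, t0 = started.pop(pid)
                    yield cname, offset - t0

    if __name__ == "__main__":
        for name, dt in check_spacings():
            print(f"{name:24} {dt:.3f}s")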
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:01:53 np0005546420.localdomain podman[101324]: 2025-12-05 09:01:53.527815352 +0000 UTC m=+0.093607400 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4)
Dec 05 09:01:53 np0005546420.localdomain podman[101325]: 2025-12-05 09:01:53.580763602 +0000 UTC m=+0.141143142 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-19T00:14:25Z, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4)
Dec 05 09:01:53 np0005546420.localdomain podman[101324]: 2025-12-05 09:01:53.592714182 +0000 UTC m=+0.158506230 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid)
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:01:53 np0005546420.localdomain podman[101325]: 2025-12-05 09:01:53.626424466 +0000 UTC m=+0.186804006 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:01:53 np0005546420.localdomain podman[101325]: unhealthy
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
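[Annotation] ovn_metadata_agent's check printed "unhealthy" and exited 1, so systemd marks the transient unit failed (status=1/FAILURE) while podman records the unhealthy state on the container itself. A hedged way to reproduce the check by hand and read that recorded state; note the inspect field name varies by podman release (.State.Healthcheck.Status on older versions, .State.Health.Status on newer ones):

#!/usr/bin/env python3
# Sketch: re-run a container's healthcheck and read podman's recorded
# status. Exit code 0 => healthy, non-zero => unhealthy, matching the
# "Main process exited, code=exited, status=1/FAILURE" lines above.
import subprocess

CID = "dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce"

rc = subprocess.run(["podman", "healthcheck", "run", CID]).returncode
print("healthcheck exit code:", rc)

# Field name differs on newer podman: use {{.State.Health.Status}} there.
state = subprocess.run(
    ["podman", "inspect", "--format", "{{.State.Healthcheck.Status}}", CID],
    capture_output=True, text=True,
).stdout.strip()
print("recorded status:", state)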
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: tmp-crun.f96CVd.mount: Deactivated successfully.
Dec 05 09:01:53 np0005546420.localdomain podman[101322]: 2025-12-05 09:01:53.729672473 +0000 UTC m=+0.301246430 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Dec 05 09:01:53 np0005546420.localdomain podman[101323]: 2025-12-05 09:01:53.697423855 +0000 UTC m=+0.265746521 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=collectd, config_id=tripleo_step3, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-collectd, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=)
Dec 05 09:01:53 np0005546420.localdomain podman[101322]: 2025-12-05 09:01:53.771341284 +0000 UTC m=+0.342915241 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller)
Dec 05 09:01:53 np0005546420.localdomain podman[101322]: unhealthy
Dec 05 09:01:53 np0005546420.localdomain podman[101323]: 2025-12-05 09:01:53.781584562 +0000 UTC m=+0.349907228 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=collectd, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12)
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:01:53 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
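[Annotation] Unlike most containers here, ovn_controller's check is configured as '/openstack/healthcheck 6642', i.e. the script takes a port argument (6642 is the OVN southbound DB port). The script body is not shown in this journal; assuming the argument names a TCP endpoint the check must reach, a hypothetical probe in that spirit:

#!/usr/bin/env python3
# Hypothetical probe in the spirit of "/openstack/healthcheck 6642".
# Assumption: the argument is a TCP port the check verifies; the real
# script's logic does not appear in this log.
import socket
import sys

port = int(sys.argv[1]) if len(sys.argv) > 1 else 6642
try:
    with socket.create_connection(("127.0.0.1", port), timeout=5):
        sys.exit(0)   # healthy
except OSError:
    sys.exit(1)       # unhealthy -> status=1/FAILURE in systemd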
Dec 05 09:02:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:02:05 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 09:02:05 np0005546420.localdomain recover_tripleo_nova_virtqemud[101406]: 62579
Dec 05 09:02:05 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 09:02:05 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
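[Annotation] tripleo_nova_virtqemud_recover is a oneshot unit: it logged a bare PID (62579) and finished. The script itself is not in this log; a hypothetical check-and-recover pass consistent with those lines, assuming a tripleo_nova_virtqemud.service unit manages the daemon:

#!/usr/bin/env python3
# Hypothetical sketch of a check-and-recover pass like
# tripleo_nova_virtqemud_recover: print the PID if virtqemud is alive,
# otherwise restart its unit. Not the actual TripleO script.
import subprocess

probe = subprocess.run(["pgrep", "-o", "virtqemud"],
                       capture_output=True, text=True)
if probe.returncode == 0:
    print(probe.stdout.strip())  # the journal line above shows such a PID
else:
    subprocess.run(["systemctl", "restart", "tripleo_nova_virtqemud.service"])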
Dec 05 09:02:05 np0005546420.localdomain podman[101404]: 2025-12-05 09:02:05.52311403 +0000 UTC m=+0.095005733 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z)
Dec 05 09:02:05 np0005546420.localdomain podman[101404]: 2025-12-05 09:02:05.730849034 +0000 UTC m=+0.302740727 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:02:05 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:02:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:02:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4200.1 total, 600.0 interval
                                                          Cumulative writes: 5715 writes, 25K keys, 5715 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5715 writes, 734 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 09:02:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:02:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4200.1 total, 600.0 interval
                                                          Cumulative writes: 4690 writes, 21K keys, 4690 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4690 writes, 584 syncs, 8.03 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
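[Annotation] The derived ratios in these two RocksDB "DB Stats" dumps check out against the raw counters; a quick verification of the writes-per-sync figures:

#!/usr/bin/env python3
# Verify the "writes per sync" ratios printed in the DB Stats dumps above.
for writes, syncs in [(5715, 734), (4690, 584)]:
    print(f"{writes} WAL writes / {syncs} syncs = "
          f"{writes / syncs:.2f} writes per sync")
# Output: 7.79 and 8.03, matching the two journal dumps.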
Dec 05 09:02:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:02:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:02:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:02:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:02:16 np0005546420.localdomain systemd[1]: tmp-crun.yWKYG4.mount: Deactivated successfully.
Dec 05 09:02:16 np0005546420.localdomain podman[101436]: 2025-12-05 09:02:16.515303534 +0000 UTC m=+0.091085303 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, name=rhosp17/openstack-cron, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Dec 05 09:02:16 np0005546420.localdomain podman[101436]: 2025-12-05 09:02:16.52745793 +0000 UTC m=+0.103239749 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 05 09:02:16 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:02:16 np0005546420.localdomain systemd[1]: tmp-crun.8DvSol.mount: Deactivated successfully.
Dec 05 09:02:16 np0005546420.localdomain podman[101438]: 2025-12-05 09:02:16.573501826 +0000 UTC m=+0.144723194 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:02:16 np0005546420.localdomain podman[101437]: 2025-12-05 09:02:16.624834696 +0000 UTC m=+0.198047216 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 09:02:16 np0005546420.localdomain podman[101438]: 2025-12-05 09:02:16.652431021 +0000 UTC m=+0.223652419 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z)
Dec 05 09:02:16 np0005546420.localdomain podman[101439]: 2025-12-05 09:02:16.673305797 +0000 UTC m=+0.240258992 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 09:02:16 np0005546420.localdomain podman[101437]: 2025-12-05 09:02:16.683379369 +0000 UTC m=+0.256591889 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com)
Dec 05 09:02:16 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:02:16 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:02:16 np0005546420.localdomain podman[101439]: 2025-12-05 09:02:16.755443451 +0000 UTC m=+0.322396646 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team)
Dec 05 09:02:16 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
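[Annotation] Every health_status event above carries its result inline (name=..., health_status=...), so the journal text alone is enough to tally container health over a window. A sketch that parses lines in the exact field order shown here (image=, then name=, then health_status=); reads log text on stdin:

#!/usr/bin/env python3
# Sketch: tally health results from journal text like the lines above.
# Relies on the field order seen in this log:
# (image=..., name=X, health_status=Y, ...).
import re
import sys
from collections import Counter

pat = re.compile(r"container health_status [0-9a-f]+ "
                 r"\(image=[^,]+, name=([^,]+), health_status=(\w+)")
tally = Counter()
for line in sys.stdin:
    m = pat.search(line)
    if m:
        tally[(m.group(1), m.group(2))] += 1

for (name, status), n in sorted(tally.items()):
    print(f"{name}: {status} x{n}")

Usage (hypothetical file name): journalctl --no-pager | python3 tally_health.py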
Dec 05 09:02:19 np0005546420.localdomain sshd[101535]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:02:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:02:19 np0005546420.localdomain podman[101537]: 2025-12-05 09:02:19.52353745 +0000 UTC m=+0.097489200 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:02:19 np0005546420.localdomain podman[101537]: 2025-12-05 09:02:19.900397611 +0000 UTC m=+0.474349331 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:02:19 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
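
[annotation] The three-line pattern above repeats throughout this section: systemd starts a transient <container-id>.service unit wrapping /usr/bin/podman healthcheck run, podman emits a health_status event and then an exec_died event for the healthcheck exec session, and the unit deactivates cleanly when the probe exits 0. A minimal sketch of reproducing this by hand (the container name comes from the name= field in the events above; the inspect field path differs slightly between podman versions, older releases use .State.Healthcheck.Status):

    # run the configured healthcheck once; exit code 0 means healthy
    podman healthcheck run nova_migration_target; echo "exit=$?"
    # read the stored health state recorded by the last run
    podman inspect --format '{{.State.Health.Status}}' nova_migration_target
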
Dec 05 09:02:20 np0005546420.localdomain sshd[101535]: Received disconnect from 195.250.72.168 port 58748:11: Bye Bye [preauth]
Dec 05 09:02:20 np0005546420.localdomain sshd[101535]: Disconnected from authenticating user root 195.250.72.168 port 58748 [preauth]
Dec 05 09:02:22 np0005546420.localdomain sshd[101560]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:02:23 np0005546420.localdomain sshd[101560]: Received disconnect from 93.157.248.178 port 43202:11: Bye Bye [preauth]
Dec 05 09:02:23 np0005546420.localdomain sshd[101560]: Disconnected from authenticating user root 93.157.248.178 port 43202 [preauth]
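
[annotation] The sshd entries above record failed root logins from external addresses; "main: sshd: ssh-rsa algorithm is disabled" reflects the RHEL 9 system-wide crypto policy, which rejects SHA-1-based ssh-rsa signatures under the DEFAULT policy, so peers offering only ssh-rsa are refused before authentication. A sketch for confirming the active policy and what sshd actually accepts, using standard RHEL 9 tooling:

    # show the active system-wide crypto policy (DEFAULT disables ssh-rsa)
    update-crypto-policies --show
    # dump sshd's effective configuration and filter the accepted pubkey algorithms
    sshd -T | grep -i pubkeyacceptedalgorithms
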
Dec 05 09:02:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:02:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:02:23 np0005546420.localdomain podman[101562]: 2025-12-05 09:02:23.745403612 +0000 UTC m=+0.085012865 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible)
Dec 05 09:02:23 np0005546420.localdomain podman[101562]: 2025-12-05 09:02:23.758561019 +0000 UTC m=+0.098170262 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step3, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Dec 05 09:02:23 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:02:23 np0005546420.localdomain podman[101563]: 2025-12-05 09:02:23.801210851 +0000 UTC m=+0.136526540 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 09:02:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:02:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:02:23 np0005546420.localdomain podman[101563]: 2025-12-05 09:02:23.819349842 +0000 UTC m=+0.154665471 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4)
Dec 05 09:02:23 np0005546420.localdomain podman[101563]: unhealthy
Dec 05 09:02:23 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:02:23 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
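
[annotation] Here the pattern diverges from the healthy case: the healthcheck prints "unhealthy", podman exits non-zero, and systemd records the transient unit as failed instead of deactivated. The config_data in the event shows the probe is simply /openstack/healthcheck inside the container, so it can be re-run interactively to see why it fails (a triage sketch; the probe's output format is not shown in this log):

    # re-run the container's configured probe and capture its exit code
    podman exec ovn_metadata_agent /openstack/healthcheck; echo "exit=$?"
    # check recent agent output for errors
    podman logs --tail 50 ovn_metadata_agent
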
Dec 05 09:02:23 np0005546420.localdomain podman[101601]: 2025-12-05 09:02:23.90482044 +0000 UTC m=+0.079490604 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=ovn_controller, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 09:02:23 np0005546420.localdomain podman[101602]: 2025-12-05 09:02:23.976918952 +0000 UTC m=+0.146317752 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, release=1761123044, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 09:02:23 np0005546420.localdomain podman[101602]: 2025-12-05 09:02:23.987123858 +0000 UTC m=+0.156522598 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, vcs-type=git, config_id=tripleo_step3, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 05 09:02:23 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:02:24 np0005546420.localdomain podman[101601]: 2025-12-05 09:02:24.042308518 +0000 UTC m=+0.216978662 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=)
Dec 05 09:02:24 np0005546420.localdomain podman[101601]: unhealthy
Dec 05 09:02:24 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:02:24 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
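
[annotation] For ovn_controller the configured test is /openstack/healthcheck 6642, and the argument suggests the probe also verifies the connection to the OVN southbound database on TCP port 6642, so an unhealthy result here typically points at a lost SB session rather than a dead process. A cross-check sketch from the host (assumes host networking as declared in config_data and that ovs-vsctl is installed on the host):

    # the southbound endpoint ovn-controller is configured to reach
    ovs-vsctl get open_vswitch . external_ids:ovn-remote
    # look for an established session to the SB port named in the healthcheck
    ss -tn state established '( dport = :6642 )'
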
Dec 05 09:02:28 np0005546420.localdomain sudo[101641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:02:28 np0005546420.localdomain sudo[101641]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:02:28 np0005546420.localdomain sudo[101641]: pam_unix(sudo:session): session closed for user root
Dec 05 09:02:28 np0005546420.localdomain sudo[101656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:02:28 np0005546420.localdomain sudo[101656]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:02:29 np0005546420.localdomain sudo[101656]: pam_unix(sudo:session): session closed for user root
Dec 05 09:02:31 np0005546420.localdomain sudo[101702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:02:31 np0005546420.localdomain sudo[101702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:02:31 np0005546420.localdomain sudo[101702]: pam_unix(sudo:session): session closed for user root
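
[annotation] The sudo entries show the pattern cephadm uses for remote host management: the orchestrator stages a versioned cephadm binary under /var/lib/ceph/<fsid>/ and runs subcommands such as gather-facts as root in short sudo sessions. gather-facts emits a JSON document of host inventory, which can be inspected directly (a sketch; assumes the cephadm package is installed on the host, otherwise invoke the staged binary path shown in the log line above):

    # collect host facts as JSON and pretty-print the first lines
    sudo cephadm gather-facts | python3 -m json.tool | head -n 20
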
Dec 05 09:02:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:02:36 np0005546420.localdomain systemd[1]: tmp-crun.JDBajN.mount: Deactivated successfully.
Dec 05 09:02:36 np0005546420.localdomain podman[101717]: 2025-12-05 09:02:36.504075552 +0000 UTC m=+0.084673734 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vcs-type=git, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd)
Dec 05 09:02:36 np0005546420.localdomain podman[101717]: 2025-12-05 09:02:36.723473528 +0000 UTC m=+0.304071750 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.)
Dec 05 09:02:36 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:02:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:02:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:02:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:02:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:02:47 np0005546420.localdomain podman[101749]: 2025-12-05 09:02:47.528232255 +0000 UTC m=+0.095618412 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5)
Dec 05 09:02:47 np0005546420.localdomain podman[101749]: 2025-12-05 09:02:47.613714833 +0000 UTC m=+0.181101010 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 05 09:02:47 np0005546420.localdomain systemd[1]: tmp-crun.iYuXc3.mount: Deactivated successfully.
Dec 05 09:02:47 np0005546420.localdomain podman[101747]: 2025-12-05 09:02:47.62525421 +0000 UTC m=+0.199160729 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=logrotate_crond, io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4)
Dec 05 09:02:47 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:02:47 np0005546420.localdomain podman[101747]: 2025-12-05 09:02:47.635369774 +0000 UTC m=+0.209276303 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Dec 05 09:02:47 np0005546420.localdomain podman[101752]: 2025-12-05 09:02:47.588075239 +0000 UTC m=+0.152633548 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public)
Dec 05 09:02:47 np0005546420.localdomain podman[101752]: 2025-12-05 09:02:47.667285292 +0000 UTC m=+0.231843551 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public)
Dec 05 09:02:47 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:02:47 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:02:47 np0005546420.localdomain podman[101748]: 2025-12-05 09:02:47.620813472 +0000 UTC m=+0.192112560 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi)
Dec 05 09:02:47 np0005546420.localdomain podman[101748]: 2025-12-05 09:02:47.755550166 +0000 UTC m=+0.326849254 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:02:47 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:02:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:02:50 np0005546420.localdomain podman[101848]: 2025-12-05 09:02:50.517570357 +0000 UTC m=+0.092901079 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=nova_migration_target, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:02:50 np0005546420.localdomain podman[101848]: 2025-12-05 09:02:50.886947156 +0000 UTC m=+0.462277888 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-nova-compute-container)
Dec 05 09:02:50 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
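The four-line pattern above (systemd "Started /usr/bin/podman healthcheck run <id>", a podman "container health_status" event, a "container exec_died" event, then "Deactivated successfully") is the normal lifecycle of one timer-driven health check: systemd launches a transient unit named after the container ID, podman executes the container's configured healthcheck command, and the unit exits. A minimal sketch of reproducing one check by hand in Python; the container ID is copied from the nova_migration_target events above, and the exit-status mapping is standard podman behavior (0 = healthy, non-zero = unhealthy, matching the bare "unhealthy" lines that appear further down in this log):

```python
import subprocess

def run_healthcheck(container_id: str) -> str:
    """Run the same command the transient systemd unit runs and
    translate podman's exit status into the health_status value
    that shows up in these journal events."""
    result = subprocess.run(
        ["/usr/bin/podman", "healthcheck", "run", container_id],
        capture_output=True, text=True,
    )
    # podman prints "unhealthy" and exits non-zero on failure, which is
    # what systemd then reports as "status=1/FAILURE" for the unit.
    return "healthy" if result.returncode == 0 else "unhealthy"

# Container ID taken verbatim from the log lines above.
print(run_healthcheck(
    "a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3"))
```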
Dec 05 09:02:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:02:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:02:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:02:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:02:54 np0005546420.localdomain podman[101871]: 2025-12-05 09:02:54.499285573 +0000 UTC m=+0.072259610 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=ovn_controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z)
Dec 05 09:02:54 np0005546420.localdomain podman[101872]: 2025-12-05 09:02:54.478714276 +0000 UTC m=+0.054390226 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:51:28Z, container_name=collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12)
Dec 05 09:02:54 np0005546420.localdomain podman[101871]: 2025-12-05 09:02:54.532735599 +0000 UTC m=+0.105709616 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 05 09:02:54 np0005546420.localdomain podman[101874]: 2025-12-05 09:02:54.532709198 +0000 UTC m=+0.102480325 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 05 09:02:54 np0005546420.localdomain podman[101874]: 2025-12-05 09:02:54.540794258 +0000 UTC m=+0.110565405 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc.)
Dec 05 09:02:54 np0005546420.localdomain podman[101874]: unhealthy
Dec 05 09:02:54 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:02:54 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:02:54 np0005546420.localdomain podman[101872]: 2025-12-05 09:02:54.557437764 +0000 UTC m=+0.133113794 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com)
Dec 05 09:02:54 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:02:54 np0005546420.localdomain podman[101871]: unhealthy
Dec 05 09:02:54 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:02:54 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
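For both unhealthy checks above (ovn_metadata_agent and ovn_controller), the sequence is identical: podman prints a bare "unhealthy" line, the transient unit exits 1, and systemd records "Failed with result 'exit-code'". Which command actually ran is visible in each event's config_data label, whose value is a Python-literal dict. A minimal sketch of pulling the healthcheck test out of that label, trimmed to a few keys and assuming the text is copied verbatim from the ovn_controller event:

```python
import ast

# Excerpt of the config_data label from the ovn_controller events above.
config_data = ("{'depends_on': ['openvswitch.service'], "
               "'healthcheck': {'test': '/openstack/healthcheck 6642'}, "
               "'image': 'registry.redhat.io/rhosp-rhel9/"
               "openstack-ovn-controller:17.1'}")

cfg = ast.literal_eval(config_data)  # the label is a valid Python literal
print(cfg["healthcheck"]["test"])    # -> /openstack/healthcheck 6642
```

So the "unhealthy" result here means "/openstack/healthcheck 6642" returned non-zero inside the container.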
Dec 05 09:02:54 np0005546420.localdomain podman[101873]: 2025-12-05 09:02:54.654380956 +0000 UTC m=+0.226605700 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-iscsid-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 05 09:02:54 np0005546420.localdomain podman[101873]: 2025-12-05 09:02:54.662759765 +0000 UTC m=+0.234984579 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, container_name=iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 09:02:54 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:03:05 np0005546420.localdomain sshd[101949]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:03:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:03:07 np0005546420.localdomain podman[101951]: 2025-12-05 09:03:07.506026805 +0000 UTC m=+0.081887937 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Dec 05 09:03:07 np0005546420.localdomain podman[101951]: 2025-12-05 09:03:07.723343075 +0000 UTC m=+0.299204217 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, container_name=metrics_qdr, tcib_managed=true, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container)
Dec 05 09:03:07 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:03:10 np0005546420.localdomain sshd[101949]: Received disconnect from 124.163.255.210 port 41946:11:  [preauth]
Dec 05 09:03:10 np0005546420.localdomain sshd[101949]: Disconnected from authenticating user root 124.163.255.210 port 41946 [preauth]
Dec 05 09:03:12 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 09:03:12 np0005546420.localdomain recover_tripleo_nova_virtqemud[101981]: 62579
Dec 05 09:03:12 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 09:03:12 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
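The Starting/Finished pair around tripleo_nova_virtqemud_recover, with a single number logged in between, is the signature of a short-lived oneshot unit: the recover script ran once, emitted "62579" (plausibly the PID of the virtqemud process it verified; the script's internals are not shown in this log), and exited cleanly. A minimal sketch, under that assumption only, of the kind of liveness probe such a script could perform:

```python
import os

def pid_alive(pid: int) -> bool:
    """Signal 0 probes for existence without disturbing the process."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        return False      # no such process
    except PermissionError:
        return True       # exists, but owned by another user

# 62579 is the number logged by recover_tripleo_nova_virtqemud above.
print(pid_alive(62579))
```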
Dec 05 09:03:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:03:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:03:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:03:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:03:18 np0005546420.localdomain podman[101985]: 2025-12-05 09:03:18.553063326 +0000 UTC m=+0.115176508 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 05 09:03:18 np0005546420.localdomain podman[101982]: 2025-12-05 09:03:18.52475783 +0000 UTC m=+0.097693857 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-cron, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 09:03:18 np0005546420.localdomain podman[101985]: 2025-12-05 09:03:18.581804066 +0000 UTC m=+0.143917338 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 09:03:18 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:03:18 np0005546420.localdomain podman[101983]: 2025-12-05 09:03:18.57513862 +0000 UTC m=+0.143780474 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:12:45Z, architecture=x86_64, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1)
Dec 05 09:03:18 np0005546420.localdomain podman[101984]: 2025-12-05 09:03:18.637489871 +0000 UTC m=+0.202405930 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1)
Dec 05 09:03:18 np0005546420.localdomain podman[101983]: 2025-12-05 09:03:18.65841932 +0000 UTC m=+0.227061154 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true)
Dec 05 09:03:18 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:03:18 np0005546420.localdomain podman[101984]: 2025-12-05 09:03:18.693356601 +0000 UTC m=+0.258272670 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, container_name=nova_compute, io.buildah.version=1.41.4, config_id=tripleo_step5, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true)
Dec 05 09:03:18 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:03:18 np0005546420.localdomain podman[101982]: 2025-12-05 09:03:18.709980946 +0000 UTC m=+0.282917083 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, container_name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git)
Dec 05 09:03:18 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:03:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:03:21 np0005546420.localdomain podman[102080]: 2025-12-05 09:03:21.510328794 +0000 UTC m=+0.085043575 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:03:21 np0005546420.localdomain podman[102080]: 2025-12-05 09:03:21.894457281 +0000 UTC m=+0.469172112 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, vcs-type=git)
Dec 05 09:03:21 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:03:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:03:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:03:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:03:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:03:25 np0005546420.localdomain podman[102104]: 2025-12-05 09:03:25.528143208 +0000 UTC m=+0.091098302 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3)
Dec 05 09:03:25 np0005546420.localdomain podman[102103]: 2025-12-05 09:03:25.502935797 +0000 UTC m=+0.071789765 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20251118.1, vcs-type=git, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4)
Dec 05 09:03:25 np0005546420.localdomain podman[102105]: 2025-12-05 09:03:25.57115206 +0000 UTC m=+0.130933226 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container)
Dec 05 09:03:25 np0005546420.localdomain podman[102103]: 2025-12-05 09:03:25.583716239 +0000 UTC m=+0.152570217 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true)
Dec 05 09:03:25 np0005546420.localdomain podman[102103]: unhealthy
Dec 05 09:03:25 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:03:25 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:03:25 np0005546420.localdomain podman[102109]: 2025-12-05 09:03:25.634064699 +0000 UTC m=+0.191793712 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4)
Dec 05 09:03:25 np0005546420.localdomain podman[102104]: 2025-12-05 09:03:25.63991873 +0000 UTC m=+0.202873834 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=collectd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 09:03:25 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:03:25 np0005546420.localdomain podman[102105]: 2025-12-05 09:03:25.662931672 +0000 UTC m=+0.222712838 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, version=17.1.12, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com)
Dec 05 09:03:25 np0005546420.localdomain podman[102109]: 2025-12-05 09:03:25.672374525 +0000 UTC m=+0.230103488 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:03:25 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:03:25 np0005546420.localdomain podman[102109]: unhealthy
Dec 05 09:03:25 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:03:25 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:03:31 np0005546420.localdomain sudo[102179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:03:31 np0005546420.localdomain sudo[102179]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:03:31 np0005546420.localdomain sudo[102179]: pam_unix(sudo:session): session closed for user root
Dec 05 09:03:31 np0005546420.localdomain sudo[102194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:03:31 np0005546420.localdomain sudo[102194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:03:32 np0005546420.localdomain sudo[102194]: pam_unix(sudo:session): session closed for user root
Dec 05 09:03:33 np0005546420.localdomain sudo[102240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:03:33 np0005546420.localdomain sudo[102240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:03:33 np0005546420.localdomain sudo[102240]: pam_unix(sudo:session): session closed for user root
Dec 05 09:03:34 np0005546420.localdomain sshd[102255]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:03:35 np0005546420.localdomain sshd[102255]: Received disconnect from 195.250.72.168 port 45894:11: Bye Bye [preauth]
Dec 05 09:03:35 np0005546420.localdomain sshd[102255]: Disconnected from authenticating user root 195.250.72.168 port 45894 [preauth]
Dec 05 09:03:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:03:38 np0005546420.localdomain podman[102257]: 2025-12-05 09:03:38.511996731 +0000 UTC m=+0.090870096 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr)
Dec 05 09:03:38 np0005546420.localdomain podman[102257]: 2025-12-05 09:03:38.73767911 +0000 UTC m=+0.316552475 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=metrics_qdr, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Dec 05 09:03:38 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:03:46 np0005546420.localdomain sshd[102287]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:03:46 np0005546420.localdomain sshd[102287]: Received disconnect from 93.157.248.178 port 43654:11: Bye Bye [preauth]
Dec 05 09:03:46 np0005546420.localdomain sshd[102287]: Disconnected from authenticating user root 93.157.248.178 port 43654 [preauth]
Dec 05 09:03:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:03:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:03:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:03:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:03:49 np0005546420.localdomain systemd[1]: tmp-crun.xN80FQ.mount: Deactivated successfully.
Dec 05 09:03:49 np0005546420.localdomain podman[102290]: 2025-12-05 09:03:49.509686374 +0000 UTC m=+0.080770293 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4)
Dec 05 09:03:49 np0005546420.localdomain podman[102289]: 2025-12-05 09:03:49.563337985 +0000 UTC m=+0.136116767 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=logrotate_crond, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 05 09:03:49 np0005546420.localdomain podman[102289]: 2025-12-05 09:03:49.571134587 +0000 UTC m=+0.143913399 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=logrotate_crond, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z)
Dec 05 09:03:49 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:03:49 np0005546420.localdomain podman[102291]: 2025-12-05 09:03:49.611169407 +0000 UTC m=+0.180585755 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12)
Dec 05 09:03:49 np0005546420.localdomain podman[102292]: 2025-12-05 09:03:49.664366944 +0000 UTC m=+0.229876770 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Dec 05 09:03:49 np0005546420.localdomain podman[102291]: 2025-12-05 09:03:49.671413152 +0000 UTC m=+0.240829500 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:03:49 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:03:49 np0005546420.localdomain podman[102290]: 2025-12-05 09:03:49.690528574 +0000 UTC m=+0.261612513 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step4)
Dec 05 09:03:49 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:03:49 np0005546420.localdomain podman[102292]: 2025-12-05 09:03:49.712527326 +0000 UTC m=+0.278037142 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team)
Dec 05 09:03:49 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:03:50 np0005546420.localdomain systemd[1]: tmp-crun.W83vch.mount: Deactivated successfully.
Dec 05 09:03:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:03:52 np0005546420.localdomain podman[102383]: 2025-12-05 09:03:52.514119552 +0000 UTC m=+0.083409735 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, url=https://www.redhat.com)
Dec 05 09:03:52 np0005546420.localdomain podman[102383]: 2025-12-05 09:03:52.888478876 +0000 UTC m=+0.457769059 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 05 09:03:52 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:03:56 np0005546420.localdomain podman[102414]: 2025-12-05 09:03:56.494439714 +0000 UTC m=+0.060726402 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-11-19T00:14:25Z, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=ovn_metadata_agent, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 05 09:03:56 np0005546420.localdomain podman[102414]: 2025-12-05 09:03:56.506339952 +0000 UTC m=+0.072626680 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:03:56 np0005546420.localdomain podman[102414]: unhealthy
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:03:56 np0005546420.localdomain podman[102409]: 2025-12-05 09:03:56.554771762 +0000 UTC m=+0.122382461 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, release=1761123044, container_name=iscsid, vcs-type=git, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible)
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: tmp-crun.vQK7Oj.mount: Deactivated successfully.
Dec 05 09:03:56 np0005546420.localdomain podman[102408]: 2025-12-05 09:03:56.59764336 +0000 UTC m=+0.167508079 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=collectd, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z)
Dec 05 09:03:56 np0005546420.localdomain podman[102408]: 2025-12-05 09:03:56.609284571 +0000 UTC m=+0.179149250 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_id=tripleo_step3, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:03:56 np0005546420.localdomain podman[102407]: 2025-12-05 09:03:56.476986254 +0000 UTC m=+0.055999716 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 05 09:03:56 np0005546420.localdomain podman[102407]: 2025-12-05 09:03:56.662839019 +0000 UTC m=+0.241852541 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, container_name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 09:03:56 np0005546420.localdomain podman[102407]: unhealthy
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:03:56 np0005546420.localdomain podman[102409]: 2025-12-05 09:03:56.71290515 +0000 UTC m=+0.280515919 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 05 09:03:56 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:04:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:04:09 np0005546420.localdomain podman[102489]: 2025-12-05 09:04:09.53836508 +0000 UTC m=+0.110862534 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true)
Dec 05 09:04:09 np0005546420.localdomain podman[102489]: 2025-12-05 09:04:09.731361228 +0000 UTC m=+0.303858702 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:49:46Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 05 09:04:09 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:04:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:04:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:04:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:04:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:04:20 np0005546420.localdomain podman[102518]: 2025-12-05 09:04:20.510542063 +0000 UTC m=+0.089370360 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:04:20 np0005546420.localdomain podman[102518]: 2025-12-05 09:04:20.521390899 +0000 UTC m=+0.100219176 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=logrotate_crond, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 09:04:20 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:04:20 np0005546420.localdomain systemd[1]: tmp-crun.u3mBOm.mount: Deactivated successfully.
Dec 05 09:04:20 np0005546420.localdomain podman[102519]: 2025-12-05 09:04:20.55405243 +0000 UTC m=+0.128028746 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 09:04:20 np0005546420.localdomain podman[102519]: 2025-12-05 09:04:20.610337803 +0000 UTC m=+0.184314139 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 09:04:20 np0005546420.localdomain podman[102521]: 2025-12-05 09:04:20.621025174 +0000 UTC m=+0.188989074 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:04:20 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:04:20 np0005546420.localdomain podman[102521]: 2025-12-05 09:04:20.656464142 +0000 UTC m=+0.224428042 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public)
Dec 05 09:04:20 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:04:20 np0005546420.localdomain podman[102520]: 2025-12-05 09:04:20.674650394 +0000 UTC m=+0.246404572 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, architecture=x86_64, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5)
Dec 05 09:04:20 np0005546420.localdomain podman[102520]: 2025-12-05 09:04:20.73456012 +0000 UTC m=+0.306314278 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com)
Dec 05 09:04:20 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:04:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:04:23 np0005546420.localdomain podman[102614]: 2025-12-05 09:04:23.50268187 +0000 UTC m=+0.081844455 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Dec 05 09:04:23 np0005546420.localdomain podman[102614]: 2025-12-05 09:04:23.859416258 +0000 UTC m=+0.438578823 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-compute-container)
Dec 05 09:04:23 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: tmp-crun.qahpbY.mount: Deactivated successfully.
Dec 05 09:04:27 np0005546420.localdomain podman[102637]: 2025-12-05 09:04:27.528082957 +0000 UTC m=+0.104173897 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, container_name=ovn_controller, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, release=1761123044)
Dec 05 09:04:27 np0005546420.localdomain podman[102638]: 2025-12-05 09:04:27.571018747 +0000 UTC m=+0.143749363 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd)
Dec 05 09:04:27 np0005546420.localdomain podman[102638]: 2025-12-05 09:04:27.6059875 +0000 UTC m=+0.178718186 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible)
Dec 05 09:04:27 np0005546420.localdomain podman[102637]: 2025-12-05 09:04:27.613394529 +0000 UTC m=+0.189485469 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, container_name=ovn_controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:04:27 np0005546420.localdomain podman[102637]: unhealthy
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:04:27 np0005546420.localdomain podman[102645]: 2025-12-05 09:04:27.671755717 +0000 UTC m=+0.235815884 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, vcs-type=git, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z)
Dec 05 09:04:27 np0005546420.localdomain podman[102639]: 2025-12-05 09:04:27.683094828 +0000 UTC m=+0.250880120 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 05 09:04:27 np0005546420.localdomain podman[102645]: 2025-12-05 09:04:27.68865146 +0000 UTC m=+0.252711637 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:04:27 np0005546420.localdomain podman[102639]: 2025-12-05 09:04:27.693469419 +0000 UTC m=+0.261254741 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, release=1761123044, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:04:27 np0005546420.localdomain podman[102645]: unhealthy
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:04:27 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:04:33 np0005546420.localdomain sudo[102716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:04:33 np0005546420.localdomain sudo[102716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:04:33 np0005546420.localdomain sudo[102716]: pam_unix(sudo:session): session closed for user root
Dec 05 09:04:33 np0005546420.localdomain sudo[102731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 09:04:33 np0005546420.localdomain sudo[102731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:04:34 np0005546420.localdomain sudo[102731]: pam_unix(sudo:session): session closed for user root
Dec 05 09:04:34 np0005546420.localdomain sudo[102767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:04:34 np0005546420.localdomain sudo[102767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:04:34 np0005546420.localdomain sudo[102767]: pam_unix(sudo:session): session closed for user root
Dec 05 09:04:34 np0005546420.localdomain sudo[102782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:04:34 np0005546420.localdomain sudo[102782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:04:35 np0005546420.localdomain sudo[102782]: pam_unix(sudo:session): session closed for user root
Dec 05 09:04:35 np0005546420.localdomain sshd[102828]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:04:35 np0005546420.localdomain sshd[102828]: error: kex_exchange_identification: banner line contains invalid characters
Dec 05 09:04:35 np0005546420.localdomain sshd[102828]: banner exchange: Connection from 213.55.83.90 port 35614: invalid format
Dec 05 09:04:35 np0005546420.localdomain sudo[102829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:04:35 np0005546420.localdomain sudo[102829]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:04:35 np0005546420.localdomain sudo[102829]: pam_unix(sudo:session): session closed for user root
Dec 05 09:04:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:04:40 np0005546420.localdomain podman[102844]: 2025-12-05 09:04:40.526159602 +0000 UTC m=+0.098676017 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:49:46Z)
Dec 05 09:04:40 np0005546420.localdomain podman[102844]: 2025-12-05 09:04:40.759499148 +0000 UTC m=+0.332015523 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, release=1761123044, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:04:40 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:04:41 np0005546420.localdomain sshd[102873]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:04:45 np0005546420.localdomain sshd[102873]: Invalid user NL5xUDpV2xRa from 213.55.83.90 port 46562
Dec 05 09:04:45 np0005546420.localdomain sshd[102873]: fatal: userauth_pubkey: parse packet: incomplete message [preauth]
Dec 05 09:04:47 np0005546420.localdomain sshd[102875]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:04:48 np0005546420.localdomain sshd[102875]: Received disconnect from 195.250.72.168 port 36782:11: Bye Bye [preauth]
Dec 05 09:04:48 np0005546420.localdomain sshd[102875]: Disconnected from authenticating user root 195.250.72.168 port 36782 [preauth]
Dec 05 09:04:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:04:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:04:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:04:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:04:51 np0005546420.localdomain podman[102878]: 2025-12-05 09:04:51.541424859 +0000 UTC m=+0.100933507 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=ceilometer_agent_ipmi, architecture=x86_64, batch=17.1_20251118.1)
Dec 05 09:04:51 np0005546420.localdomain podman[102878]: 2025-12-05 09:04:51.575084951 +0000 UTC m=+0.134593609 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 09:04:51 np0005546420.localdomain systemd[1]: tmp-crun.WoTMhX.mount: Deactivated successfully.
Dec 05 09:04:51 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:04:51 np0005546420.localdomain podman[102877]: 2025-12-05 09:04:51.589798406 +0000 UTC m=+0.152378429 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, container_name=logrotate_crond, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true)
Dec 05 09:04:51 np0005546420.localdomain podman[102877]: 2025-12-05 09:04:51.62738241 +0000 UTC m=+0.189962443 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-cron, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 09:04:51 np0005546420.localdomain podman[102879]: 2025-12-05 09:04:51.642480328 +0000 UTC m=+0.197235520 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public)
Dec 05 09:04:51 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:04:51 np0005546420.localdomain podman[102880]: 2025-12-05 09:04:51.697823193 +0000 UTC m=+0.247443246 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, release=1761123044, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public)
Dec 05 09:04:51 np0005546420.localdomain podman[102879]: 2025-12-05 09:04:51.720429483 +0000 UTC m=+0.275184725 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4)
Dec 05 09:04:51 np0005546420.localdomain podman[102880]: 2025-12-05 09:04:51.734802938 +0000 UTC m=+0.284422981 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 09:04:51 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:04:51 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:04:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:04:54 np0005546420.localdomain podman[102976]: 2025-12-05 09:04:54.513366192 +0000 UTC m=+0.088369869 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com)
Dec 05 09:04:54 np0005546420.localdomain podman[102976]: 2025-12-05 09:04:54.896677892 +0000 UTC m=+0.471681579 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=nova_migration_target)
Dec 05 09:04:54 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: tmp-crun.tIxNu4.mount: Deactivated successfully.
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: tmp-crun.qo7CNL.mount: Deactivated successfully.
Dec 05 09:04:58 np0005546420.localdomain podman[102999]: 2025-12-05 09:04:58.508859473 +0000 UTC m=+0.080841545 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, release=1761123044, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:04:58 np0005546420.localdomain podman[103001]: 2025-12-05 09:04:58.572028719 +0000 UTC m=+0.138898172 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Dec 05 09:04:58 np0005546420.localdomain podman[102999]: 2025-12-05 09:04:58.588245941 +0000 UTC m=+0.160228103 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, version=17.1.12)
Dec 05 09:04:58 np0005546420.localdomain podman[103002]: 2025-12-05 09:04:58.538049337 +0000 UTC m=+0.099378859 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, version=17.1.12)
Dec 05 09:04:58 np0005546420.localdomain podman[103001]: 2025-12-05 09:04:58.609554722 +0000 UTC m=+0.176424165 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:04:58 np0005546420.localdomain podman[103000]: 2025-12-05 09:04:58.663322597 +0000 UTC m=+0.233433631 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd)
Dec 05 09:04:58 np0005546420.localdomain podman[103002]: 2025-12-05 09:04:58.667686422 +0000 UTC m=+0.229015934 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.openshift.expose-services=, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc.)
Dec 05 09:04:58 np0005546420.localdomain podman[103002]: unhealthy
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:04:58 np0005546420.localdomain podman[102999]: unhealthy
Dec 05 09:04:58 np0005546420.localdomain podman[103000]: 2025-12-05 09:04:58.700981323 +0000 UTC m=+0.271092367 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:04:58 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:04:59 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 09:04:59 np0005546420.localdomain recover_tripleo_nova_virtqemud[103076]: 62579
Dec 05 09:04:59 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 09:04:59 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 09:04:59 np0005546420.localdomain systemd[1]: tmp-crun.pNoJsm.mount: Deactivated successfully.
Dec 05 09:05:07 np0005546420.localdomain sshd[103077]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:05:08 np0005546420.localdomain sshd[103077]: Received disconnect from 93.157.248.178 port 45370:11: Bye Bye [preauth]
Dec 05 09:05:08 np0005546420.localdomain sshd[103077]: Disconnected from authenticating user root 93.157.248.178 port 45370 [preauth]
Dec 05 09:05:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:05:11 np0005546420.localdomain podman[103079]: 2025-12-05 09:05:11.518364472 +0000 UTC m=+0.094489208 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., container_name=metrics_qdr, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12)
Dec 05 09:05:11 np0005546420.localdomain podman[103079]: 2025-12-05 09:05:11.759150309 +0000 UTC m=+0.335274985 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, managed_by=tripleo_ansible)
Dec 05 09:05:11 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:05:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:05:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:05:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:05:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:05:22 np0005546420.localdomain podman[103109]: 2025-12-05 09:05:22.526712443 +0000 UTC m=+0.090962348 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, release=1761123044, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:05:22 np0005546420.localdomain podman[103107]: 2025-12-05 09:05:22.572262534 +0000 UTC m=+0.143655860 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=logrotate_crond)
Dec 05 09:05:22 np0005546420.localdomain podman[103107]: 2025-12-05 09:05:22.583330087 +0000 UTC m=+0.154723423 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 09:05:22 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:05:22 np0005546420.localdomain systemd[1]: tmp-crun.mkljkV.mount: Deactivated successfully.
Dec 05 09:05:22 np0005546420.localdomain podman[103108]: 2025-12-05 09:05:22.632646484 +0000 UTC m=+0.200560172 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible)
Dec 05 09:05:22 np0005546420.localdomain podman[103109]: 2025-12-05 09:05:22.639503817 +0000 UTC m=+0.203753752 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, version=17.1.12, container_name=nova_compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, release=1761123044, distribution-scope=public)
Dec 05 09:05:22 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:05:22 np0005546420.localdomain podman[103108]: 2025-12-05 09:05:22.663388706 +0000 UTC m=+0.231302444 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4)
Dec 05 09:05:22 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:05:22 np0005546420.localdomain podman[103110]: 2025-12-05 09:05:22.737220343 +0000 UTC m=+0.298202017 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4)
Dec 05 09:05:22 np0005546420.localdomain podman[103110]: 2025-12-05 09:05:22.773392934 +0000 UTC m=+0.334374558 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git)
Dec 05 09:05:22 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:05:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:05:25 np0005546420.localdomain podman[103207]: 2025-12-05 09:05:25.514037532 +0000 UTC m=+0.089391090 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true)
Dec 05 09:05:25 np0005546420.localdomain podman[103207]: 2025-12-05 09:05:25.915534176 +0000 UTC m=+0.490887734 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true)
Dec 05 09:05:25 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: tmp-crun.iKc8OJ.mount: Deactivated successfully.
Dec 05 09:05:29 np0005546420.localdomain podman[103233]: 2025-12-05 09:05:29.561264016 +0000 UTC m=+0.130565245 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:05:29 np0005546420.localdomain podman[103233]: 2025-12-05 09:05:29.580338166 +0000 UTC m=+0.149639415 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z)
Dec 05 09:05:29 np0005546420.localdomain podman[103233]: unhealthy
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:05:29 np0005546420.localdomain podman[103231]: 2025-12-05 09:05:29.666229876 +0000 UTC m=+0.240970024 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, architecture=x86_64)
Dec 05 09:05:29 np0005546420.localdomain podman[103230]: 2025-12-05 09:05:29.532504784 +0000 UTC m=+0.108235142 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:05:29 np0005546420.localdomain podman[103230]: 2025-12-05 09:05:29.71349163 +0000 UTC m=+0.289222038 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com)
Dec 05 09:05:29 np0005546420.localdomain podman[103230]: unhealthy
Dec 05 09:05:29 np0005546420.localdomain podman[103232]: 2025-12-05 09:05:29.726836504 +0000 UTC m=+0.297725773 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vcs-type=git)
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:05:29 np0005546420.localdomain podman[103231]: 2025-12-05 09:05:29.75448438 +0000 UTC m=+0.329224578 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=collectd, vcs-type=git)
Dec 05 09:05:29 np0005546420.localdomain podman[103232]: 2025-12-05 09:05:29.765333145 +0000 UTC m=+0.336222404 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc.)
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:05:29 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:05:35 np0005546420.localdomain sudo[103308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:05:35 np0005546420.localdomain sudo[103308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:05:35 np0005546420.localdomain sudo[103308]: pam_unix(sudo:session): session closed for user root
Dec 05 09:05:35 np0005546420.localdomain sudo[103323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:05:35 np0005546420.localdomain sudo[103323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:05:36 np0005546420.localdomain sudo[103323]: pam_unix(sudo:session): session closed for user root
Dec 05 09:05:37 np0005546420.localdomain sudo[103370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:05:37 np0005546420.localdomain sudo[103370]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:05:37 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 09:05:37 np0005546420.localdomain sudo[103370]: pam_unix(sudo:session): session closed for user root
Dec 05 09:05:37 np0005546420.localdomain recover_tripleo_nova_virtqemud[103386]: 62579
Dec 05 09:05:37 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 09:05:37 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 09:05:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:05:42 np0005546420.localdomain podman[103387]: 2025-12-05 09:05:42.516416394 +0000 UTC m=+0.089871564 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 05 09:05:42 np0005546420.localdomain podman[103387]: 2025-12-05 09:05:42.714382996 +0000 UTC m=+0.287838156 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 09:05:42 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:05:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:05:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:05:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:05:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:05:53 np0005546420.localdomain podman[103415]: 2025-12-05 09:05:53.521553108 +0000 UTC m=+0.092521267 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 05 09:05:53 np0005546420.localdomain podman[103416]: 2025-12-05 09:05:53.573162786 +0000 UTC m=+0.142961628 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=)
Dec 05 09:05:53 np0005546420.localdomain podman[103415]: 2025-12-05 09:05:53.589525503 +0000 UTC m=+0.160493582 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:49:32Z, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, container_name=logrotate_crond, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 05 09:05:53 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:05:53 np0005546420.localdomain podman[103417]: 2025-12-05 09:05:53.676522308 +0000 UTC m=+0.243526804 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-nova-compute-container)
Dec 05 09:05:53 np0005546420.localdomain podman[103417]: 2025-12-05 09:05:53.733724599 +0000 UTC m=+0.300729065 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc.)
Dec 05 09:05:53 np0005546420.localdomain podman[103418]: 2025-12-05 09:05:53.741010765 +0000 UTC m=+0.304781931 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 05 09:05:53 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:05:53 np0005546420.localdomain podman[103416]: 2025-12-05 09:05:53.750028344 +0000 UTC m=+0.319827226 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:05:53 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:05:53 np0005546420.localdomain podman[103418]: 2025-12-05 09:05:53.775626346 +0000 UTC m=+0.339397542 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64)
Dec 05 09:05:53 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:05:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:05:56 np0005546420.localdomain podman[103512]: 2025-12-05 09:05:56.517448232 +0000 UTC m=+0.091977670 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Dec 05 09:05:56 np0005546420.localdomain podman[103512]: 2025-12-05 09:05:56.912661912 +0000 UTC m=+0.487191340 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, release=1761123044)
Dec 05 09:05:56 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: tmp-crun.lEg32l.mount: Deactivated successfully.
Dec 05 09:06:00 np0005546420.localdomain podman[103537]: 2025-12-05 09:06:00.536802684 +0000 UTC m=+0.103234329 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true)
Dec 05 09:06:00 np0005546420.localdomain podman[103537]: 2025-12-05 09:06:00.581115486 +0000 UTC m=+0.147547121 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: tmp-crun.pzPwfe.mount: Deactivated successfully.
Dec 05 09:06:00 np0005546420.localdomain podman[103535]: 2025-12-05 09:06:00.593096677 +0000 UTC m=+0.162780172 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:06:00 np0005546420.localdomain podman[103536]: 2025-12-05 09:06:00.644525299 +0000 UTC m=+0.214913957 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, tcib_managed=true, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.)
Dec 05 09:06:00 np0005546420.localdomain podman[103536]: 2025-12-05 09:06:00.65840962 +0000 UTC m=+0.228798278 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:06:00 np0005546420.localdomain podman[103535]: 2025-12-05 09:06:00.669193754 +0000 UTC m=+0.238877319 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:06:00 np0005546420.localdomain podman[103535]: unhealthy
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:06:00 np0005546420.localdomain podman[103538]: 2025-12-05 09:06:00.735691183 +0000 UTC m=+0.299651871 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:06:00 np0005546420.localdomain podman[103538]: 2025-12-05 09:06:00.780593994 +0000 UTC m=+0.344554692 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=)
Dec 05 09:06:00 np0005546420.localdomain podman[103538]: unhealthy
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:06:00 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:06:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:06:13 np0005546420.localdomain podman[103616]: 2025-12-05 09:06:13.516820788 +0000 UTC m=+0.090148183 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_id=tripleo_step1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:06:13 np0005546420.localdomain podman[103616]: 2025-12-05 09:06:13.718312598 +0000 UTC m=+0.291640033 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:06:13 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:06:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:06:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:06:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:06:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:06:24 np0005546420.localdomain systemd[1]: tmp-crun.XJVqvC.mount: Deactivated successfully.
Dec 05 09:06:24 np0005546420.localdomain podman[103646]: 2025-12-05 09:06:24.524536999 +0000 UTC m=+0.098094609 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, distribution-scope=public)
Dec 05 09:06:24 np0005546420.localdomain podman[103646]: 2025-12-05 09:06:24.556853931 +0000 UTC m=+0.130411561 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, container_name=logrotate_crond, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 05 09:06:24 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:06:24 np0005546420.localdomain podman[103647]: 2025-12-05 09:06:24.570236175 +0000 UTC m=+0.139874733 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:06:24 np0005546420.localdomain podman[103647]: 2025-12-05 09:06:24.603288789 +0000 UTC m=+0.172927357 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 09:06:24 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:06:24 np0005546420.localdomain podman[103648]: 2025-12-05 09:06:24.620365798 +0000 UTC m=+0.185760775 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team)
Dec 05 09:06:24 np0005546420.localdomain podman[103652]: 2025-12-05 09:06:24.681232502 +0000 UTC m=+0.241738937 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, version=17.1.12, release=1761123044, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_compute)
Dec 05 09:06:24 np0005546420.localdomain podman[103648]: 2025-12-05 09:06:24.705406671 +0000 UTC m=+0.270801628 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com)
Dec 05 09:06:24 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:06:24 np0005546420.localdomain podman[103652]: 2025-12-05 09:06:24.740353774 +0000 UTC m=+0.300860189 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, build-date=2025-11-19T00:11:48Z, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:06:24 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
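Note: the podman events above carry each container's full TripleO definition as a `config_data={...}` dict in Python-literal syntax. A minimal sketch (hypothetical helper; assumes no braces inside quoted values, which holds for these lines) to pull that dict out of a journal line and inspect the healthcheck command or volume list:

```python
# Minimal sketch: extract the config_data={...} dict that podman appends to
# these journal events. The dict is valid Python-literal syntax, so
# ast.literal_eval parses it without executing anything. The brace walk
# assumes no '{' or '}' inside quoted values, which holds for these lines.
import ast

def extract_config_data(journal_line: str) -> dict:
    start = journal_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(journal_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(journal_line[start:i + 1])
    raise ValueError("unterminated config_data dict")

# For the ceilometer_agent_compute event above:
#   cfg = extract_config_data(line)
#   cfg["healthcheck"]["test"]  ->  '/openstack/healthcheck'
#   cfg["restart"]              ->  'always'
```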
Dec 05 09:06:27 np0005546420.localdomain sshd[103738]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:06:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:06:27 np0005546420.localdomain sshd[103751]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:06:27 np0005546420.localdomain podman[103740]: 2025-12-05 09:06:27.511285241 +0000 UTC m=+0.087096548 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container)
Dec 05 09:06:27 np0005546420.localdomain podman[103740]: 2025-12-05 09:06:27.883456717 +0000 UTC m=+0.459268044 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 05 09:06:27 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
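Note: each `<64-hex>.service` unit here is a transient systemd unit named after the container ID it health-checks. A small sketch (illustrative only) resolving such a unit name back to its container name with `podman inspect`:

```python
# Sketch: resolve one of the transient <container-id>.service units above to
# its container name via `podman inspect` (the --format Go template is
# standard podman). Error handling omitted for brevity.
import subprocess

def unit_to_container(unit: str) -> str:
    cid = unit.removesuffix(".service")
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.Name}}", cid],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# unit_to_container("a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service")
# -> "nova_migration_target"
```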
Dec 05 09:06:28 np0005546420.localdomain sshd[103738]: Received disconnect from 93.157.248.178 port 41188:11: Bye Bye [preauth]
Dec 05 09:06:28 np0005546420.localdomain sshd[103738]: Disconnected from authenticating user root 93.157.248.178 port 41188 [preauth]
Dec 05 09:06:28 np0005546420.localdomain sshd[103751]: Invalid user default from 91.202.233.33 port 61002
Dec 05 09:06:29 np0005546420.localdomain sshd[103751]: Connection reset by invalid user default 91.202.233.33 port 61002 [preauth]
Dec 05 09:06:29 np0005546420.localdomain sshd[103763]: main: sshd: ssh-rsa algorithm is disabled
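Note: the interleaved sshd lines (root and invalid-user `default` attempts from 93.157.248.178 and 91.202.233.33, all ending at `[preauth]`) are external brute-force noise, unrelated to the deployment itself. A rough sketch for tallying such preauth failures per source IP from a journal export (file name and regex are illustrative, not a hardened parser):

```python
# Sketch: tally sshd [preauth] failures per source IP from a journal export
# (e.g. `journalctl -t sshd > sshd.log`).
import re
from collections import Counter

PREAUTH = re.compile(
    r"sshd\[\d+\]: .*?(?:from|user \S+) (\d{1,3}(?:\.\d{1,3}){3}) port \d+.*\[preauth\]"
)

def count_preauth_failures(path: str) -> Counter:
    hits = Counter()
    with open(path) as fh:
        for line in fh:
            m = PREAUTH.search(line)
            if m:
                hits[m.group(1)] += 1
    return hits

# On this excerpt: count_preauth_failures("sshd.log").most_common()
# -> [('91.202.233.33', 5), ('93.157.248.178', 2)]
```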
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:06:31 np0005546420.localdomain podman[103765]: 2025-12-05 09:06:31.51577059 +0000 UTC m=+0.093406603 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:06:31 np0005546420.localdomain podman[103765]: 2025-12-05 09:06:31.560473525 +0000 UTC m=+0.138109568 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, architecture=x86_64, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12)
Dec 05 09:06:31 np0005546420.localdomain podman[103765]: unhealthy
Dec 05 09:06:31 np0005546420.localdomain podman[103767]: 2025-12-05 09:06:31.573765067 +0000 UTC m=+0.142293998 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z)
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
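Note: each transient unit wraps one `/usr/bin/podman healthcheck run <id>` invocation, so the unhealthy verdict for ovn_controller above surfaces as a unit failure (status=1/FAILURE). A sketch re-running the check by hand and mapping the exit code to the logged verdict:

```python
# Sketch: re-run a container's health check and map the exit status to the
# verdict logged above. `podman healthcheck run` exits 0 when healthy and
# non-zero (1) when the check fails; the ID below is this host's
# ovn_controller, whose configured test is '/openstack/healthcheck 6642'
# (6642 being the OVN southbound DB port).
import subprocess

def recheck(container_id: str) -> str:
    proc = subprocess.run(["podman", "healthcheck", "run", container_id])
    return "healthy" if proc.returncode == 0 else "unhealthy"

# recheck("1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb")
```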
Dec 05 09:06:31 np0005546420.localdomain podman[103767]: 2025-12-05 09:06:31.612324451 +0000 UTC m=+0.180853422 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, architecture=x86_64, build-date=2025-11-18T23:44:13Z)
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: tmp-crun.IpNwhZ.mount: Deactivated successfully.
Dec 05 09:06:31 np0005546420.localdomain sshd[103763]: Connection reset by authenticating user root 91.202.233.33 port 61020 [preauth]
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:06:31 np0005546420.localdomain podman[103766]: 2025-12-05 09:06:31.631847316 +0000 UTC m=+0.206104285 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z)
Dec 05 09:06:31 np0005546420.localdomain podman[103772]: 2025-12-05 09:06:31.671860585 +0000 UTC m=+0.235194265 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:06:31 np0005546420.localdomain podman[103766]: 2025-12-05 09:06:31.696093635 +0000 UTC m=+0.270350564 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, name=rhosp17/openstack-collectd, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4)
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:06:31 np0005546420.localdomain podman[103772]: 2025-12-05 09:06:31.71822635 +0000 UTC m=+0.281560040 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 05 09:06:31 np0005546420.localdomain podman[103772]: unhealthy
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:06:31 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
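Note: in this 09:06:31 round, two of the four checks came back unhealthy (ovn_controller, ovn_metadata_agent) while iscsid and collectd passed. A sketch (naive parsing, for illustration) summarizing the latest `health_status=` verdict per `container_name` from a journal export:

```python
# Sketch: reduce a journal export to the latest health_status verdict per
# container. Field order matches the events above (image=, then name=,
# then health_status=).
import re

EVENT = re.compile(r"container health_status .*?name=([\w-]+), health_status=(\w+)")

def health_summary(path: str) -> dict[str, str]:
    latest: dict[str, str] = {}
    with open(path) as fh:
        for line in fh:
            m = EVENT.search(line)
            if m:
                latest[m.group(1)] = m.group(2)  # later lines overwrite earlier
    return latest

# For the 09:06:31 round: {'ovn_controller': 'unhealthy', 'iscsid': 'healthy',
#                          'collectd': 'healthy', 'ovn_metadata_agent': 'unhealthy'}
```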
Dec 05 09:06:31 np0005546420.localdomain sshd[103845]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:06:34 np0005546420.localdomain sshd[103845]: Connection reset by authenticating user root 91.202.233.33 port 39108 [preauth]
Dec 05 09:06:34 np0005546420.localdomain sshd[103847]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:06:35 np0005546420.localdomain sshd[103847]: Connection reset by authenticating user root 91.202.233.33 port 39112 [preauth]
Dec 05 09:06:36 np0005546420.localdomain sshd[103849]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:06:37 np0005546420.localdomain sudo[103851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:06:37 np0005546420.localdomain sudo[103851]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:06:37 np0005546420.localdomain sudo[103851]: pam_unix(sudo:session): session closed for user root
Dec 05 09:06:37 np0005546420.localdomain sudo[103866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:06:37 np0005546420.localdomain sudo[103866]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:06:38 np0005546420.localdomain podman[103951]: 2025-12-05 09:06:38.151325705 +0000 UTC m=+0.103091834 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=1763362218, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:06:38 np0005546420.localdomain podman[103951]: 2025-12-05 09:06:38.283942352 +0000 UTC m=+0.235708471 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, release=1763362218, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4)
Dec 05 09:06:38 np0005546420.localdomain sudo[103866]: pam_unix(sudo:session): session closed for user root
Dec 05 09:06:38 np0005546420.localdomain sudo[104016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:06:38 np0005546420.localdomain sudo[104016]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:06:38 np0005546420.localdomain sudo[104016]: pam_unix(sudo:session): session closed for user root
Dec 05 09:06:38 np0005546420.localdomain sshd[103849]: Connection reset by authenticating user root 91.202.233.33 port 39122 [preauth]
Dec 05 09:06:38 np0005546420.localdomain sudo[104031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:06:38 np0005546420.localdomain sudo[104031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:06:39 np0005546420.localdomain sudo[104031]: pam_unix(sudo:session): session closed for user root
Dec 05 09:06:40 np0005546420.localdomain sudo[104078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:06:40 np0005546420.localdomain sudo[104078]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:06:40 np0005546420.localdomain sudo[104078]: pam_unix(sudo:session): session closed for user root
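Note: the ceph-admin sudo lines show the cephadm management pattern: each orchestrator operation is one `sudo /bin/python3 /var/lib/ceph/<fsid>/cephadm.<digest> --timeout 895 <subcommand>` call, here `ls` followed by `gather-facts`. A hypothetical wrapper mirroring that pattern (paths and fsid copied from this host; the wrapper itself is not part of cephadm):

```python
# Sketch of the invocation pattern in the sudo lines above. `cephadm ls`
# prints JSON describing the daemons deployed on this host.
import subprocess

FSID = "79feddb1-4bfc-557f-83b9-0d57c9f66c1b"
CEPHADM = f"/var/lib/ceph/{FSID}/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3"

def cephadm(subcommand: str, *args: str, timeout: int = 895) -> str:
    cmd = ["sudo", "/bin/python3", CEPHADM, "--timeout", str(timeout), subcommand, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# cephadm("ls") would list, among others, the crash-np0005546420 daemon whose
# exec events appear above; cephadm("gather-facts") matches the second call.
```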
Dec 05 09:06:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:06:44 np0005546420.localdomain podman[104093]: 2025-12-05 09:06:44.517187738 +0000 UTC m=+0.092310360 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 05 09:06:44 np0005546420.localdomain podman[104093]: 2025-12-05 09:06:44.722361122 +0000 UTC m=+0.297483724 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 09:06:44 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
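Note: the health checks recur per container at a roughly 30 s cadence (e.g. the fd93... and ac5838... checks ran at 09:06:24 and run again at 09:06:55 below). A sketch (illustrative parsing) deriving those intervals from the "Started ... healthcheck run" lines:

```python
# Sketch: compute per-container intervals between 'Started /usr/bin/podman
# healthcheck run <id>' lines. strptime ignores the (absent) year, which is
# fine for same-day deltas.
import re
from datetime import datetime

START = re.compile(
    r"^(\w{3} +\d+ [\d:]+) \S+ systemd\[1\]: Started /usr/bin/podman healthcheck run (\w+)"
)

def check_intervals(path: str) -> dict[str, list[int]]:
    times: dict[str, list[datetime]] = {}
    with open(path) as fh:
        for line in fh:
            m = START.match(line)
            if m:
                ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S")
                times.setdefault(m.group(2), []).append(ts)
    return {cid[:12]: [int((b - a).total_seconds()) for a, b in zip(t, t[1:])]
            for cid, t in times.items()}

# Over a longer capture this yields ~30-31 s gaps per container ID, matching
# the timestamps visible in this excerpt.
```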
Dec 05 09:06:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:06:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:06:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:06:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:06:55 np0005546420.localdomain podman[104124]: 2025-12-05 09:06:55.535066226 +0000 UTC m=+0.104788096 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, tcib_managed=true)
Dec 05 09:06:55 np0005546420.localdomain podman[104123]: 2025-12-05 09:06:55.579174272 +0000 UTC m=+0.150326577 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64)
Dec 05 09:06:55 np0005546420.localdomain podman[104124]: 2025-12-05 09:06:55.624346741 +0000 UTC m=+0.194068621 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4)
Dec 05 09:06:55 np0005546420.localdomain podman[104126]: 2025-12-05 09:06:55.639614753 +0000 UTC m=+0.199893691 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 09:06:55 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:06:55 np0005546420.localdomain podman[104123]: 2025-12-05 09:06:55.642795243 +0000 UTC m=+0.213947498 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 05 09:06:55 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:06:55 np0005546420.localdomain podman[104126]: 2025-12-05 09:06:55.703536483 +0000 UTC m=+0.263815451 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute)
Dec 05 09:06:55 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:06:55 np0005546420.localdomain podman[104125]: 2025-12-05 09:06:55.788064331 +0000 UTC m=+0.351911930 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container)
Dec 05 09:06:55 np0005546420.localdomain podman[104125]: 2025-12-05 09:06:55.848512863 +0000 UTC m=+0.412360462 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, release=1761123044)
Dec 05 09:06:55 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Deactivated successfully.
Dec 05 09:06:56 np0005546420.localdomain systemd[1]: tmp-crun.H7FOOo.mount: Deactivated successfully.
Dec 05 09:06:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:06:58 np0005546420.localdomain systemd[1]: tmp-crun.JdAQUb.mount: Deactivated successfully.
Dec 05 09:06:58 np0005546420.localdomain podman[104224]: 2025-12-05 09:06:58.53035703 +0000 UTC m=+0.106250002 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4)
Dec 05 09:06:58 np0005546420.localdomain podman[104224]: 2025-12-05 09:06:58.931815514 +0000 UTC m=+0.507708446 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.buildah.version=1.41.4)
Dec 05 09:06:58 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:07:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:07:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:07:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:07:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:07:02 np0005546420.localdomain podman[104247]: 2025-12-05 09:07:02.521474827 +0000 UTC m=+0.096245882 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:07:02 np0005546420.localdomain podman[104248]: 2025-12-05 09:07:02.568772722 +0000 UTC m=+0.140835103 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp17/openstack-collectd, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=)
Dec 05 09:07:02 np0005546420.localdomain podman[104248]: 2025-12-05 09:07:02.582369903 +0000 UTC m=+0.154432314 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, tcib_managed=true, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=collectd, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 05 09:07:02 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:07:02 np0005546420.localdomain podman[104247]: 2025-12-05 09:07:02.595843111 +0000 UTC m=+0.170614206 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, architecture=x86_64, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, container_name=ovn_controller)
Dec 05 09:07:02 np0005546420.localdomain podman[104247]: unhealthy
Dec 05 09:07:02 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:07:02 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:07:02 np0005546420.localdomain podman[104249]: 2025-12-05 09:07:02.687188389 +0000 UTC m=+0.253409709 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, container_name=iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 09:07:02 np0005546420.localdomain podman[104249]: 2025-12-05 09:07:02.728260351 +0000 UTC m=+0.294481651 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, release=1761123044, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 05 09:07:02 np0005546420.localdomain podman[104250]: 2025-12-05 09:07:02.736390603 +0000 UTC m=+0.298928959 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 05 09:07:02 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:07:02 np0005546420.localdomain podman[104250]: 2025-12-05 09:07:02.778229249 +0000 UTC m=+0.340767625 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 05 09:07:02 np0005546420.localdomain podman[104250]: unhealthy
Dec 05 09:07:02 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:07:02 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:07:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:07:15 np0005546420.localdomain podman[104325]: 2025-12-05 09:07:15.489621556 +0000 UTC m=+0.073181347 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:46Z, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container)
Dec 05 09:07:15 np0005546420.localdomain podman[104325]: 2025-12-05 09:07:15.664462552 +0000 UTC m=+0.248022373 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=)
Dec 05 09:07:15 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:07:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:07:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:07:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:07:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:07:26 np0005546420.localdomain podman[104353]: 2025-12-05 09:07:26.514600195 +0000 UTC m=+0.085489239 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12)
Dec 05 09:07:26 np0005546420.localdomain podman[104353]: 2025-12-05 09:07:26.548386592 +0000 UTC m=+0.119275636 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, version=17.1.12, container_name=logrotate_crond, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 05 09:07:26 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:07:26 np0005546420.localdomain podman[104355]: 2025-12-05 09:07:26.562384255 +0000 UTC m=+0.127269203 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public)
Dec 05 09:07:26 np0005546420.localdomain podman[104355]: 2025-12-05 09:07:26.608595296 +0000 UTC m=+0.173480264 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 09:07:26 np0005546420.localdomain podman[104355]: unhealthy
Dec 05 09:07:26 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:07:26 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 09:07:26 np0005546420.localdomain podman[104356]: 2025-12-05 09:07:26.635693875 +0000 UTC m=+0.197037233 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 09:07:26 np0005546420.localdomain podman[104356]: 2025-12-05 09:07:26.666065736 +0000 UTC m=+0.227409124 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1)
Dec 05 09:07:26 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:07:26 np0005546420.localdomain podman[104354]: 2025-12-05 09:07:26.689547773 +0000 UTC m=+0.257151435 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 09:07:26 np0005546420.localdomain podman[104354]: 2025-12-05 09:07:26.744423943 +0000 UTC m=+0.312027585 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 09:07:26 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:07:27 np0005546420.localdomain systemd[1]: tmp-crun.aaQdgy.mount: Deactivated successfully.
Dec 05 09:07:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:07:29 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 09:07:29 np0005546420.localdomain recover_tripleo_nova_virtqemud[104456]: 62579
Dec 05 09:07:29 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 09:07:29 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 09:07:29 np0005546420.localdomain podman[104449]: 2025-12-05 09:07:29.522260662 +0000 UTC m=+0.097922224 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_migration_target, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 09:07:29 np0005546420.localdomain podman[104449]: 2025-12-05 09:07:29.892057485 +0000 UTC m=+0.467718977 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, container_name=nova_migration_target, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:07:29 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:07:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:07:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:07:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:07:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:07:33 np0005546420.localdomain podman[104475]: 2025-12-05 09:07:33.514743771 +0000 UTC m=+0.084310893 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team)
Dec 05 09:07:33 np0005546420.localdomain podman[104474]: 2025-12-05 09:07:33.561979393 +0000 UTC m=+0.131041178 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, distribution-scope=public)
Dec 05 09:07:33 np0005546420.localdomain podman[104474]: 2025-12-05 09:07:33.568371432 +0000 UTC m=+0.137433197 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-collectd-container, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 05 09:07:33 np0005546420.localdomain podman[104475]: 2025-12-05 09:07:33.575841523 +0000 UTC m=+0.145408595 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, release=1761123044, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 05 09:07:33 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:07:33 np0005546420.localdomain podman[104476]: 2025-12-05 09:07:33.53861437 +0000 UTC m=+0.102510705 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible)
Dec 05 09:07:33 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:07:33 np0005546420.localdomain podman[104476]: 2025-12-05 09:07:33.623136088 +0000 UTC m=+0.187032433 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent)
Dec 05 09:07:33 np0005546420.localdomain podman[104476]: unhealthy
Dec 05 09:07:33 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:07:33 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:07:33 np0005546420.localdomain podman[104473]: 2025-12-05 09:07:33.671395282 +0000 UTC m=+0.243324776 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, tcib_managed=true)
Dec 05 09:07:33 np0005546420.localdomain podman[104473]: 2025-12-05 09:07:33.712205946 +0000 UTC m=+0.284135440 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Dec 05 09:07:33 np0005546420.localdomain podman[104473]: unhealthy
Dec 05 09:07:33 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:07:33 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:07:34 np0005546420.localdomain systemd[1]: tmp-crun.V72pxi.mount: Deactivated successfully.
Dec 05 09:07:40 np0005546420.localdomain sudo[104549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:07:40 np0005546420.localdomain sudo[104549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:07:40 np0005546420.localdomain sudo[104549]: pam_unix(sudo:session): session closed for user root
Dec 05 09:07:40 np0005546420.localdomain sudo[104564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:07:40 np0005546420.localdomain sudo[104564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:07:40 np0005546420.localdomain sudo[104564]: pam_unix(sudo:session): session closed for user root
Dec 05 09:07:41 np0005546420.localdomain sudo[104611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:07:41 np0005546420.localdomain sudo[104611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:07:41 np0005546420.localdomain sudo[104611]: pam_unix(sudo:session): session closed for user root
Dec 05 09:07:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:07:46 np0005546420.localdomain podman[104626]: 2025-12-05 09:07:46.518767701 +0000 UTC m=+0.089676208 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.expose-services=)
Dec 05 09:07:46 np0005546420.localdomain podman[104626]: 2025-12-05 09:07:46.706693451 +0000 UTC m=+0.277601898 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 09:07:46 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:07:47 np0005546420.localdomain sshd[104656]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:07:48 np0005546420.localdomain sshd[104656]: Received disconnect from 93.157.248.178 port 45302:11: Bye Bye [preauth]
Dec 05 09:07:48 np0005546420.localdomain sshd[104656]: Disconnected from authenticating user root 93.157.248.178 port 45302 [preauth]
Dec 05 09:07:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:07:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:07:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:07:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:07:57 np0005546420.localdomain podman[104660]: 2025-12-05 09:07:57.530506835 +0000 UTC m=+0.100919607 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, config_id=tripleo_step5, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z)
Dec 05 09:07:57 np0005546420.localdomain systemd[1]: tmp-crun.0jby9i.mount: Deactivated successfully.
Dec 05 09:07:57 np0005546420.localdomain podman[104659]: 2025-12-05 09:07:57.588925714 +0000 UTC m=+0.159396567 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi)
Dec 05 09:07:57 np0005546420.localdomain podman[104658]: 2025-12-05 09:07:57.560596837 +0000 UTC m=+0.132653409 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, build-date=2025-11-18T22:49:32Z, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond)
Dec 05 09:07:57 np0005546420.localdomain podman[104661]: 2025-12-05 09:07:57.637645934 +0000 UTC m=+0.200597184 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64)
Dec 05 09:07:57 np0005546420.localdomain podman[104659]: 2025-12-05 09:07:57.641913365 +0000 UTC m=+0.212384218 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public)
Dec 05 09:07:57 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:07:57 np0005546420.localdomain podman[104660]: 2025-12-05 09:07:57.664424663 +0000 UTC m=+0.234837435 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute)
Dec 05 09:07:57 np0005546420.localdomain podman[104660]: unhealthy
Dec 05 09:07:57 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:07:57 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 09:07:57 np0005546420.localdomain podman[104658]: 2025-12-05 09:07:57.690385986 +0000 UTC m=+0.262442528 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:07:57 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:07:57 np0005546420.localdomain podman[104661]: 2025-12-05 09:07:57.748419294 +0000 UTC m=+0.311370534 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 05 09:07:57 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:08:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:08:00 np0005546420.localdomain podman[104752]: 2025-12-05 09:08:00.516202273 +0000 UTC m=+0.091685141 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:08:00 np0005546420.localdomain podman[104752]: 2025-12-05 09:08:00.913456896 +0000 UTC m=+0.488939714 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:08:00 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:08:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:08:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:08:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:08:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:08:04 np0005546420.localdomain podman[104775]: 2025-12-05 09:08:04.507682271 +0000 UTC m=+0.080074932 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true)
Dec 05 09:08:04 np0005546420.localdomain podman[104775]: 2025-12-05 09:08:04.525848753 +0000 UTC m=+0.098241444 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container)
Dec 05 09:08:04 np0005546420.localdomain podman[104775]: unhealthy
Dec 05 09:08:04 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:08:04 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:08:04 np0005546420.localdomain podman[104776]: 2025-12-05 09:08:04.619649038 +0000 UTC m=+0.190925074 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.openshift.expose-services=, url=https://www.redhat.com)
Dec 05 09:08:04 np0005546420.localdomain podman[104776]: 2025-12-05 09:08:04.659812752 +0000 UTC m=+0.231088838 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z)
Dec 05 09:08:04 np0005546420.localdomain podman[104777]: 2025-12-05 09:08:04.671300787 +0000 UTC m=+0.238750354 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 09:08:04 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:08:04 np0005546420.localdomain podman[104777]: 2025-12-05 09:08:04.685400304 +0000 UTC m=+0.252849891 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 09:08:04 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:08:04 np0005546420.localdomain podman[104778]: 2025-12-05 09:08:04.771905403 +0000 UTC m=+0.336323937 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, tcib_managed=true)
Dec 05 09:08:04 np0005546420.localdomain podman[104778]: 2025-12-05 09:08:04.812421629 +0000 UTC m=+0.376840133 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 05 09:08:04 np0005546420.localdomain podman[104778]: unhealthy
Dec 05 09:08:04 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:08:04 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:08:09 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23262 DF PROTO=TCP SPT=42344 DPT=9100 SEQ=2353726157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA168A0000000001030307) 
Dec 05 09:08:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23263 DF PROTO=TCP SPT=42344 DPT=9100 SEQ=2353726157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA1A990000000001030307) 
Dec 05 09:08:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23264 DF PROTO=TCP SPT=42344 DPT=9100 SEQ=2353726157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA22990000000001030307) 
Dec 05 09:08:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23265 DF PROTO=TCP SPT=42344 DPT=9100 SEQ=2353726157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA32590000000001030307) 
Dec 05 09:08:17 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55387 DF PROTO=TCP SPT=46860 DPT=9882 SEQ=3648878773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA33D30000000001030307) 
Dec 05 09:08:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:08:17 np0005546420.localdomain podman[104853]: 2025-12-05 09:08:17.529032477 +0000 UTC m=+0.097245813 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=metrics_qdr)
Dec 05 09:08:17 np0005546420.localdomain podman[104853]: 2025-12-05 09:08:17.741535569 +0000 UTC m=+0.309748915 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, container_name=metrics_qdr, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=)
Dec 05 09:08:17 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:08:18 np0005546420.localdomain sshd[104883]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:08:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55388 DF PROTO=TCP SPT=46860 DPT=9882 SEQ=3648878773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA37D90000000001030307) 
Dec 05 09:08:18 np0005546420.localdomain sshd[104883]: Accepted publickey for zuul from 192.168.122.31 port 43070 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:08:18 np0005546420.localdomain systemd-logind[762]: New session 35 of user zuul.
Dec 05 09:08:18 np0005546420.localdomain systemd[1]: Started Session 35 of User zuul.
Dec 05 09:08:18 np0005546420.localdomain sshd[104883]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:08:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10076 DF PROTO=TCP SPT=46666 DPT=9105 SEQ=1865518524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA3A500000000001030307) 
Dec 05 09:08:19 np0005546420.localdomain sudo[104976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhsgvckyexiorusqlneqfhvxeapkldgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925698.4883697-27-40434837908139/AnsiballZ_stat.py
Dec 05 09:08:19 np0005546420.localdomain sudo[104976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:08:19 np0005546420.localdomain python3.9[104978]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:08:19 np0005546420.localdomain sudo[104976]: pam_unix(sudo:session): session closed for user root
Dec 05 09:08:19 np0005546420.localdomain sudo[105070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swbngljxyiyaevjtkeuvmzipnigisnze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925699.410988-63-269603594579102/AnsiballZ_command.py
Dec 05 09:08:19 np0005546420.localdomain sudo[105070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:08:19 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10077 DF PROTO=TCP SPT=46666 DPT=9105 SEQ=1865518524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA3E5A0000000001030307) 
Dec 05 09:08:20 np0005546420.localdomain python3.9[105072]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"
                                                             _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:08:20 np0005546420.localdomain sudo[105070]: pam_unix(sudo:session): session closed for user root
Dec 05 09:08:20 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55389 DF PROTO=TCP SPT=46860 DPT=9882 SEQ=3648878773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA3FD90000000001030307) 
Dec 05 09:08:20 np0005546420.localdomain sudo[105163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjnbnvlkcddteykhxjdknhfbuwlhxnal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925700.3542988-87-148546976564464/AnsiballZ_stat.py
Dec 05 09:08:20 np0005546420.localdomain sudo[105163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:08:20 np0005546420.localdomain python3.9[105165]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:08:20 np0005546420.localdomain sudo[105163]: pam_unix(sudo:session): session closed for user root
Dec 05 09:08:21 np0005546420.localdomain sudo[105257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-koshfogvrnopytrezaoojvnfbofimipw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925701.1491249-111-232609696818808/AnsiballZ_command.py
Dec 05 09:08:21 np0005546420.localdomain sudo[105257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:08:21 np0005546420.localdomain python3.9[105259]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"
                                                             _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:08:21 np0005546420.localdomain sudo[105257]: pam_unix(sudo:session): session closed for user root
Dec 05 09:08:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10078 DF PROTO=TCP SPT=46666 DPT=9105 SEQ=1865518524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA465A0000000001030307) 
Dec 05 09:08:22 np0005546420.localdomain sudo[105350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vagykfguixhebdcloaejfnhdbzekniuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925701.9372807-138-52987922432571/AnsiballZ_command.py
Dec 05 09:08:22 np0005546420.localdomain sudo[105350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:08:22 np0005546420.localdomain python3.9[105352]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:08:22 np0005546420.localdomain sudo[105350]: pam_unix(sudo:session): session closed for user root
Dec 05 09:08:23 np0005546420.localdomain python3.9[105443]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 05 09:08:23 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=94 DF PROTO=TCP SPT=60702 DPT=9102 SEQ=3883656671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA4C280000000001030307) 
Dec 05 09:08:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55390 DF PROTO=TCP SPT=46860 DPT=9882 SEQ=3648878773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA4F990000000001030307) 
Dec 05 09:08:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=95 DF PROTO=TCP SPT=60702 DPT=9102 SEQ=3883656671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA501A0000000001030307) 
Dec 05 09:08:24 np0005546420.localdomain python3.9[105533]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:08:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23266 DF PROTO=TCP SPT=42344 DPT=9100 SEQ=2353726157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA51D90000000001030307) 
Dec 05 09:08:25 np0005546420.localdomain python3.9[105625]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 05 09:08:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10079 DF PROTO=TCP SPT=46666 DPT=9105 SEQ=1865518524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA56190000000001030307) 
Dec 05 09:08:26 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=96 DF PROTO=TCP SPT=60702 DPT=9102 SEQ=3883656671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA58190000000001030307) 
Dec 05 09:08:26 np0005546420.localdomain python3.9[105715]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:08:27 np0005546420.localdomain python3.9[105763]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:08:28 np0005546420.localdomain sshd[104883]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: session-35.scope: Deactivated successfully.
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: session-35.scope: Consumed 4.868s CPU time.
Dec 05 09:08:28 np0005546420.localdomain systemd-logind[762]: Session 35 logged out. Waiting for processes to exit.
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:08:28 np0005546420.localdomain systemd-logind[762]: Removed session 35.
Dec 05 09:08:28 np0005546420.localdomain podman[105780]: 2025-12-05 09:08:28.454563913 +0000 UTC m=+0.095109157 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:12:45Z)
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: tmp-crun.UjNFML.mount: Deactivated successfully.
Dec 05 09:08:28 np0005546420.localdomain podman[105782]: 2025-12-05 09:08:28.51450439 +0000 UTC m=+0.148917854 container health_status fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1)
Dec 05 09:08:28 np0005546420.localdomain podman[105779]: 2025-12-05 09:08:28.562147825 +0000 UTC m=+0.203723480 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible)
Dec 05 09:08:28 np0005546420.localdomain podman[105779]: 2025-12-05 09:08:28.572290969 +0000 UTC m=+0.213866614 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:08:28 np0005546420.localdomain podman[105781]: 2025-12-05 09:08:28.617506379 +0000 UTC m=+0.257337461 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:08:28 np0005546420.localdomain podman[105782]: 2025-12-05 09:08:28.623563347 +0000 UTC m=+0.257976831 container exec_died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 05 09:08:28 np0005546420.localdomain podman[105780]: 2025-12-05 09:08:28.634722262 +0000 UTC m=+0.275267576 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Deactivated successfully.
Dec 05 09:08:28 np0005546420.localdomain podman[105781]: 2025-12-05 09:08:28.647199709 +0000 UTC m=+0.287030801 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step5, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044)
Dec 05 09:08:28 np0005546420.localdomain podman[105781]: unhealthy
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:08:28 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 09:08:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=97 DF PROTO=TCP SPT=60702 DPT=9102 SEQ=3883656671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA67DA0000000001030307) 
Dec 05 09:08:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:08:31 np0005546420.localdomain podman[105870]: 2025-12-05 09:08:31.498191165 +0000 UTC m=+0.073428376 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 05 09:08:31 np0005546420.localdomain podman[105870]: 2025-12-05 09:08:31.870431734 +0000 UTC m=+0.445668985 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, release=1761123044, io.openshift.expose-services=, architecture=x86_64, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:08:31 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:08:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55391 DF PROTO=TCP SPT=46860 DPT=9882 SEQ=3648878773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA6FDA0000000001030307) 
Dec 05 09:08:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10080 DF PROTO=TCP SPT=46666 DPT=9105 SEQ=1865518524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA75DA0000000001030307) 
Dec 05 09:08:35 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40190 DF PROTO=TCP SPT=33842 DPT=9101 SEQ=1873167131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA7A410000000001030307) 
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:08:35 np0005546420.localdomain podman[105895]: 2025-12-05 09:08:35.502280402 +0000 UTC m=+0.080116513 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container)
Dec 05 09:08:35 np0005546420.localdomain podman[105895]: 2025-12-05 09:08:35.539187445 +0000 UTC m=+0.117023546 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: tmp-crun.gAR95x.mount: Deactivated successfully.
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:08:35 np0005546420.localdomain podman[105893]: 2025-12-05 09:08:35.56322163 +0000 UTC m=+0.143430883 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vendor=Red Hat, Inc., container_name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 09:08:35 np0005546420.localdomain podman[105894]: 2025-12-05 09:08:35.602453175 +0000 UTC m=+0.181026578 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.buildah.version=1.41.4, architecture=x86_64, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 05 09:08:35 np0005546420.localdomain podman[105894]: 2025-12-05 09:08:35.608371618 +0000 UTC m=+0.186945011 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step3, name=rhosp17/openstack-collectd, architecture=x86_64)
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:08:35 np0005546420.localdomain podman[105893]: 2025-12-05 09:08:35.626456247 +0000 UTC m=+0.206665560 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1)
Dec 05 09:08:35 np0005546420.localdomain podman[105893]: unhealthy
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:08:35 np0005546420.localdomain podman[105896]: 2025-12-05 09:08:35.716692662 +0000 UTC m=+0.290160637 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git)
Dec 05 09:08:35 np0005546420.localdomain podman[105896]: 2025-12-05 09:08:35.761495209 +0000 UTC m=+0.334963134 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 09:08:35 np0005546420.localdomain podman[105896]: unhealthy
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:08:35 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:08:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40191 DF PROTO=TCP SPT=33842 DPT=9101 SEQ=1873167131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA7E590000000001030307) 
Dec 05 09:08:38 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40192 DF PROTO=TCP SPT=33842 DPT=9101 SEQ=1873167131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA865A0000000001030307) 
Dec 05 09:08:38 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=98 DF PROTO=TCP SPT=60702 DPT=9102 SEQ=3883656671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA87DA0000000001030307) 
Dec 05 09:08:39 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59174 DF PROTO=TCP SPT=41456 DPT=9100 SEQ=3574123500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA8BBB0000000001030307) 
Dec 05 09:08:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59175 DF PROTO=TCP SPT=41456 DPT=9100 SEQ=3574123500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA8FDA0000000001030307) 
Dec 05 09:08:41 np0005546420.localdomain sudo[105969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:08:41 np0005546420.localdomain sudo[105969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:08:41 np0005546420.localdomain sudo[105969]: pam_unix(sudo:session): session closed for user root
Dec 05 09:08:41 np0005546420.localdomain sudo[105984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:08:41 np0005546420.localdomain sudo[105984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:08:42 np0005546420.localdomain sshd[106014]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:08:42 np0005546420.localdomain sshd[106014]: Accepted publickey for zuul from 192.168.122.30 port 50982 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:08:42 np0005546420.localdomain systemd-logind[762]: New session 36 of user zuul.
Dec 05 09:08:42 np0005546420.localdomain systemd[1]: Started Session 36 of User zuul.
Dec 05 09:08:42 np0005546420.localdomain sshd[106014]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:08:42 np0005546420.localdomain sudo[105984]: pam_unix(sudo:session): session closed for user root
Dec 05 09:08:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59176 DF PROTO=TCP SPT=41456 DPT=9100 SEQ=3574123500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAA97D90000000001030307) 
Dec 05 09:08:43 np0005546420.localdomain sudo[106137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dopyyspjwctkdhhqyesiykpgixvrackv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925722.4503884-24-265845434022508/AnsiballZ_systemd_service.py
Dec 05 09:08:43 np0005546420.localdomain sudo[106113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:08:43 np0005546420.localdomain sudo[106137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:08:43 np0005546420.localdomain sudo[106113]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:08:43 np0005546420.localdomain sudo[106113]: pam_unix(sudo:session): session closed for user root
Dec 05 09:08:43 np0005546420.localdomain python3.9[106140]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:08:43 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:08:43 np0005546420.localdomain systemd-sysv-generator[106169]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:08:43 np0005546420.localdomain systemd-rc-local-generator[106163]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:08:43 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:08:43 np0005546420.localdomain sudo[106137]: pam_unix(sudo:session): session closed for user root
Dec 05 09:08:44 np0005546420.localdomain python3.9[106267]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:08:44 np0005546420.localdomain network[106284]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:08:44 np0005546420.localdomain network[106285]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:08:44 np0005546420.localdomain network[106286]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:08:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:08:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59177 DF PROTO=TCP SPT=41456 DPT=9100 SEQ=3574123500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAAA7990000000001030307) 
Dec 05 09:08:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:08:48 np0005546420.localdomain podman[106363]: 2025-12-05 09:08:48.505240268 +0000 UTC m=+0.078635827 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 09:08:48 np0005546420.localdomain podman[106363]: 2025-12-05 09:08:48.721351851 +0000 UTC m=+0.294747400 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-qdrouterd, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4)
Dec 05 09:08:48 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:08:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7033 DF PROTO=TCP SPT=45680 DPT=9105 SEQ=2910494356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAAAF810000000001030307) 
Dec 05 09:08:50 np0005546420.localdomain python3.9[106513]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:08:50 np0005546420.localdomain network[106530]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:08:50 np0005546420.localdomain network[106531]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:08:50 np0005546420.localdomain network[106532]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:08:50 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 09:08:50 np0005546420.localdomain recover_tripleo_nova_virtqemud[106539]: 62579
Dec 05 09:08:50 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 09:08:50 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 09:08:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7035 DF PROTO=TCP SPT=45680 DPT=9105 SEQ=2910494356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAABB9A0000000001030307) 
Dec 05 09:08:52 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:08:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59178 DF PROTO=TCP SPT=41456 DPT=9100 SEQ=3574123500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAAC7D90000000001030307) 
Dec 05 09:08:56 np0005546420.localdomain sudo[106732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bijgksxrfpplmbzptgegnvfikcgrpqdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925735.972513-114-17169427462452/AnsiballZ_systemd_service.py
Dec 05 09:08:56 np0005546420.localdomain sudo[106732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:08:56 np0005546420.localdomain python3.9[106734]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:08:56 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:08:56 np0005546420.localdomain systemd-rc-local-generator[106757]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:08:56 np0005546420.localdomain systemd-sysv-generator[106763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:08:56 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:08:57 np0005546420.localdomain systemd[1]: Stopping ceilometer_agent_compute container...
Dec 05 09:08:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:08:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:08:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:08:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:08:59 np0005546420.localdomain systemd[1]: tmp-crun.A9piMh.mount: Deactivated successfully.
Dec 05 09:08:59 np0005546420.localdomain podman[106797]: Error: container fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe is not running
Dec 05 09:08:59 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Main process exited, code=exited, status=125/n/a
Dec 05 09:08:59 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Failed with result 'exit-code'.
Dec 05 09:08:59 np0005546420.localdomain podman[106790]: 2025-12-05 09:08:59.345132684 +0000 UTC m=+0.162081231 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Dec 05 09:08:59 np0005546420.localdomain podman[106790]: 2025-12-05 09:08:59.547930024 +0000 UTC m=+0.364878531 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 09:08:59 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:08:59 np0005546420.localdomain podman[106789]: 2025-12-05 09:08:59.458250576 +0000 UTC m=+0.282172149 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 05 09:08:59 np0005546420.localdomain podman[106791]: 2025-12-05 09:08:59.835394777 +0000 UTC m=+0.650478926 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute)
Dec 05 09:08:59 np0005546420.localdomain podman[106791]: 2025-12-05 09:08:59.99111641 +0000 UTC m=+0.806200549 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z)
Dec 05 09:08:59 np0005546420.localdomain podman[106789]: 2025-12-05 09:08:59.991352497 +0000 UTC m=+0.815274100 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git)
Dec 05 09:09:00 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:09:00 np0005546420.localdomain podman[106791]: unhealthy
Dec 05 09:09:00 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:09:00 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 09:09:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57763 DF PROTO=TCP SPT=57000 DPT=9102 SEQ=3353592287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAADD190000000001030307) 
Dec 05 09:09:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:09:02 np0005546420.localdomain podman[106869]: 2025-12-05 09:09:02.086940548 +0000 UTC m=+0.168496639 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 09:09:02 np0005546420.localdomain podman[106869]: 2025-12-05 09:09:02.487470502 +0000 UTC m=+0.569026613 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Dec 05 09:09:02 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:09:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13637 DF PROTO=TCP SPT=34836 DPT=9882 SEQ=3709894364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAAE5D90000000001030307) 
Dec 05 09:09:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7037 DF PROTO=TCP SPT=45680 DPT=9105 SEQ=2910494356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAAEBDA0000000001030307) 
Dec 05 09:09:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:09:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:09:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:09:06 np0005546420.localdomain systemd[1]: tmp-crun.ipxDnO.mount: Deactivated successfully.
Dec 05 09:09:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:09:06 np0005546420.localdomain systemd[1]: tmp-crun.FTIXs4.mount: Deactivated successfully.
Dec 05 09:09:06 np0005546420.localdomain podman[106896]: 2025-12-05 09:09:06.455713619 +0000 UTC m=+0.770787062 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 05 09:09:06 np0005546420.localdomain podman[106932]: 2025-12-05 09:09:06.311462012 +0000 UTC m=+0.229940732 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:09:06 np0005546420.localdomain podman[106895]: 2025-12-05 09:09:06.16611304 +0000 UTC m=+0.486372794 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container)
Dec 05 09:09:06 np0005546420.localdomain podman[106894]: 2025-12-05 09:09:06.069063194 +0000 UTC m=+0.394501649 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, container_name=ovn_controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true)
Dec 05 09:09:06 np0005546420.localdomain podman[106932]: 2025-12-05 09:09:06.544281732 +0000 UTC m=+0.462760452 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 05 09:09:06 np0005546420.localdomain podman[106932]: unhealthy
Dec 05 09:09:06 np0005546420.localdomain podman[106894]: 2025-12-05 09:09:06.558376729 +0000 UTC m=+0.883815194 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 05 09:09:06 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:09:06 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:09:06 np0005546420.localdomain podman[106894]: unhealthy
Dec 05 09:09:06 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:09:06 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:09:06 np0005546420.localdomain podman[106896]: 2025-12-05 09:09:06.581624859 +0000 UTC m=+0.896698292 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-type=git, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Dec 05 09:09:06 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:09:06 np0005546420.localdomain podman[106895]: 2025-12-05 09:09:06.603664941 +0000 UTC m=+0.923924635 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, architecture=x86_64, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:09:06 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:09:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40195 DF PROTO=TCP SPT=33842 DPT=9101 SEQ=1873167131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAAF5DA0000000001030307) 
Dec 05 09:09:09 np0005546420.localdomain sshd[106971]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:09:10 np0005546420.localdomain sshd[106971]: Received disconnect from 93.157.248.178 port 47244:11: Bye Bye [preauth]
Dec 05 09:09:10 np0005546420.localdomain sshd[106971]: Disconnected from authenticating user root 93.157.248.178 port 47244 [preauth]
Dec 05 09:09:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33708 DF PROTO=TCP SPT=41258 DPT=9100 SEQ=996234049 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB04D90000000001030307) 
Dec 05 09:09:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33709 DF PROTO=TCP SPT=41258 DPT=9100 SEQ=996234049 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB0CDA0000000001030307) 
Dec 05 09:09:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33710 DF PROTO=TCP SPT=41258 DPT=9100 SEQ=996234049 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB1C990000000001030307) 
Dec 05 09:09:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24680 DF PROTO=TCP SPT=37956 DPT=9105 SEQ=1297806249 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB24B30000000001030307) 
Dec 05 09:09:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:09:19 np0005546420.localdomain podman[106973]: 2025-12-05 09:09:19.258650271 +0000 UTC m=+0.086202140 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, container_name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, vcs-type=git)
Dec 05 09:09:19 np0005546420.localdomain podman[106973]: 2025-12-05 09:09:19.434033453 +0000 UTC m=+0.261585332 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 09:09:19 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:09:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24682 DF PROTO=TCP SPT=37956 DPT=9105 SEQ=1297806249 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB30D90000000001030307) 
Dec 05 09:09:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57765 DF PROTO=TCP SPT=57000 DPT=9102 SEQ=3353592287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB3DDA0000000001030307) 
Dec 05 09:09:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:09:29 np0005546420.localdomain podman[107004]: Error: container fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe is not running
Dec 05 09:09:29 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Main process exited, code=exited, status=125/n/a
Dec 05 09:09:29 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Failed with result 'exit-code'.
Dec 05 09:09:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:09:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:09:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:09:30 np0005546420.localdomain podman[107017]: 2025-12-05 09:09:30.535249582 +0000 UTC m=+0.102942089 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.buildah.version=1.41.4, container_name=nova_compute)
Dec 05 09:09:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53360 DF PROTO=TCP SPT=34758 DPT=9102 SEQ=2583223522 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB525A0000000001030307) 
Dec 05 09:09:30 np0005546420.localdomain podman[107015]: 2025-12-05 09:09:30.576085277 +0000 UTC m=+0.148495130 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, managed_by=tripleo_ansible)
Dec 05 09:09:30 np0005546420.localdomain podman[107015]: 2025-12-05 09:09:30.583918069 +0000 UTC m=+0.156327892 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 09:09:30 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:09:30 np0005546420.localdomain podman[107017]: 2025-12-05 09:09:30.671498512 +0000 UTC m=+0.239191039 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_id=tripleo_step5, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64)
Dec 05 09:09:30 np0005546420.localdomain podman[107017]: unhealthy
Dec 05 09:09:30 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:09:30 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 09:09:30 np0005546420.localdomain podman[107016]: 2025-12-05 09:09:30.717814616 +0000 UTC m=+0.290441476 container health_status 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 05 09:09:30 np0005546420.localdomain podman[107016]: 2025-12-05 09:09:30.749330472 +0000 UTC m=+0.321957312 container exec_died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, tcib_managed=true)
Dec 05 09:09:30 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Deactivated successfully.
Dec 05 09:09:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20646 DF PROTO=TCP SPT=60558 DPT=9882 SEQ=1814290205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB59D90000000001030307) 
Dec 05 09:09:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:09:33 np0005546420.localdomain podman[107084]: 2025-12-05 09:09:33.253258639 +0000 UTC m=+0.081901477 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:09:33 np0005546420.localdomain podman[107084]: 2025-12-05 09:09:33.622473564 +0000 UTC m=+0.451116352 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team)
Dec 05 09:09:33 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:09:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24684 DF PROTO=TCP SPT=37956 DPT=9105 SEQ=1297806249 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB61D90000000001030307) 
Dec 05 09:09:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:09:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:09:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:09:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:09:37 np0005546420.localdomain podman[107115]: 2025-12-05 09:09:37.04980935 +0000 UTC m=+0.112536876 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team)
Dec 05 09:09:37 np0005546420.localdomain podman[107108]: 2025-12-05 09:09:37.027325114 +0000 UTC m=+0.102926059 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, com.redhat.component=openstack-collectd-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 09:09:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55071 DF PROTO=TCP SPT=59454 DPT=9101 SEQ=534391123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB6BD90000000001030307) 
Dec 05 09:09:37 np0005546420.localdomain podman[107115]: 2025-12-05 09:09:37.094419271 +0000 UTC m=+0.157146807 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.buildah.version=1.41.4)
Dec 05 09:09:37 np0005546420.localdomain podman[107115]: unhealthy
Dec 05 09:09:37 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:09:37 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:09:37 np0005546420.localdomain podman[107107]: 2025-12-05 09:09:37.08207198 +0000 UTC m=+0.159922944 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-11-18T23:34:05Z, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, container_name=ovn_controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible)
Dec 05 09:09:37 np0005546420.localdomain podman[107109]: 2025-12-05 09:09:37.14280573 +0000 UTC m=+0.211074078 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, build-date=2025-11-18T23:44:13Z, container_name=iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, url=https://www.redhat.com)
Dec 05 09:09:37 np0005546420.localdomain podman[107108]: 2025-12-05 09:09:37.162882842 +0000 UTC m=+0.238483817 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:09:37 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:09:37 np0005546420.localdomain podman[107109]: 2025-12-05 09:09:37.181318953 +0000 UTC m=+0.249587291 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, release=1761123044)
Dec 05 09:09:37 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:09:37 np0005546420.localdomain podman[107107]: 2025-12-05 09:09:37.216691869 +0000 UTC m=+0.294542843 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 09:09:37 np0005546420.localdomain podman[107107]: unhealthy
Dec 05 09:09:37 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:09:37 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:09:39 np0005546420.localdomain podman[106775]: time="2025-12-05T09:09:39Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL"
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: libpod-fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.scope: Deactivated successfully.
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: libpod-fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.scope: Consumed 6.113s CPU time.
Dec 05 09:09:39 np0005546420.localdomain podman[106775]: 2025-12-05 09:09:39.172195661 +0000 UTC m=+42.121080036 container died fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, version=17.1.12)
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.timer: Deactivated successfully.
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Failed to open /run/systemd/transient/fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: No such file or directory
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe-userdata-shm.mount: Deactivated successfully.
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f05f78bfd8a8e2cb0bd70f6d604b3d8f88b9a205bde766603d2ab894593606d9-merged.mount: Deactivated successfully.
Dec 05 09:09:39 np0005546420.localdomain podman[106775]: 2025-12-05 09:09:39.23317682 +0000 UTC m=+42.182061165 container cleanup fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=)
Dec 05 09:09:39 np0005546420.localdomain podman[106775]: ceilometer_agent_compute
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.timer: Failed to open /run/systemd/transient/fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.timer: No such file or directory
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Failed to open /run/systemd/transient/fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: No such file or directory
Dec 05 09:09:39 np0005546420.localdomain podman[107185]: 2025-12-05 09:09:39.296980735 +0000 UTC m=+0.117439348 container cleanup fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: libpod-conmon-fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.scope: Deactivated successfully.
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.timer: Failed to open /run/systemd/transient/fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.timer: No such file or directory
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: Failed to open /run/systemd/transient/fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe.service: No such file or directory
Dec 05 09:09:39 np0005546420.localdomain podman[107200]: 2025-12-05 09:09:39.391158132 +0000 UTC m=+0.060448243 container cleanup fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 05 09:09:39 np0005546420.localdomain podman[107200]: ceilometer_agent_compute
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully.
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: Stopped ceilometer_agent_compute container.
Dec 05 09:09:39 np0005546420.localdomain systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.204s CPU time, no IO.
Dec 05 09:09:39 np0005546420.localdomain sudo[106732]: pam_unix(sudo:session): session closed for user root
Dec 05 09:09:39 np0005546420.localdomain sudo[107302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rqcsrdoloyaznpsinowqutjdrnfftcyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925779.5567682-114-45367422423387/AnsiballZ_systemd_service.py
Dec 05 09:09:39 np0005546420.localdomain sudo[107302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:09:39 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:5e:9b:f8 MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=45700 SEQ=3523441829 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 
Dec 05 09:09:40 np0005546420.localdomain python3.9[107304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:09:40 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:09:40 np0005546420.localdomain systemd-sysv-generator[107332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:09:40 np0005546420.localdomain systemd-rc-local-generator[107328]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:09:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:09:40 np0005546420.localdomain systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec 05 09:09:40 np0005546420.localdomain systemd[1]: tmp-crun.D8qwFw.mount: Deactivated successfully.
Dec 05 09:09:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65347 DF PROTO=TCP SPT=52386 DPT=9100 SEQ=1311464196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB82190000000001030307) 
Dec 05 09:09:43 np0005546420.localdomain sudo[107361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:09:43 np0005546420.localdomain sudo[107361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:09:43 np0005546420.localdomain sudo[107361]: pam_unix(sudo:session): session closed for user root
Dec 05 09:09:43 np0005546420.localdomain sudo[107376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:09:43 np0005546420.localdomain sudo[107376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:09:44 np0005546420.localdomain sudo[107376]: pam_unix(sudo:session): session closed for user root
Dec 05 09:09:44 np0005546420.localdomain sudo[107424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:09:44 np0005546420.localdomain sudo[107424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:09:44 np0005546420.localdomain sudo[107424]: pam_unix(sudo:session): session closed for user root
Dec 05 09:09:45 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:5e:9b:f8 MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=45700 SEQ=3523441829 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 
Dec 05 09:09:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20647 DF PROTO=TCP SPT=60558 DPT=9882 SEQ=1814290205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAB99D90000000001030307) 
Dec 05 09:09:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:09:49 np0005546420.localdomain podman[107439]: 2025-12-05 09:09:49.771509824 +0000 UTC m=+0.099003546 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 05 09:09:49 np0005546420.localdomain podman[107439]: 2025-12-05 09:09:49.971461468 +0000 UTC m=+0.298955190 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1)
Dec 05 09:09:49 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:09:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48259 DF PROTO=TCP SPT=43050 DPT=9105 SEQ=3206686403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AABA5D90000000001030307) 
Dec 05 09:09:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53362 DF PROTO=TCP SPT=34758 DPT=9102 SEQ=2583223522 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AABB1D90000000001030307) 
Dec 05 09:10:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40083 DF PROTO=TCP SPT=46718 DPT=9102 SEQ=1266444235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AABC7990000000001030307) 
Dec 05 09:10:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:10:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:10:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:10:01 np0005546420.localdomain systemd[1]: tmp-crun.hq773X.mount: Deactivated successfully.
Dec 05 09:10:01 np0005546420.localdomain podman[107468]: 2025-12-05 09:10:01.11503336 +0000 UTC m=+0.190402988 container health_status 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 09:10:01 np0005546420.localdomain podman[107470]: 2025-12-05 09:10:01.131158039 +0000 UTC m=+0.199885031 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 09:10:01 np0005546420.localdomain podman[107469]: Error: container 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 is not running
Dec 05 09:10:01 np0005546420.localdomain podman[107468]: 2025-12-05 09:10:01.160474348 +0000 UTC m=+0.235844006 container exec_died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 09:10:01 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Main process exited, code=exited, status=125/n/a
Dec 05 09:10:01 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Failed with result 'exit-code'.
Dec 05 09:10:01 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Deactivated successfully.
Dec 05 09:10:01 np0005546420.localdomain podman[107470]: 2025-12-05 09:10:01.205473701 +0000 UTC m=+0.274200703 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute)
Dec 05 09:10:01 np0005546420.localdomain podman[107470]: unhealthy
Dec 05 09:10:01 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:10:01 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 09:10:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53041 DF PROTO=TCP SPT=48940 DPT=9882 SEQ=2624224119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AABCFD90000000001030307) 
Dec 05 09:10:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:10:03 np0005546420.localdomain podman[107524]: 2025-12-05 09:10:03.763063781 +0000 UTC m=+0.091533447 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=)
Dec 05 09:10:04 np0005546420.localdomain podman[107524]: 2025-12-05 09:10:04.136463305 +0000 UTC m=+0.464932981 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.12, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64)
Dec 05 09:10:04 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:10:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48261 DF PROTO=TCP SPT=43050 DPT=9105 SEQ=3206686403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AABD5DA0000000001030307) 
Dec 05 09:10:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12894 DF PROTO=TCP SPT=42508 DPT=9101 SEQ=2981993523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AABDFD90000000001030307) 
Dec 05 09:10:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:10:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:10:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:10:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:10:07 np0005546420.localdomain podman[107548]: 2025-12-05 09:10:07.523034587 +0000 UTC m=+0.095733806 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, container_name=ovn_controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4)
Dec 05 09:10:07 np0005546420.localdomain podman[107550]: 2025-12-05 09:10:07.568241627 +0000 UTC m=+0.135453216 container health_status a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, container_name=iscsid, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 05 09:10:07 np0005546420.localdomain podman[107550]: 2025-12-05 09:10:07.583568842 +0000 UTC m=+0.150780491 container exec_died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, distribution-scope=public, tcib_managed=true, release=1761123044)
Dec 05 09:10:07 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Deactivated successfully.
Dec 05 09:10:07 np0005546420.localdomain podman[107551]: 2025-12-05 09:10:07.632210898 +0000 UTC m=+0.195982991 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git)
Dec 05 09:10:07 np0005546420.localdomain podman[107551]: 2025-12-05 09:10:07.653491408 +0000 UTC m=+0.217263501 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, architecture=x86_64, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:10:07 np0005546420.localdomain podman[107551]: unhealthy
Dec 05 09:10:07 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:10:07 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:10:07 np0005546420.localdomain podman[107549]: 2025-12-05 09:10:07.67457972 +0000 UTC m=+0.243960436 container health_status 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=collectd, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:10:07 np0005546420.localdomain podman[107549]: 2025-12-05 09:10:07.683772665 +0000 UTC m=+0.253153361 container exec_died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 09:10:07 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Deactivated successfully.
Dec 05 09:10:07 np0005546420.localdomain podman[107548]: 2025-12-05 09:10:07.696131778 +0000 UTC m=+0.268830927 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 05 09:10:07 np0005546420.localdomain podman[107548]: unhealthy
Dec 05 09:10:07 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:10:07 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:10:08 np0005546420.localdomain systemd[1]: tmp-crun.aS8M4d.mount: Deactivated successfully.
Dec 05 09:10:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20689 DF PROTO=TCP SPT=36426 DPT=9100 SEQ=3175566358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AABEF590000000001030307) 
Dec 05 09:10:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20690 DF PROTO=TCP SPT=36426 DPT=9100 SEQ=3175566358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AABF7590000000001030307) 
Dec 05 09:10:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20691 DF PROTO=TCP SPT=36426 DPT=9100 SEQ=3175566358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC07190000000001030307) 
Dec 05 09:10:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53766 DF PROTO=TCP SPT=44084 DPT=9105 SEQ=477441436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC0F110000000001030307) 
Dec 05 09:10:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:10:20 np0005546420.localdomain systemd[1]: tmp-crun.9r1L2C.mount: Deactivated successfully.
Dec 05 09:10:20 np0005546420.localdomain podman[107624]: 2025-12-05 09:10:20.297247679 +0000 UTC m=+0.124156276 container health_status 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Dec 05 09:10:20 np0005546420.localdomain podman[107624]: 2025-12-05 09:10:20.504522868 +0000 UTC m=+0.331431445 container exec_died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 05 09:10:20 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Deactivated successfully.
Dec 05 09:10:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53768 DF PROTO=TCP SPT=44084 DPT=9105 SEQ=477441436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC1B190000000001030307) 
Dec 05 09:10:22 np0005546420.localdomain podman[107345]: time="2025-12-05T09:10:22Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL"
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: libpod-1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.scope: Deactivated successfully.
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: libpod-1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.scope: Consumed 6.715s CPU time.
Dec 05 09:10:22 np0005546420.localdomain podman[107345]: 2025-12-05 09:10:22.759823985 +0000 UTC m=+42.123257843 container died 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.timer: Deactivated successfully.
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Failed to open /run/systemd/transient/1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: No such file or directory
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97-userdata-shm.mount: Deactivated successfully.
Dec 05 09:10:22 np0005546420.localdomain podman[107345]: 2025-12-05 09:10:22.812427275 +0000 UTC m=+42.175861053 container cleanup 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 05 09:10:22 np0005546420.localdomain podman[107345]: ceilometer_agent_ipmi
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.timer: Failed to open /run/systemd/transient/1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.timer: No such file or directory
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Failed to open /run/systemd/transient/1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: No such file or directory
Dec 05 09:10:22 np0005546420.localdomain podman[107654]: 2025-12-05 09:10:22.841635879 +0000 UTC m=+0.071551866 container cleanup 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: libpod-conmon-1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.scope: Deactivated successfully.
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.timer: Failed to open /run/systemd/transient/1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.timer: No such file or directory
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: Failed to open /run/systemd/transient/1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97.service: No such file or directory
Dec 05 09:10:22 np0005546420.localdomain podman[107667]: 2025-12-05 09:10:22.922429631 +0000 UTC m=+0.050676940 container cleanup 1bf332b50f59ad9ff9b3fddee0155683a1370ba65fba97190a38442328119b97 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 05 09:10:22 np0005546420.localdomain podman[107667]: ceilometer_agent_ipmi
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 05 09:10:22 np0005546420.localdomain systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 05 09:10:22 np0005546420.localdomain sudo[107302]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:23 np0005546420.localdomain sudo[107769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ogytncdutalsttywtmpuwkuctnzeuupm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925823.0828876-114-6225050599292/AnsiballZ_systemd_service.py
Dec 05 09:10:23 np0005546420.localdomain sudo[107769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:10:23 np0005546420.localdomain python3.9[107771]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:10:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-67c0121ab2e02c08e681d8a85898c08bf802edfec3fbfb45ad79be05f6aa5dc4-merged.mount: Deactivated successfully.
Dec 05 09:10:23 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:10:23 np0005546420.localdomain systemd-rc-local-generator[107795]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:10:23 np0005546420.localdomain systemd-sysv-generator[107798]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:10:23 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:10:24 np0005546420.localdomain systemd[1]: Stopping collectd container...
Dec 05 09:10:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20692 DF PROTO=TCP SPT=36426 DPT=9100 SEQ=3175566358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC27DA0000000001030307) 
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: libpod-63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.scope: Deactivated successfully.
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: libpod-63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.scope: Consumed 2.152s CPU time.
Dec 05 09:10:25 np0005546420.localdomain podman[107812]: 2025-12-05 09:10:25.600311006 +0000 UTC m=+1.500172681 container died 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step3, container_name=collectd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.timer: Deactivated successfully.
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Failed to open /run/systemd/transient/63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: No such file or directory
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: tmp-crun.bCzKYX.mount: Deactivated successfully.
Dec 05 09:10:25 np0005546420.localdomain podman[107812]: 2025-12-05 09:10:25.726949338 +0000 UTC m=+1.626810963 container cleanup 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 05 09:10:25 np0005546420.localdomain podman[107812]: collectd
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.timer: Failed to open /run/systemd/transient/63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.timer: No such file or directory
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Failed to open /run/systemd/transient/63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: No such file or directory
Dec 05 09:10:25 np0005546420.localdomain podman[107824]: 2025-12-05 09:10:25.752247971 +0000 UTC m=+0.141670558 container cleanup 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, batch=17.1_20251118.1)
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: libpod-conmon-63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.scope: Deactivated successfully.
Dec 05 09:10:25 np0005546420.localdomain podman[107855]: error opening file `/run/crun/63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2/status`: No such file or directory
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.timer: Failed to open /run/systemd/transient/63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.timer: No such file or directory
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: Failed to open /run/systemd/transient/63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2.service: No such file or directory
Dec 05 09:10:25 np0005546420.localdomain podman[107843]: 2025-12-05 09:10:25.84970701 +0000 UTC m=+0.065521990 container cleanup 63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, config_id=tripleo_step3, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:10:25 np0005546420.localdomain podman[107843]: collectd
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'.
Dec 05 09:10:25 np0005546420.localdomain systemd[1]: Stopped collectd container.
Dec 05 09:10:25 np0005546420.localdomain sudo[107769]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:26 np0005546420.localdomain sudo[107947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmcctrdqnetlfupirmtoeqomflhqrvvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925826.002532-114-14140293813560/AnsiballZ_systemd_service.py
Dec 05 09:10:26 np0005546420.localdomain sudo[107947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:10:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1d1a749154e63d40b680bc56b84ad99f9346ef73a071954dcf2dda725e125803-merged.mount: Deactivated successfully.
Dec 05 09:10:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63c2259e3902c82b1fc9b9342ea2f5953a2d5b4ede7bfa653ad9136083cdadf2-userdata-shm.mount: Deactivated successfully.
Dec 05 09:10:26 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 09:10:26 np0005546420.localdomain recover_tripleo_nova_virtqemud[107951]: 62579
Dec 05 09:10:26 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 09:10:26 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 09:10:26 np0005546420.localdomain python3.9[107949]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:10:26 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:10:26 np0005546420.localdomain systemd-rc-local-generator[107976]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:10:26 np0005546420.localdomain systemd-sysv-generator[107982]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:10:26 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: Stopping iscsid container...
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: tmp-crun.RR2BCb.mount: Deactivated successfully.
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: libpod-a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.scope: Deactivated successfully.
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: libpod-a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.scope: Consumed 1.177s CPU time.
Dec 05 09:10:27 np0005546420.localdomain podman[107992]: 2025-12-05 09:10:27.226511451 +0000 UTC m=+0.084382765 container died a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.timer: Deactivated successfully.
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Failed to open /run/systemd/transient/a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: No such file or directory
Dec 05 09:10:27 np0005546420.localdomain podman[107992]: 2025-12-05 09:10:27.283183986 +0000 UTC m=+0.141055290 container cleanup a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=iscsid, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 05 09:10:27 np0005546420.localdomain podman[107992]: iscsid
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.timer: Failed to open /run/systemd/transient/a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.timer: No such file or directory
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Failed to open /run/systemd/transient/a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: No such file or directory
Dec 05 09:10:27 np0005546420.localdomain podman[108005]: 2025-12-05 09:10:27.325244758 +0000 UTC m=+0.087965796 container cleanup a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, version=17.1.12, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git)
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: libpod-conmon-a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.scope: Deactivated successfully.
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.timer: Failed to open /run/systemd/transient/a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.timer: No such file or directory
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: Failed to open /run/systemd/transient/a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87.service: No such file or directory
Dec 05 09:10:27 np0005546420.localdomain podman[108019]: 2025-12-05 09:10:27.438942379 +0000 UTC m=+0.073592340 container cleanup a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc.)
Dec 05 09:10:27 np0005546420.localdomain podman[108019]: iscsid
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: tripleo_iscsid.service: Deactivated successfully.
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: Stopped iscsid container.
Dec 05 09:10:27 np0005546420.localdomain sudo[107947]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ab5576283f602b49fd74c99052bb7baa8b8fd55184846126f29133b6a14b7c4f-merged.mount: Deactivated successfully.
Dec 05 09:10:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a13af8b42fe5890b74a738cee422bfc0d68bc4401e264aa1ccb75538a27a0e87-userdata-shm.mount: Deactivated successfully.
Dec 05 09:10:27 np0005546420.localdomain sudo[108119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzskbbtzviwtpmiqzxwtunsdvapkyjak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925827.629327-114-36267712790666/AnsiballZ_systemd_service.py
Dec 05 09:10:27 np0005546420.localdomain sudo[108119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:10:28 np0005546420.localdomain python3.9[108121]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:10:28 np0005546420.localdomain systemd-rc-local-generator[108147]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:10:28 np0005546420.localdomain systemd-sysv-generator[108152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: Stopping logrotate_crond container...
Dec 05 09:10:28 np0005546420.localdomain crond[69423]: (CRON) INFO (Shutting down)
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: libpod-11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.scope: Deactivated successfully.
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: libpod-11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.scope: Consumed 1.105s CPU time.
Dec 05 09:10:28 np0005546420.localdomain podman[108161]: 2025-12-05 09:10:28.672547285 +0000 UTC m=+0.067042668 container died 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:32Z, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: tmp-crun.uCSMpP.mount: Deactivated successfully.
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.timer: Deactivated successfully.
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Failed to open /run/systemd/transient/11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: No such file or directory
Dec 05 09:10:28 np0005546420.localdomain podman[108161]: 2025-12-05 09:10:28.790190828 +0000 UTC m=+0.184686211 container cleanup 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, tcib_managed=true, distribution-scope=public, container_name=logrotate_crond, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron)
Dec 05 09:10:28 np0005546420.localdomain podman[108161]: logrotate_crond
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.timer: Failed to open /run/systemd/transient/11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.timer: No such file or directory
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Failed to open /run/systemd/transient/11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: No such file or directory
Dec 05 09:10:28 np0005546420.localdomain podman[108173]: 2025-12-05 09:10:28.813462229 +0000 UTC m=+0.135433215 container cleanup 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, name=rhosp17/openstack-cron, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: libpod-conmon-11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.scope: Deactivated successfully.
Dec 05 09:10:28 np0005546420.localdomain podman[108205]: error opening file `/run/crun/11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3/status`: No such file or directory
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.timer: Failed to open /run/systemd/transient/11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.timer: No such file or directory
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: Failed to open /run/systemd/transient/11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3.service: No such file or directory
Dec 05 09:10:28 np0005546420.localdomain podman[108192]: 2025-12-05 09:10:28.933017322 +0000 UTC m=+0.084944083 container cleanup 11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 05 09:10:28 np0005546420.localdomain podman[108192]: logrotate_crond
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully.
Dec 05 09:10:28 np0005546420.localdomain systemd[1]: Stopped logrotate_crond container.
Dec 05 09:10:28 np0005546420.localdomain sudo[108119]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:29 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:5e:9b:f8 MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=45722 SEQ=2294507701 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 
Dec 05 09:10:29 np0005546420.localdomain sudo[108297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uovvrzfzoyzskttlwmucawwxqgqorjda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925829.098279-114-102072079147189/AnsiballZ_systemd_service.py
Dec 05 09:10:29 np0005546420.localdomain sudo[108297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:10:29 np0005546420.localdomain python3.9[108299]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:10:29 np0005546420.localdomain systemd[1]: tmp-crun.XpaUhd.mount: Deactivated successfully.
Dec 05 09:10:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-522dca5b0897edc142dfc46111f3114c06dbf23dda84b5305bf810fad13843cc-merged.mount: Deactivated successfully.
Dec 05 09:10:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11e8c164e3e45c87259938ab9810af6a333a33acd25b01d17165dd1d995a75e3-userdata-shm.mount: Deactivated successfully.
Dec 05 09:10:29 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:10:29 np0005546420.localdomain systemd-sysv-generator[108331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:10:29 np0005546420.localdomain systemd-rc-local-generator[108325]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:10:29 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: Stopping metrics_qdr container...
Dec 05 09:10:30 np0005546420.localdomain kernel: qdrouterd[55309]: segfault at 0 ip 00007fcf83b537cb sp 00007ffc799a2640 error 4 in libc.so.6[7fcf83af0000+175000]
Dec 05 09:10:30 np0005546420.localdomain kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: Created slice Slice /system/systemd-coredump.
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: Started Process Core Dump (PID 108353/UID 0).
Dec 05 09:10:30 np0005546420.localdomain sshd[108355]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:10:30 np0005546420.localdomain systemd-coredump[108354]: Resource limits disable core dumping for process 55309 (qdrouterd).
Dec 05 09:10:30 np0005546420.localdomain systemd-coredump[108354]: Process 55309 (qdrouterd) of user 42465 dumped core.
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: systemd-coredump@0-108353-0.service: Deactivated successfully.
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: libpod-89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.scope: Deactivated successfully.
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: libpod-89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.scope: Consumed 29.293s CPU time.
Dec 05 09:10:30 np0005546420.localdomain podman[108340]: 2025-12-05 09:10:30.355298849 +0000 UTC m=+0.253512812 container died 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12)
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.timer: Deactivated successfully.
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Failed to open /run/systemd/transient/89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: No such file or directory
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: tmp-crun.UNG6aC.mount: Deactivated successfully.
Dec 05 09:10:30 np0005546420.localdomain podman[108340]: 2025-12-05 09:10:30.420227751 +0000 UTC m=+0.318441724 container cleanup 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 05 09:10:30 np0005546420.localdomain podman[108340]: metrics_qdr
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.timer: Failed to open /run/systemd/transient/89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.timer: No such file or directory
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Failed to open /run/systemd/transient/89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: No such file or directory
Dec 05 09:10:30 np0005546420.localdomain podman[108360]: 2025-12-05 09:10:30.451714275 +0000 UTC m=+0.079915006 container cleanup 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: libpod-conmon-89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.scope: Deactivated successfully.
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.timer: Failed to open /run/systemd/transient/89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.timer: No such file or directory
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: Failed to open /run/systemd/transient/89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84.service: No such file or directory
Dec 05 09:10:30 np0005546420.localdomain podman[108375]: 2025-12-05 09:10:30.573468346 +0000 UTC m=+0.078541334 container cleanup 89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '5ff3cb86de79e978498bafac8cf0172c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true)
Dec 05 09:10:30 np0005546420.localdomain podman[108375]: metrics_qdr
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'.
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: Stopped metrics_qdr container.
Dec 05 09:10:30 np0005546420.localdomain sudo[108297]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-8ef5a06c835915ebb12133f669566b60e1f53fa40ede7bc1454e6dd2b41cdd2b-merged.mount: Deactivated successfully.
Dec 05 09:10:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89e5d6c09a46b150c701980949a67ce7017a5a140163cace6ce53dee063b6e84-userdata-shm.mount: Deactivated successfully.
Dec 05 09:10:30 np0005546420.localdomain sudo[108477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csvgxvobjwglxhuxyfnsvperlensdspu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925830.7298675-114-203194936894679/AnsiballZ_systemd_service.py
Dec 05 09:10:31 np0005546420.localdomain sudo[108477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:10:31 np0005546420.localdomain sshd[108355]: Received disconnect from 93.157.248.178 port 54904:11: Bye Bye [preauth]
Dec 05 09:10:31 np0005546420.localdomain sshd[108355]: Disconnected from authenticating user root 93.157.248.178 port 54904 [preauth]
Dec 05 09:10:31 np0005546420.localdomain python3.9[108479]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:10:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:10:31 np0005546420.localdomain sudo[108477]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:31 np0005546420.localdomain podman[108481]: 2025-12-05 09:10:31.429617222 +0000 UTC m=+0.092669801 container health_status ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4)
Dec 05 09:10:31 np0005546420.localdomain podman[108481]: 2025-12-05 09:10:31.458686631 +0000 UTC m=+0.121739200 container exec_died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5)
Dec 05 09:10:31 np0005546420.localdomain podman[108481]: unhealthy
Dec 05 09:10:31 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:10:31 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 09:10:31 np0005546420.localdomain sudo[108591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewzmtkbaarkvizjmnfzcotomedezxyhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925831.4885375-114-210252823026203/AnsiballZ_systemd_service.py
Dec 05 09:10:31 np0005546420.localdomain sudo[108591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:10:32 np0005546420.localdomain python3.9[108593]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:10:32 np0005546420.localdomain sudo[108591]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39112 DF PROTO=TCP SPT=51578 DPT=9882 SEQ=2905760731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC43D90000000001030307) 
Dec 05 09:10:33 np0005546420.localdomain sudo[108684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvibijtvbmjkosawjoiopkccctkijbwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925832.2111495-114-50691634708583/AnsiballZ_systemd_service.py
Dec 05 09:10:33 np0005546420.localdomain sudo[108684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:10:33 np0005546420.localdomain python3.9[108686]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:10:33 np0005546420.localdomain sudo[108684]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:33 np0005546420.localdomain sudo[108777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxlgzarykkynkjeomglknspqmzopvsef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925833.5150533-114-42143688458292/AnsiballZ_systemd_service.py
Dec 05 09:10:33 np0005546420.localdomain sudo[108777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:10:34 np0005546420.localdomain python3.9[108779]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:10:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:10:34 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:10:34 np0005546420.localdomain podman[108782]: 2025-12-05 09:10:34.28131872 +0000 UTC m=+0.103664102 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 05 09:10:34 np0005546420.localdomain systemd-rc-local-generator[108820]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:10:34 np0005546420.localdomain systemd-sysv-generator[108823]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:10:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:10:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53770 DF PROTO=TCP SPT=44084 DPT=9105 SEQ=477441436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC4BD90000000001030307) 
Dec 05 09:10:34 np0005546420.localdomain systemd[1]: Stopping nova_compute container...
Dec 05 09:10:34 np0005546420.localdomain systemd[1]: tmp-crun.TIewmX.mount: Deactivated successfully.
Dec 05 09:10:34 np0005546420.localdomain podman[108782]: 2025-12-05 09:10:34.699267494 +0000 UTC m=+0.521612926 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 05 09:10:34 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:10:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57379 DF PROTO=TCP SPT=42402 DPT=9101 SEQ=91911208 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC55D90000000001030307) 
Dec 05 09:10:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:10:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:10:38 np0005546420.localdomain systemd[1]: tmp-crun.o7wv7o.mount: Deactivated successfully.
Dec 05 09:10:38 np0005546420.localdomain podman[108855]: 2025-12-05 09:10:38.261747515 +0000 UTC m=+0.089319527 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Dec 05 09:10:38 np0005546420.localdomain podman[108855]: 2025-12-05 09:10:38.30844136 +0000 UTC m=+0.136013372 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 09:10:38 np0005546420.localdomain podman[108855]: unhealthy
Dec 05 09:10:38 np0005546420.localdomain podman[108856]: 2025-12-05 09:10:38.318732589 +0000 UTC m=+0.145083854 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12, release=1761123044, vendor=Red Hat, Inc.)
Dec 05 09:10:38 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:10:38 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:10:38 np0005546420.localdomain podman[108856]: 2025-12-05 09:10:38.337413458 +0000 UTC m=+0.163764783 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, distribution-scope=public)
Dec 05 09:10:38 np0005546420.localdomain podman[108856]: unhealthy
Dec 05 09:10:38 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:10:38 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:10:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45563 DF PROTO=TCP SPT=47674 DPT=9100 SEQ=3754261770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC64990000000001030307) 
Dec 05 09:10:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45564 DF PROTO=TCP SPT=47674 DPT=9100 SEQ=3754261770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC6C9A0000000001030307) 
Dec 05 09:10:44 np0005546420.localdomain sudo[108897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:10:44 np0005546420.localdomain sudo[108897]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:10:44 np0005546420.localdomain sudo[108897]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:44 np0005546420.localdomain sudo[108912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:10:44 np0005546420.localdomain sudo[108912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:10:45 np0005546420.localdomain sudo[108912]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:46 np0005546420.localdomain sudo[108960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:10:46 np0005546420.localdomain sudo[108960]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:10:46 np0005546420.localdomain sudo[108960]: pam_unix(sudo:session): session closed for user root
Dec 05 09:10:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45565 DF PROTO=TCP SPT=47674 DPT=9100 SEQ=3754261770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC7C5A0000000001030307) 
Dec 05 09:10:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39113 DF PROTO=TCP SPT=51578 DPT=9882 SEQ=2905760731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC83D90000000001030307) 
Dec 05 09:10:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28306 DF PROTO=TCP SPT=51072 DPT=9105 SEQ=1996949086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC90590000000001030307) 
Dec 05 09:10:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45566 DF PROTO=TCP SPT=47674 DPT=9100 SEQ=3754261770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAC9BD90000000001030307) 
Dec 05 09:11:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53841 DF PROTO=TCP SPT=45900 DPT=9102 SEQ=1101893870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AACB1D90000000001030307) 
Dec 05 09:11:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:11:01 np0005546420.localdomain podman[108975]: Error: container ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e is not running
Dec 05 09:11:01 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Main process exited, code=exited, status=125/n/a
Dec 05 09:11:01 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed with result 'exit-code'.
Dec 05 09:11:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18874 DF PROTO=TCP SPT=58880 DPT=9882 SEQ=3195834704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AACB9D90000000001030307) 
Dec 05 09:11:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28308 DF PROTO=TCP SPT=51072 DPT=9105 SEQ=1996949086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AACBFD90000000001030307) 
Dec 05 09:11:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:11:05 np0005546420.localdomain podman[108986]: 2025-12-05 09:11:05.276157351 +0000 UTC m=+0.100481873 container health_status a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 09:11:05 np0005546420.localdomain podman[108986]: 2025-12-05 09:11:05.678596535 +0000 UTC m=+0.502921037 container exec_died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:11:05 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Deactivated successfully.
Dec 05 09:11:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55472 DF PROTO=TCP SPT=47728 DPT=9101 SEQ=3660501305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AACCBDA0000000001030307) 
Dec 05 09:11:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:11:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:11:08 np0005546420.localdomain systemd[1]: tmp-crun.zwTmgH.mount: Deactivated successfully.
Dec 05 09:11:08 np0005546420.localdomain podman[109011]: 2025-12-05 09:11:08.523906684 +0000 UTC m=+0.094825978 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Dec 05 09:11:08 np0005546420.localdomain podman[109010]: 2025-12-05 09:11:08.572262272 +0000 UTC m=+0.147871211 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 09:11:08 np0005546420.localdomain podman[109011]: 2025-12-05 09:11:08.568756373 +0000 UTC m=+0.139675697 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4)
Dec 05 09:11:08 np0005546420.localdomain podman[109011]: unhealthy
Dec 05 09:11:08 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:11:08 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:11:08 np0005546420.localdomain podman[109010]: 2025-12-05 09:11:08.614289424 +0000 UTC m=+0.189898343 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 05 09:11:08 np0005546420.localdomain podman[109010]: unhealthy
Dec 05 09:11:08 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:11:08 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:11:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31237 DF PROTO=TCP SPT=47636 DPT=9100 SEQ=2584707876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AACD99A0000000001030307) 
Dec 05 09:11:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31238 DF PROTO=TCP SPT=47636 DPT=9100 SEQ=2584707876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AACE19A0000000001030307) 
Dec 05 09:11:16 np0005546420.localdomain podman[108843]: time="2025-12-05T09:11:16Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL"
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: tmp-crun.G3okX5.mount: Deactivated successfully.
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: libpod-ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.scope: Deactivated successfully.
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: libpod-ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.scope: Consumed 28.805s CPU time.
Dec 05 09:11:16 np0005546420.localdomain podman[108843]: 2025-12-05 09:11:16.689485794 +0000 UTC m=+42.121906440 container died ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.timer: Deactivated successfully.
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed to open /run/systemd/transient/ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: No such file or directory
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: tmp-crun.YCAF51.mount: Deactivated successfully.
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e16c2ca79882c79a16bfd6ec33c860677688d3a70e4e2506da76095f804b00d2-merged.mount: Deactivated successfully.
Dec 05 09:11:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31239 DF PROTO=TCP SPT=47636 DPT=9100 SEQ=2584707876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AACF15A0000000001030307) 
Dec 05 09:11:16 np0005546420.localdomain podman[108843]: 2025-12-05 09:11:16.815702313 +0000 UTC m=+42.248122909 container cleanup ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=nova_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, io.openshift.expose-services=, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 09:11:16 np0005546420.localdomain podman[108843]: nova_compute
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.timer: Failed to open /run/systemd/transient/ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.timer: No such file or directory
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed to open /run/systemd/transient/ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: No such file or directory
Dec 05 09:11:16 np0005546420.localdomain podman[109052]: 2025-12-05 09:11:16.832265026 +0000 UTC m=+0.129257884 container cleanup ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: libpod-conmon-ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.scope: Deactivated successfully.
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.timer: Failed to open /run/systemd/transient/ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.timer: No such file or directory
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: Failed to open /run/systemd/transient/ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e.service: No such file or directory
Dec 05 09:11:16 np0005546420.localdomain podman[109068]: 2025-12-05 09:11:16.932758748 +0000 UTC m=+0.070463883 container cleanup ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:11:16 np0005546420.localdomain podman[109068]: nova_compute
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: tripleo_nova_compute.service: Deactivated successfully.
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: Stopped nova_compute container.
Dec 05 09:11:16 np0005546420.localdomain systemd[1]: tripleo_nova_compute.service: Consumed 1.182s CPU time, no IO.
Dec 05 09:11:16 np0005546420.localdomain sudo[108777]: pam_unix(sudo:session): session closed for user root
Dec 05 09:11:17 np0005546420.localdomain sudo[109169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uefbaysqfmrscrsbcukmniyhzfkykczj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925877.0994582-114-81834802672567/AnsiballZ_systemd_service.py
Dec 05 09:11:17 np0005546420.localdomain sudo[109169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:11:17 np0005546420.localdomain python3.9[109171]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:11:17 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:11:17 np0005546420.localdomain systemd-sysv-generator[109202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:11:17 np0005546420.localdomain systemd-rc-local-generator[109199]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:11:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: Stopping nova_migration_target container...
Dec 05 09:11:18 np0005546420.localdomain sshd[69768]: Received signal 15; terminating.
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: libpod-a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.scope: Deactivated successfully.
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: libpod-a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.scope: Consumed 34.382s CPU time.
Dec 05 09:11:18 np0005546420.localdomain podman[109211]: 2025-12-05 09:11:18.251104698 +0000 UTC m=+0.080943148 container died a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, tcib_managed=true)
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.timer: Deactivated successfully.
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Failed to open /run/systemd/transient/a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: No such file or directory
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: tmp-crun.XNDIxn.mount: Deactivated successfully.
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3-userdata-shm.mount: Deactivated successfully.
Dec 05 09:11:18 np0005546420.localdomain podman[109211]: 2025-12-05 09:11:18.307839905 +0000 UTC m=+0.137678335 container cleanup a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:11:18 np0005546420.localdomain podman[109211]: nova_migration_target
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.timer: Failed to open /run/systemd/transient/a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.timer: No such file or directory
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Failed to open /run/systemd/transient/a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: No such file or directory
Dec 05 09:11:18 np0005546420.localdomain podman[109224]: 2025-12-05 09:11:18.341533628 +0000 UTC m=+0.077135760 container cleanup a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, version=17.1.12, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: libpod-conmon-a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.scope: Deactivated successfully.
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.timer: Failed to open /run/systemd/transient/a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.timer: No such file or directory
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: Failed to open /run/systemd/transient/a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3.service: No such file or directory
Dec 05 09:11:18 np0005546420.localdomain podman[109240]: 2025-12-05 09:11:18.447179021 +0000 UTC m=+0.074097797 container cleanup a3cb0266962ca0497d51aac252133ef215ff880bb28a5e50883445b193303db3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 05 09:11:18 np0005546420.localdomain podman[109240]: nova_migration_target
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully.
Dec 05 09:11:18 np0005546420.localdomain systemd[1]: Stopped nova_migration_target container.
Dec 05 09:11:18 np0005546420.localdomain sudo[109169]: pam_unix(sudo:session): session closed for user root
Dec 05 09:11:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32301 DF PROTO=TCP SPT=59656 DPT=9105 SEQ=1579824071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AACF9710000000001030307) 
Dec 05 09:11:18 np0005546420.localdomain sudo[109343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjmlqibozobadsksseumywjguznpsyyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925878.619179-114-173640140147514/AnsiballZ_systemd_service.py
Dec 05 09:11:18 np0005546420.localdomain sudo[109343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:11:19 np0005546420.localdomain python3.9[109345]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:11:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-81af632ac7b1bb30b73d3b843d9ead4231843a2eced4d0ef746349ae454b4194-merged.mount: Deactivated successfully.
Dec 05 09:11:19 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:11:19 np0005546420.localdomain systemd-sysv-generator[109379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:11:19 np0005546420.localdomain systemd-rc-local-generator[109375]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:11:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:11:19 np0005546420.localdomain systemd[1]: Stopping nova_virtlogd_wrapper container...
Dec 05 09:11:19 np0005546420.localdomain systemd[1]: tmp-crun.fURswc.mount: Deactivated successfully.
Dec 05 09:11:19 np0005546420.localdomain systemd[1]: libpod-a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a.scope: Deactivated successfully.
Dec 05 09:11:19 np0005546420.localdomain podman[109387]: 2025-12-05 09:11:19.66736745 +0000 UTC m=+0.066696617 container died a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, container_name=nova_virtlogd_wrapper, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:11:19 np0005546420.localdomain podman[109387]: 2025-12-05 09:11:19.712211188 +0000 UTC m=+0.111540335 container cleanup a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtlogd_wrapper)
Dec 05 09:11:19 np0005546420.localdomain podman[109387]: nova_virtlogd_wrapper
Dec 05 09:11:19 np0005546420.localdomain podman[109401]: 2025-12-05 09:11:19.74100478 +0000 UTC m=+0.062918149 container cleanup a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtlogd_wrapper)
Dec 05 09:11:20 np0005546420.localdomain systemd[1]: tmp-crun.SITHnb.mount: Deactivated successfully.
Dec 05 09:11:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5f9b52405571b7dbea88b728550de84377ddb5cebafdc587dadde8e1530aa413-merged.mount: Deactivated successfully.
Dec 05 09:11:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a-userdata-shm.mount: Deactivated successfully.
Dec 05 09:11:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32303 DF PROTO=TCP SPT=59656 DPT=9105 SEQ=1579824071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD05590000000001030307) 
Dec 05 09:11:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53843 DF PROTO=TCP SPT=45900 DPT=9102 SEQ=1101893870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD11D90000000001030307) 
Dec 05 09:11:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14876 DF PROTO=TCP SPT=53872 DPT=9102 SEQ=1154657996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD271A0000000001030307) 
Dec 05 09:11:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11579 DF PROTO=TCP SPT=45156 DPT=9882 SEQ=3549483540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD2FD90000000001030307) 
Dec 05 09:11:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32305 DF PROTO=TCP SPT=59656 DPT=9105 SEQ=1579824071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD35D90000000001030307) 
Dec 05 09:11:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65092 DF PROTO=TCP SPT=52752 DPT=9101 SEQ=1163864973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD3FDA0000000001030307) 
Dec 05 09:11:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:11:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:11:38 np0005546420.localdomain podman[109418]: 2025-12-05 09:11:38.762496379 +0000 UTC m=+0.088628395 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:11:38 np0005546420.localdomain podman[109417]: 2025-12-05 09:11:38.806396489 +0000 UTC m=+0.134649431 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public)
Dec 05 09:11:38 np0005546420.localdomain podman[109417]: 2025-12-05 09:11:38.828297058 +0000 UTC m=+0.156550050 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 09:11:38 np0005546420.localdomain podman[109417]: unhealthy
Dec 05 09:11:38 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:11:38 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:11:38 np0005546420.localdomain podman[109418]: 2025-12-05 09:11:38.881358361 +0000 UTC m=+0.207490387 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 05 09:11:38 np0005546420.localdomain podman[109418]: unhealthy
Dec 05 09:11:38 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:11:38 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:11:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49081 DF PROTO=TCP SPT=40606 DPT=9100 SEQ=1569149216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD4ED90000000001030307) 
Dec 05 09:11:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49082 DF PROTO=TCP SPT=40606 DPT=9100 SEQ=1569149216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD56D90000000001030307) 
Dec 05 09:11:46 np0005546420.localdomain sudo[109454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:11:46 np0005546420.localdomain sudo[109454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:11:46 np0005546420.localdomain sudo[109454]: pam_unix(sudo:session): session closed for user root
Dec 05 09:11:46 np0005546420.localdomain sudo[109469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:11:46 np0005546420.localdomain sudo[109469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:11:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49083 DF PROTO=TCP SPT=40606 DPT=9100 SEQ=1569149216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD669A0000000001030307) 
Dec 05 09:11:47 np0005546420.localdomain sudo[109469]: pam_unix(sudo:session): session closed for user root
Dec 05 09:11:47 np0005546420.localdomain sudo[109515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:11:47 np0005546420.localdomain sudo[109515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:11:47 np0005546420.localdomain sudo[109515]: pam_unix(sudo:session): session closed for user root
Dec 05 09:11:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49981 DF PROTO=TCP SPT=55648 DPT=9105 SEQ=3616199754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD6EA10000000001030307) 
Dec 05 09:11:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49983 DF PROTO=TCP SPT=55648 DPT=9105 SEQ=3616199754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD7A990000000001030307) 
Dec 05 09:11:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14878 DF PROTO=TCP SPT=53872 DPT=9102 SEQ=1154657996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD87D90000000001030307) 
Dec 05 09:12:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27921 DF PROTO=TCP SPT=55138 DPT=9102 SEQ=3603525635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAD9C5A0000000001030307) 
Dec 05 09:12:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22955 DF PROTO=TCP SPT=33940 DPT=9882 SEQ=987274919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AADA3D90000000001030307) 
Dec 05 09:12:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49985 DF PROTO=TCP SPT=55648 DPT=9105 SEQ=3616199754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AADA9D90000000001030307) 
Dec 05 09:12:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:12:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4800.1 total, 600.0 interval
                                                          Cumulative writes: 5715 writes, 25K keys, 5715 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5715 writes, 734 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 09:12:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28564 DF PROTO=TCP SPT=41944 DPT=9101 SEQ=3080571420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AADB5D90000000001030307) 
Dec 05 09:12:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:12:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:12:09 np0005546420.localdomain podman[109530]: 2025-12-05 09:12:09.267993567 +0000 UTC m=+0.091647230 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1761123044)
Dec 05 09:12:09 np0005546420.localdomain podman[109530]: 2025-12-05 09:12:09.311073161 +0000 UTC m=+0.134726844 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, container_name=ovn_controller)
Dec 05 09:12:09 np0005546420.localdomain podman[109530]: unhealthy
Dec 05 09:12:09 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:12:09 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:12:09 np0005546420.localdomain podman[109531]: 2025-12-05 09:12:09.369259543 +0000 UTC m=+0.192120221 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true)
Dec 05 09:12:09 np0005546420.localdomain podman[109531]: 2025-12-05 09:12:09.41368764 +0000 UTC m=+0.236548258 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:12:09 np0005546420.localdomain podman[109531]: unhealthy
Dec 05 09:12:09 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:12:09 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:12:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8987 DF PROTO=TCP SPT=34352 DPT=9100 SEQ=2839297817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AADC4190000000001030307) 
Dec 05 09:12:11 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:12:11 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4800.1 total, 600.0 interval
                                                          Cumulative writes: 4690 writes, 21K keys, 4690 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4690 writes, 584 syncs, 8.03 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 09:12:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8988 DF PROTO=TCP SPT=34352 DPT=9100 SEQ=2839297817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AADCC1A0000000001030307) 
Dec 05 09:12:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8989 DF PROTO=TCP SPT=34352 DPT=9100 SEQ=2839297817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AADDBDA0000000001030307) 
Dec 05 09:12:18 np0005546420.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 05 09:12:18 np0005546420.localdomain recover_tripleo_nova_virtqemud[109568]: 62579
Dec 05 09:12:18 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 05 09:12:18 np0005546420.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 05 09:12:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63175 DF PROTO=TCP SPT=47714 DPT=9105 SEQ=4293052140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AADE3D10000000001030307) 
Dec 05 09:12:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63177 DF PROTO=TCP SPT=47714 DPT=9105 SEQ=4293052140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AADEFDA0000000001030307) 
Dec 05 09:12:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27923 DF PROTO=TCP SPT=55138 DPT=9102 SEQ=3603525635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AADFBDA0000000001030307) 
Dec 05 09:12:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37140 DF PROTO=TCP SPT=46442 DPT=9102 SEQ=1567034683 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE115A0000000001030307) 
Dec 05 09:12:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40114 DF PROTO=TCP SPT=48890 DPT=9882 SEQ=3203669236 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE19D90000000001030307) 
Dec 05 09:12:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63179 DF PROTO=TCP SPT=47714 DPT=9105 SEQ=4293052140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE20600000000001030307) 
Dec 05 09:12:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63727 DF PROTO=TCP SPT=35496 DPT=9101 SEQ=3494302695 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE29D90000000001030307) 
Dec 05 09:12:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:12:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:12:39 np0005546420.localdomain podman[109569]: 2025-12-05 09:12:39.530298422 +0000 UTC m=+0.099557114 container health_status 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 05 09:12:39 np0005546420.localdomain podman[109569]: 2025-12-05 09:12:39.569188406 +0000 UTC m=+0.138447058 container exec_died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible)
Dec 05 09:12:39 np0005546420.localdomain podman[109569]: unhealthy
Dec 05 09:12:39 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:12:39 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed with result 'exit-code'.
Dec 05 09:12:39 np0005546420.localdomain podman[109587]: 2025-12-05 09:12:39.626341447 +0000 UTC m=+0.086116658 container health_status dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:12:39 np0005546420.localdomain podman[109587]: 2025-12-05 09:12:39.641468875 +0000 UTC m=+0.101244026 container exec_died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team)
Dec 05 09:12:39 np0005546420.localdomain podman[109587]: unhealthy
Dec 05 09:12:39 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:12:39 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed with result 'exit-code'.
Dec 05 09:12:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12345 DF PROTO=TCP SPT=60914 DPT=9100 SEQ=765191101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE395A0000000001030307) 
Dec 05 09:12:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12346 DF PROTO=TCP SPT=60914 DPT=9100 SEQ=765191101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE41590000000001030307) 
Dec 05 09:12:43 np0005546420.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing.
Dec 05 09:12:43 np0005546420.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61801 (conmon) with signal SIGKILL.
Dec 05 09:12:43 np0005546420.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL
Dec 05 09:12:43 np0005546420.localdomain systemd[1]: libpod-conmon-a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a.scope: Deactivated successfully.
Dec 05 09:12:43 np0005546420.localdomain podman[109620]: error opening file `/run/crun/a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a/status`: No such file or directory
Dec 05 09:12:43 np0005546420.localdomain podman[109609]: 2025-12-05 09:12:43.986001547 +0000 UTC m=+0.057141090 container cleanup a4d903b900f8618ac8e7bd0b16c8d0647931d912642680e9432ba8d5a2d6dd1a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=1761123044, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']})
Dec 05 09:12:43 np0005546420.localdomain podman[109609]: nova_virtlogd_wrapper
Dec 05 09:12:43 np0005546420.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'.
Dec 05 09:12:43 np0005546420.localdomain systemd[1]: Stopped nova_virtlogd_wrapper container.
Dec 05 09:12:44 np0005546420.localdomain sudo[109343]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:44 np0005546420.localdomain sudo[109711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzlhbeoucqaapndgekmmmxekekgdglkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925964.1557338-114-218369104859645/AnsiballZ_systemd_service.py
Dec 05 09:12:44 np0005546420.localdomain sudo[109711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:44 np0005546420.localdomain python3.9[109713]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:12:44 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:12:44 np0005546420.localdomain systemd-rc-local-generator[109738]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:12:44 np0005546420.localdomain systemd-sysv-generator[109743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:12:44 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:12:45 np0005546420.localdomain systemd[1]: Stopping nova_virtnodedevd container...
Dec 05 09:12:45 np0005546420.localdomain systemd[1]: libpod-845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986.scope: Deactivated successfully.
Dec 05 09:12:45 np0005546420.localdomain systemd[1]: libpod-845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986.scope: Consumed 1.447s CPU time.
Dec 05 09:12:45 np0005546420.localdomain podman[109754]: 2025-12-05 09:12:45.180035857 +0000 UTC m=+0.083860429 container died 845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, container_name=nova_virtnodedevd, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 05 09:12:45 np0005546420.localdomain podman[109754]: 2025-12-05 09:12:45.216170116 +0000 UTC m=+0.119994638 container cleanup 845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, container_name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:12:45 np0005546420.localdomain podman[109754]: nova_virtnodedevd
Dec 05 09:12:45 np0005546420.localdomain podman[109771]: 2025-12-05 09:12:45.253430389 +0000 UTC m=+0.058062768 container cleanup 845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, io.buildah.version=1.41.4, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 05 09:12:45 np0005546420.localdomain systemd[1]: libpod-conmon-845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986.scope: Deactivated successfully.
Dec 05 09:12:45 np0005546420.localdomain podman[109798]: error opening file `/run/crun/845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986/status`: No such file or directory
Dec 05 09:12:45 np0005546420.localdomain podman[109787]: 2025-12-05 09:12:45.33738633 +0000 UTC m=+0.056239053 container cleanup 845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, batch=17.1_20251118.1, io.openshift.expose-services=)
Dec 05 09:12:45 np0005546420.localdomain podman[109787]: nova_virtnodedevd
Dec 05 09:12:45 np0005546420.localdomain systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully.
Dec 05 09:12:45 np0005546420.localdomain systemd[1]: Stopped nova_virtnodedevd container.
Dec 05 09:12:45 np0005546420.localdomain sudo[109711]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:45 np0005546420.localdomain sudo[109890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lihgrwcrqlqadxpmliexnuyhvmyzowqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925965.4753008-114-19469411912300/AnsiballZ_systemd_service.py
Dec 05 09:12:45 np0005546420.localdomain sudo[109890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:46 np0005546420.localdomain python3.9[109892]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:12:46 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:12:46 np0005546420.localdomain systemd-sysv-generator[109921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:12:46 np0005546420.localdomain systemd-rc-local-generator[109917]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:12:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:12:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-fb09081a0f64c6cf9725f53043f5bfef7ea250bf1548c4bcadf49dc8ee839156-merged.mount: Deactivated successfully.
Dec 05 09:12:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-845e5359c29d3aa7e85fb1adfa4d072d1a28f35fcdadf8e94dc53ed4a8323986-userdata-shm.mount: Deactivated successfully.
Dec 05 09:12:46 np0005546420.localdomain systemd[1]: Stopping nova_virtproxyd container...
Dec 05 09:12:46 np0005546420.localdomain systemd[1]: libpod-ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed.scope: Deactivated successfully.
Dec 05 09:12:46 np0005546420.localdomain podman[109933]: 2025-12-05 09:12:46.572722868 +0000 UTC m=+0.067952165 container died ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, container_name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 05 09:12:46 np0005546420.localdomain podman[109933]: 2025-12-05 09:12:46.606768643 +0000 UTC m=+0.101997870 container cleanup ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtproxyd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, release=1761123044, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=)
Dec 05 09:12:46 np0005546420.localdomain podman[109933]: nova_virtproxyd
Dec 05 09:12:46 np0005546420.localdomain podman[109948]: 2025-12-05 09:12:46.646018768 +0000 UTC m=+0.060908886 container cleanup ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, url=https://www.redhat.com, container_name=nova_virtproxyd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12)
Dec 05 09:12:46 np0005546420.localdomain systemd[1]: libpod-conmon-ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed.scope: Deactivated successfully.
Dec 05 09:12:46 np0005546420.localdomain podman[109976]: error opening file `/run/crun/ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed/status`: No such file or directory
Dec 05 09:12:46 np0005546420.localdomain podman[109964]: 2025-12-05 09:12:46.762531707 +0000 UTC m=+0.075128778 container cleanup ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_virtproxyd, build-date=2025-11-19T00:35:22Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, tcib_managed=true, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']})
Dec 05 09:12:46 np0005546420.localdomain podman[109964]: nova_virtproxyd
Dec 05 09:12:46 np0005546420.localdomain systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully.
Dec 05 09:12:46 np0005546420.localdomain systemd[1]: Stopped nova_virtproxyd container.
Dec 05 09:12:46 np0005546420.localdomain sudo[109890]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12347 DF PROTO=TCP SPT=60914 DPT=9100 SEQ=765191101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE51190000000001030307) 
Dec 05 09:12:47 np0005546420.localdomain sudo[110067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-icmeuqopdpppxmkrqrhbyjmnknypbhif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925966.9233594-114-164475595357407/AnsiballZ_systemd_service.py
Dec 05 09:12:47 np0005546420.localdomain sudo[110067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-eb541339826395780260e54eaea5ebe9da0c74cf9b96dae2643192eb4d511174-merged.mount: Deactivated successfully.
Dec 05 09:12:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed-userdata-shm.mount: Deactivated successfully.
Dec 05 09:12:47 np0005546420.localdomain python3.9[110069]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:12:47 np0005546420.localdomain sudo[110072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:12:47 np0005546420.localdomain sudo[110072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:12:47 np0005546420.localdomain sudo[110072]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:48 np0005546420.localdomain sudo[110087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:12:48 np0005546420.localdomain sudo[110087]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:12:48 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:12:48 np0005546420.localdomain sudo[110087]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:48 np0005546420.localdomain systemd-rc-local-generator[110161]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:12:48 np0005546420.localdomain systemd-sysv-generator[110164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:12:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:12:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13916 DF PROTO=TCP SPT=35364 DPT=9105 SEQ=99074273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE59010000000001030307) 
Dec 05 09:12:48 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully.
Dec 05 09:12:48 np0005546420.localdomain systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m.
Dec 05 09:12:48 np0005546420.localdomain systemd[1]: Stopping nova_virtqemud container...
Dec 05 09:12:49 np0005546420.localdomain systemd[1]: tmp-crun.anB3eR.mount: Deactivated successfully.
Dec 05 09:12:49 np0005546420.localdomain systemd[1]: libpod-7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c.scope: Deactivated successfully.
Dec 05 09:12:49 np0005546420.localdomain podman[110172]: 2025-12-05 09:12:49.02124652 +0000 UTC m=+0.089768291 container died 7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, container_name=nova_virtqemud, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 05 09:12:49 np0005546420.localdomain systemd[1]: libpod-7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c.scope: Consumed 2.251s CPU time.
Dec 05 09:12:49 np0005546420.localdomain podman[110172]: 2025-12-05 09:12:49.060020861 +0000 UTC m=+0.128542582 container cleanup 7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 05 09:12:49 np0005546420.localdomain podman[110172]: nova_virtqemud
Dec 05 09:12:49 np0005546420.localdomain podman[110187]: 2025-12-05 09:12:49.10842901 +0000 UTC m=+0.068127801 container cleanup 7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, build-date=2025-11-19T00:35:22Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtqemud, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, config_id=tripleo_step3)
Dec 05 09:12:49 np0005546420.localdomain systemd[1]: libpod-conmon-7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c.scope: Deactivated successfully.
Dec 05 09:12:49 np0005546420.localdomain podman[110214]: error opening file `/run/crun/7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c/status`: No such file or directory
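The crun status error here reads as a benign race rather than a failed teardown: several podman cleanup processes (PIDs 110172, 110187, 110202, 110214 above) handle the same container, and the later ones find the runtime state directory already removed. A quick check that nothing was actually left behind, using the container ID from the log:

    # the crun state dir should be gone once cleanup has finished
    ls /run/crun/7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c 2>/dev/null \
      || echo "runtime state already removed"
    # podman should report the container as exited (or not know it at all)
    podman inspect --format '{{.State.Status}}' nova_virtqemud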
Dec 05 09:12:49 np0005546420.localdomain podman[110202]: 2025-12-05 09:12:49.209366516 +0000 UTC m=+0.070213466 container cleanup 7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, container_name=nova_virtqemud, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 05 09:12:49 np0005546420.localdomain podman[110202]: nova_virtqemud
Dec 05 09:12:49 np0005546420.localdomain systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully.
Dec 05 09:12:49 np0005546420.localdomain systemd[1]: Stopped nova_virtqemud container.
Dec 05 09:12:49 np0005546420.localdomain sudo[110067]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:49 np0005546420.localdomain sudo[110305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afwcldhfmwjlagqxwwtcvchymgnvoqal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925969.3875177-114-160919174550149/AnsiballZ_systemd_service.py
Dec 05 09:12:49 np0005546420.localdomain sudo[110305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:49 np0005546420.localdomain python3.9[110307]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
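The systemd_service invocation above (state=stopped, enabled=False) is Ansible stopping the unit and removing it from the boot sequence in one task. A minimal sketch of the same operation done by hand on the node, assuming a root shell:

    # stop now and disable at boot, mirroring state=stopped + enabled=False
    systemctl disable --now tripleo_nova_virtqemud_recover.service
    # verify; both report non-zero for a disabled, inactive unit
    systemctl is-enabled tripleo_nova_virtqemud_recover.service || true
    systemctl is-active tripleo_nova_virtqemud_recover.service || true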
Dec 05 09:12:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d-merged.mount: Deactivated successfully.
Dec 05 09:12:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7103204b7c5268034aff3a6a96c366ce2591f46fd4ffe6353401bfc589a88b1c-userdata-shm.mount: Deactivated successfully.
Dec 05 09:12:50 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:12:50 np0005546420.localdomain systemd-sysv-generator[110340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:12:50 np0005546420.localdomain systemd-rc-local-generator[110334]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:12:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
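This MemoryLimit= warning is systemd flagging a deprecated cgroup-v1 directive at line 24 of the packaged unit, and it repeats on every Reloading. pass until the unit file itself stops using the directive. A sketch of an interim local fix, assuming the packaged value should simply carry over under the new name:

    # copy the unit into /etc (which takes precedence over /usr/lib)
    # and rename the directive; the actual value is not visible in this log
    systemctl edit --full insights-client-boot.service
    #   in the editor: change 'MemoryLimit=<value>' to 'MemoryMax=<value>'
    systemd-analyze verify /etc/systemd/system/insights-client-boot.service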
Dec 05 09:12:50 np0005546420.localdomain sudo[110305]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:50 np0005546420.localdomain sudo[110381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:12:50 np0005546420.localdomain sudo[110381]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:12:50 np0005546420.localdomain sudo[110381]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:50 np0005546420.localdomain sudo[110450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ulmxezuswhggxjkmzudmnaabwnuqqjix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925970.5022354-114-77175862224470/AnsiballZ_systemd_service.py
Dec 05 09:12:50 np0005546420.localdomain sudo[110450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:51 np0005546420.localdomain python3.9[110452]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:12:51 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:12:51 np0005546420.localdomain systemd-sysv-generator[110484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:12:51 np0005546420.localdomain systemd-rc-local-generator[110481]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:12:51 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:12:51 np0005546420.localdomain systemd[1]: Stopping nova_virtsecretd container...
Dec 05 09:12:51 np0005546420.localdomain systemd[1]: libpod-2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130.scope: Deactivated successfully.
Dec 05 09:12:51 np0005546420.localdomain podman[110492]: 2025-12-05 09:12:51.574352241 +0000 UTC m=+0.085211600 container died 2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, config_id=tripleo_step3, container_name=nova_virtsecretd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']})
Dec 05 09:12:51 np0005546420.localdomain systemd[1]: tmp-crun.cnRb38.mount: Deactivated successfully.
Dec 05 09:12:51 np0005546420.localdomain systemd[1]: tmp-crun.USFZqO.mount: Deactivated successfully.
Dec 05 09:12:51 np0005546420.localdomain podman[110492]: 2025-12-05 09:12:51.621887583 +0000 UTC m=+0.132746912 container cleanup 2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtsecretd, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=)
Dec 05 09:12:51 np0005546420.localdomain podman[110492]: nova_virtsecretd
Dec 05 09:12:51 np0005546420.localdomain podman[110505]: 2025-12-05 09:12:51.670133617 +0000 UTC m=+0.078629486 container cleanup 2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-type=git, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, version=17.1.12, container_name=nova_virtsecretd, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64)
Dec 05 09:12:51 np0005546420.localdomain systemd[1]: libpod-conmon-2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130.scope: Deactivated successfully.
Dec 05 09:12:51 np0005546420.localdomain podman[110536]: error opening file `/run/crun/2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130/status`: No such file or directory
Dec 05 09:12:51 np0005546420.localdomain podman[110525]: 2025-12-05 09:12:51.775626765 +0000 UTC m=+0.071306670 container cleanup 2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, container_name=nova_virtsecretd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 05 09:12:51 np0005546420.localdomain podman[110525]: nova_virtsecretd
Dec 05 09:12:51 np0005546420.localdomain systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully.
Dec 05 09:12:51 np0005546420.localdomain systemd[1]: Stopped nova_virtsecretd container.
Dec 05 09:12:51 np0005546420.localdomain sudo[110450]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13918 DF PROTO=TCP SPT=35364 DPT=9105 SEQ=99074273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE651A0000000001030307) 
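These DROPPING: entries are netfilter log messages for inbound SYNs on br-ex to ports 9105 and 9100 (typical monitoring-exporter ports); the rule producing them is not part of this log. A sketch of a log-then-drop rule that would emit this prefix, assuming an nftables inet filter/input chain; only the prefix and interface are taken from the log:

    # log with the observed prefix, then drop; table, chain and port set
    # are assumptions for illustration
    nft insert rule inet filter input iifname "br-ex" tcp dport { 9100, 9105 } \
      log prefix \"DROPPING: \" drop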
Dec 05 09:12:52 np0005546420.localdomain sudo[110627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxrigkwlrvvncefuudookcqbrvdoeeoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925971.9357536-114-207436535297818/AnsiballZ_systemd_service.py
Dec 05 09:12:52 np0005546420.localdomain sudo[110627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:52 np0005546420.localdomain python3.9[110629]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:12:52 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:12:52 np0005546420.localdomain systemd-sysv-generator[110657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:12:52 np0005546420.localdomain systemd-rc-local-generator[110654]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:12:52 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:12:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b752316a61cbb33d6feb02d4eda12e2e301029b20202d03c944a50658e11130-userdata-shm.mount: Deactivated successfully.
Dec 05 09:12:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a594ca6f65c5dc922c764b7fba6bddaef9e5a11599ecac6b1adff7ab94f7ceb9-merged.mount: Deactivated successfully.
Dec 05 09:12:52 np0005546420.localdomain systemd[1]: Stopping nova_virtstoraged container...
Dec 05 09:12:52 np0005546420.localdomain systemd[1]: tmp-crun.UzX0vc.mount: Deactivated successfully.
Dec 05 09:12:52 np0005546420.localdomain systemd[1]: libpod-3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781.scope: Deactivated successfully.
Dec 05 09:12:52 np0005546420.localdomain podman[110670]: 2025-12-05 09:12:52.929659055 +0000 UTC m=+0.067909474 container died 3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, container_name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z)
Dec 05 09:12:52 np0005546420.localdomain podman[110670]: 2025-12-05 09:12:52.959396586 +0000 UTC m=+0.097646985 container cleanup 3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']})
Dec 05 09:12:52 np0005546420.localdomain podman[110670]: nova_virtstoraged
Dec 05 09:12:52 np0005546420.localdomain podman[110685]: 2025-12-05 09:12:52.990486279 +0000 UTC m=+0.051885218 container cleanup 3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, build-date=2025-11-19T00:35:22Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=nova_virtstoraged, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, vcs-type=git)
Dec 05 09:12:53 np0005546420.localdomain systemd[1]: libpod-conmon-3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781.scope: Deactivated successfully.
Dec 05 09:12:53 np0005546420.localdomain podman[110715]: error opening file `/run/crun/3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781/status`: No such file or directory
Dec 05 09:12:53 np0005546420.localdomain podman[110702]: 2025-12-05 09:12:53.072081846 +0000 UTC m=+0.053379254 container cleanup 3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ac0f5be6f71e6f8c16cd05155c4b5429'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, container_name=nova_virtstoraged, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 05 09:12:53 np0005546420.localdomain podman[110702]: nova_virtstoraged
Dec 05 09:12:53 np0005546420.localdomain systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully.
Dec 05 09:12:53 np0005546420.localdomain systemd[1]: Stopped nova_virtstoraged container.
Dec 05 09:12:53 np0005546420.localdomain sudo[110627]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:53 np0005546420.localdomain sudo[110806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqrqgilrxlwxkrtadzfqkyrngukpwhdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925973.2750306-114-9563376224482/AnsiballZ_systemd_service.py
Dec 05 09:12:53 np0005546420.localdomain sudo[110806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-62dc5ca56cabff6fee2b8a4f6e4dde9258d2fdbc443d9294aabf255694ff62dc-merged.mount: Deactivated successfully.
Dec 05 09:12:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781-userdata-shm.mount: Deactivated successfully.
Dec 05 09:12:53 np0005546420.localdomain python3.9[110808]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:12:54 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:12:54 np0005546420.localdomain systemd-rc-local-generator[110831]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:12:54 np0005546420.localdomain systemd-sysv-generator[110836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:12:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12348 DF PROTO=TCP SPT=60914 DPT=9100 SEQ=765191101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE71D90000000001030307) 
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: Stopping ovn_controller container...
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: libpod-1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.scope: Deactivated successfully.
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: libpod-1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.scope: Consumed 2.674s CPU time.
Dec 05 09:12:55 np0005546420.localdomain podman[110849]: 2025-12-05 09:12:55.312779341 +0000 UTC m=+0.063759876 container died 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.timer: Deactivated successfully.
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed to open /run/systemd/transient/1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: No such file or directory
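ovn_controller was created with a healthcheck (healthcheck test '/openstack/healthcheck 6642' in its config_data), so podman had registered transient .timer/.service units to drive it; those files under /run/systemd/transient vanish with the container, and the later stop/cleanup requests then log these Failed to open messages. They read as noise rather than failures; a short check, where the healthcheck command assumes the container still exists:

    # any leftover transient healthcheck units for this container?
    ls /run/systemd/transient/ | grep 1e838ec5 || echo "already cleaned up"
    # the same probe the timer was running (only valid while the container exists)
    podman healthcheck run 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb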
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb-userdata-shm.mount: Deactivated successfully.
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-407e666a727972fae5871c994186b9ead4079502f92535d717006e20e7650b6a-merged.mount: Deactivated successfully.
Dec 05 09:12:55 np0005546420.localdomain podman[110849]: 2025-12-05 09:12:55.364455472 +0000 UTC m=+0.115435967 container cleanup 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4)
Dec 05 09:12:55 np0005546420.localdomain podman[110849]: ovn_controller
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.timer: Failed to open /run/systemd/transient/1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.timer: No such file or directory
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed to open /run/systemd/transient/1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: No such file or directory
Dec 05 09:12:55 np0005546420.localdomain podman[110862]: 2025-12-05 09:12:55.401058316 +0000 UTC m=+0.072736934 container cleanup 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: libpod-conmon-1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.scope: Deactivated successfully.
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.timer: Failed to open /run/systemd/transient/1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.timer: No such file or directory
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: Failed to open /run/systemd/transient/1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb.service: No such file or directory
Dec 05 09:12:55 np0005546420.localdomain podman[110876]: 2025-12-05 09:12:55.509819144 +0000 UTC m=+0.074079296 container cleanup 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public)
Dec 05 09:12:55 np0005546420.localdomain podman[110876]: ovn_controller
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: tripleo_ovn_controller.service: Deactivated successfully.
Dec 05 09:12:55 np0005546420.localdomain systemd[1]: Stopped ovn_controller container.
Dec 05 09:12:55 np0005546420.localdomain sudo[110806]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:55 np0005546420.localdomain sudo[110977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okypujgcazgzuzwojhaghmvgaqiombxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925975.691094-114-115699376593762/AnsiballZ_systemd_service.py
Dec 05 09:12:55 np0005546420.localdomain sudo[110977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:56 np0005546420.localdomain python3.9[110979]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:12:56 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:12:56 np0005546420.localdomain systemd-sysv-generator[111012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:12:56 np0005546420.localdomain systemd-rc-local-generator[111008]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:12:56 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:12:56 np0005546420.localdomain systemd[1]: Stopping ovn_metadata_agent container...
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: libpod-dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.scope: Deactivated successfully.
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: libpod-dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.scope: Consumed 9.801s CPU time.
Dec 05 09:12:57 np0005546420.localdomain podman[111020]: 2025-12-05 09:12:57.246242031 +0000 UTC m=+0.616558826 container died dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent)
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: tmp-crun.mp7bq1.mount: Deactivated successfully.
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.timer: Deactivated successfully.
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed to open /run/systemd/transient/dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: No such file or directory
Dec 05 09:12:57 np0005546420.localdomain podman[111020]: 2025-12-05 09:12:57.369860669 +0000 UTC m=+0.740177464 container cleanup dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc.)
Dec 05 09:12:57 np0005546420.localdomain podman[111020]: ovn_metadata_agent
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.timer: Failed to open /run/systemd/transient/dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.timer: No such file or directory
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed to open /run/systemd/transient/dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: No such file or directory
Dec 05 09:12:57 np0005546420.localdomain podman[111032]: 2025-12-05 09:12:57.395142183 +0000 UTC m=+0.142362170 container cleanup dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_metadata_agent, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: libpod-conmon-dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.scope: Deactivated successfully.
Dec 05 09:12:57 np0005546420.localdomain podman[111063]: error opening file `/run/crun/dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce/status`: No such file or directory
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.timer: Failed to open /run/systemd/transient/dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.timer: No such file or directory
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: Failed to open /run/systemd/transient/dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce.service: No such file or directory
Dec 05 09:12:57 np0005546420.localdomain podman[111051]: 2025-12-05 09:12:57.519316509 +0000 UTC m=+0.086962484 container cleanup dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, version=17.1.12, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 05 09:12:57 np0005546420.localdomain podman[111051]: ovn_metadata_agent
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully.
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: Stopped ovn_metadata_agent container.
Dec 05 09:12:57 np0005546420.localdomain sudo[110977]: pam_unix(sudo:session): session closed for user root
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-25d567292acb3ce2216020d33f5af2ad32fea36c49bc00cd4399244553285869-merged.mount: Deactivated successfully.
Dec 05 09:12:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce-userdata-shm.mount: Deactivated successfully.
Dec 05 09:12:57 np0005546420.localdomain sudo[111154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkdfmzgywhzqcehfoxmlxgozvfstzgnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764925977.698566-114-57383594347688/AnsiballZ_systemd_service.py
Dec 05 09:12:57 np0005546420.localdomain sudo[111154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:12:58 np0005546420.localdomain python3.9[111156]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:12:59 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:12:59 np0005546420.localdomain systemd-sysv-generator[111186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:12:59 np0005546420.localdomain systemd-rc-local-generator[111182]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:12:59 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:12:59 np0005546420.localdomain sudo[111154]: pam_unix(sudo:session): session closed for user root
Dec 05 09:13:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26868 DF PROTO=TCP SPT=57886 DPT=9102 SEQ=2776083198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE86990000000001030307) 
Dec 05 09:13:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6788 DF PROTO=TCP SPT=45232 DPT=9882 SEQ=3692134672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE8DD90000000001030307) 
Dec 05 09:13:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13920 DF PROTO=TCP SPT=35364 DPT=9105 SEQ=99074273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE95D90000000001030307) 
Dec 05 09:13:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40049 DF PROTO=TCP SPT=35528 DPT=9101 SEQ=2703332300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAE9FD90000000001030307) 
Dec 05 09:13:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38756 DF PROTO=TCP SPT=36510 DPT=9100 SEQ=2534850650 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAEAE590000000001030307) 
Dec 05 09:13:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38757 DF PROTO=TCP SPT=36510 DPT=9100 SEQ=2534850650 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAEB6590000000001030307) 
Dec 05 09:13:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38758 DF PROTO=TCP SPT=36510 DPT=9100 SEQ=2534850650 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAEC6190000000001030307) 
Dec 05 09:13:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6789 DF PROTO=TCP SPT=45232 DPT=9882 SEQ=3692134672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAECDD90000000001030307) 
Dec 05 09:13:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15698 DF PROTO=TCP SPT=34504 DPT=9105 SEQ=471230138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAEDA590000000001030307) 
Dec 05 09:13:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38759 DF PROTO=TCP SPT=36510 DPT=9100 SEQ=2534850650 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAEE5D90000000001030307) 
Dec 05 09:13:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1862 DF PROTO=TCP SPT=45924 DPT=9102 SEQ=4236728137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAEFBD90000000001030307) 
Dec 05 09:13:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63858 DF PROTO=TCP SPT=60926 DPT=9882 SEQ=854996410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF03DA0000000001030307) 
Dec 05 09:13:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15700 DF PROTO=TCP SPT=34504 DPT=9105 SEQ=471230138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF09DA0000000001030307) 
Dec 05 09:13:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50478 DF PROTO=TCP SPT=55104 DPT=9101 SEQ=1192398936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF14790000000001030307) 
Dec 05 09:13:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31685 DF PROTO=TCP SPT=41072 DPT=9100 SEQ=2598438949 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF23990000000001030307) 
Dec 05 09:13:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31686 DF PROTO=TCP SPT=41072 DPT=9100 SEQ=2598438949 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF2B9A0000000001030307) 
Dec 05 09:13:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31687 DF PROTO=TCP SPT=41072 DPT=9100 SEQ=2598438949 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF3B590000000001030307) 
Dec 05 09:13:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1232 DF PROTO=TCP SPT=59984 DPT=9105 SEQ=1146924074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF43610000000001030307) 
Dec 05 09:13:50 np0005546420.localdomain sudo[111208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:13:50 np0005546420.localdomain sudo[111208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:13:50 np0005546420.localdomain sudo[111208]: pam_unix(sudo:session): session closed for user root
Dec 05 09:13:50 np0005546420.localdomain sudo[111223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:13:50 np0005546420.localdomain sudo[111223]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:13:51 np0005546420.localdomain sudo[111223]: pam_unix(sudo:session): session closed for user root
Dec 05 09:13:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1234 DF PROTO=TCP SPT=59984 DPT=9105 SEQ=1146924074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF4F590000000001030307) 
Dec 05 09:13:52 np0005546420.localdomain sudo[111269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:13:52 np0005546420.localdomain sudo[111269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:13:52 np0005546420.localdomain sudo[111269]: pam_unix(sudo:session): session closed for user root
Dec 05 09:13:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1864 DF PROTO=TCP SPT=45924 DPT=9102 SEQ=4236728137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF5BD90000000001030307) 
Dec 05 09:14:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51572 DF PROTO=TCP SPT=49418 DPT=9102 SEQ=282783331 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF71190000000001030307) 
Dec 05 09:14:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29380 DF PROTO=TCP SPT=59006 DPT=9882 SEQ=1310834600 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF79D90000000001030307) 
Dec 05 09:14:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1236 DF PROTO=TCP SPT=59984 DPT=9105 SEQ=1146924074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF7FDA0000000001030307) 
Dec 05 09:14:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50482 DF PROTO=TCP SPT=55104 DPT=9101 SEQ=1192398936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF8BD90000000001030307) 
Dec 05 09:14:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12456 DF PROTO=TCP SPT=33400 DPT=9100 SEQ=2347375212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAF98D90000000001030307) 
Dec 05 09:14:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12457 DF PROTO=TCP SPT=33400 DPT=9100 SEQ=2347375212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAFA0DA0000000001030307) 
Dec 05 09:14:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12458 DF PROTO=TCP SPT=33400 DPT=9100 SEQ=2347375212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAFB09A0000000001030307) 
Dec 05 09:14:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47473 DF PROTO=TCP SPT=38682 DPT=9105 SEQ=3538064255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAFB8910000000001030307) 
Dec 05 09:14:19 np0005546420.localdomain sshd[106024]: Received disconnect from 192.168.122.30 port 50982:11: disconnected by user
Dec 05 09:14:19 np0005546420.localdomain sshd[106024]: Disconnected from user zuul 192.168.122.30 port 50982
Dec 05 09:14:19 np0005546420.localdomain sshd[106014]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:14:19 np0005546420.localdomain systemd[1]: session-36.scope: Deactivated successfully.
Dec 05 09:14:19 np0005546420.localdomain systemd[1]: session-36.scope: Consumed 19.242s CPU time.
Dec 05 09:14:19 np0005546420.localdomain systemd-logind[762]: Session 36 logged out. Waiting for processes to exit.
Dec 05 09:14:19 np0005546420.localdomain systemd-logind[762]: Removed session 36.
Dec 05 09:14:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47475 DF PROTO=TCP SPT=38682 DPT=9105 SEQ=3538064255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAFC4990000000001030307) 
Dec 05 09:14:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12459 DF PROTO=TCP SPT=33400 DPT=9100 SEQ=2347375212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAFD1D90000000001030307) 
Dec 05 09:14:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11311 DF PROTO=TCP SPT=60360 DPT=9102 SEQ=3741197699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAFE6190000000001030307) 
Dec 05 09:14:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59972 DF PROTO=TCP SPT=36306 DPT=9882 SEQ=4135786338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAFEDD90000000001030307) 
Dec 05 09:14:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47477 DF PROTO=TCP SPT=38682 DPT=9105 SEQ=3538064255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAFF3D90000000001030307) 
Dec 05 09:14:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35797 DF PROTO=TCP SPT=47864 DPT=9101 SEQ=2334840786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AAFFFD90000000001030307) 
Dec 05 09:14:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6338 DF PROTO=TCP SPT=52940 DPT=9100 SEQ=3043669772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB00E190000000001030307) 
Dec 05 09:14:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6339 DF PROTO=TCP SPT=52940 DPT=9100 SEQ=3043669772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB016190000000001030307) 
Dec 05 09:14:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6340 DF PROTO=TCP SPT=52940 DPT=9100 SEQ=3043669772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB025D90000000001030307) 
Dec 05 09:14:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12755 DF PROTO=TCP SPT=40906 DPT=9105 SEQ=2508957098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB02DC00000000001030307) 
Dec 05 09:14:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12757 DF PROTO=TCP SPT=40906 DPT=9105 SEQ=2508957098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB039DA0000000001030307) 
Dec 05 09:14:52 np0005546420.localdomain sudo[111284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:14:52 np0005546420.localdomain sudo[111284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:14:52 np0005546420.localdomain sudo[111284]: pam_unix(sudo:session): session closed for user root
Dec 05 09:14:52 np0005546420.localdomain sudo[111299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 09:14:52 np0005546420.localdomain sudo[111299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:14:53 np0005546420.localdomain sudo[111299]: pam_unix(sudo:session): session closed for user root
Dec 05 09:14:53 np0005546420.localdomain sudo[111334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:14:53 np0005546420.localdomain sudo[111334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:14:53 np0005546420.localdomain sudo[111334]: pam_unix(sudo:session): session closed for user root
Dec 05 09:14:53 np0005546420.localdomain sudo[111349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:14:53 np0005546420.localdomain sudo[111349]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:14:53 np0005546420.localdomain sudo[111349]: pam_unix(sudo:session): session closed for user root
Dec 05 09:14:54 np0005546420.localdomain sudo[111395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:14:54 np0005546420.localdomain sudo[111395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:14:54 np0005546420.localdomain sudo[111395]: pam_unix(sudo:session): session closed for user root
Dec 05 09:14:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11313 DF PROTO=TCP SPT=60360 DPT=9102 SEQ=3741197699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB045D90000000001030307) 
Dec 05 09:15:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27220 DF PROTO=TCP SPT=33420 DPT=9102 SEQ=2250808907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB05B590000000001030307) 
Dec 05 09:15:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43141 DF PROTO=TCP SPT=37098 DPT=9882 SEQ=2527234603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB063DA0000000001030307) 
Dec 05 09:15:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12759 DF PROTO=TCP SPT=40906 DPT=9105 SEQ=2508957098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB069DA0000000001030307) 
Dec 05 09:15:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36535 DF PROTO=TCP SPT=55602 DPT=9101 SEQ=1884362987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB073DA0000000001030307) 
Dec 05 09:15:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7093 DF PROTO=TCP SPT=52844 DPT=9100 SEQ=2765428056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB083190000000001030307) 
Dec 05 09:15:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7094 DF PROTO=TCP SPT=52844 DPT=9100 SEQ=2765428056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB08B190000000001030307) 
Dec 05 09:15:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7095 DF PROTO=TCP SPT=52844 DPT=9100 SEQ=2765428056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB09AD90000000001030307) 
Dec 05 09:15:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4596 DF PROTO=TCP SPT=52064 DPT=9105 SEQ=145820908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB0A2F00000000001030307) 
Dec 05 09:15:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4598 DF PROTO=TCP SPT=52064 DPT=9105 SEQ=145820908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB0AED90000000001030307) 
Dec 05 09:15:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27222 DF PROTO=TCP SPT=33420 DPT=9102 SEQ=2250808907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB0BBD90000000001030307) 
Dec 05 09:15:26 np0005546420.localdomain sshd[111410]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:15:26 np0005546420.localdomain sshd[111410]: Accepted publickey for zuul from 192.168.122.30 port 45254 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:15:26 np0005546420.localdomain systemd-logind[762]: New session 37 of user zuul.
Dec 05 09:15:26 np0005546420.localdomain systemd[1]: Started Session 37 of User zuul.
Dec 05 09:15:26 np0005546420.localdomain sshd[111410]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:15:27 np0005546420.localdomain sudo[111489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-otxsmdgdawnzzkmyyhtbiftovqurgxjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926126.9431677-564-232479721322950/AnsiballZ_file.py
Dec 05 09:15:27 np0005546420.localdomain sudo[111489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:27 np0005546420.localdomain python3.9[111491]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:27 np0005546420.localdomain sudo[111489]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:27 np0005546420.localdomain sudo[111581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ijqgyoarefpqxxlbwfsbpknjfszzteim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926127.5448067-564-213083824582190/AnsiballZ_file.py
Dec 05 09:15:27 np0005546420.localdomain sudo[111581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:27 np0005546420.localdomain python3.9[111583]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:28 np0005546420.localdomain sudo[111581]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:28 np0005546420.localdomain sudo[111673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkhwchfiljbkzloordrarteadzppibuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926128.1050406-564-15094966580505/AnsiballZ_file.py
Dec 05 09:15:28 np0005546420.localdomain sudo[111673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:28 np0005546420.localdomain python3.9[111675]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:28 np0005546420.localdomain sudo[111673]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:28 np0005546420.localdomain sudo[111765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojykewqiawmcqyyrjgkxaiuvbpqcbrjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926128.6682577-564-241127952857383/AnsiballZ_file.py
Dec 05 09:15:28 np0005546420.localdomain sudo[111765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:29 np0005546420.localdomain python3.9[111767]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:29 np0005546420.localdomain sudo[111765]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:29 np0005546420.localdomain sudo[111857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvrpsizhzmwpcczvfrxsjadkjjrnelfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926129.2487931-564-215069311470043/AnsiballZ_file.py
Dec 05 09:15:29 np0005546420.localdomain sudo[111857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:29 np0005546420.localdomain python3.9[111859]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:29 np0005546420.localdomain sudo[111857]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:30 np0005546420.localdomain sudo[111949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmxfifbjzcaduhwdgumliaiifoyjahjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926129.8492756-564-59899510038288/AnsiballZ_file.py
Dec 05 09:15:30 np0005546420.localdomain sudo[111949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:30 np0005546420.localdomain python3.9[111951]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:30 np0005546420.localdomain sudo[111949]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47705 DF PROTO=TCP SPT=55578 DPT=9102 SEQ=2813637104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB0D0990000000001030307) 
Dec 05 09:15:30 np0005546420.localdomain sudo[112041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjjkevtxyncvozphsiusbkeccocxqavl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926130.4464626-564-33680809576393/AnsiballZ_file.py
Dec 05 09:15:30 np0005546420.localdomain sudo[112041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:30 np0005546420.localdomain python3.9[112043]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:30 np0005546420.localdomain sudo[112041]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:31 np0005546420.localdomain sudo[112133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dyiesrevltfffddatvtzgsnczwakbmsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926131.0169861-564-257620724577790/AnsiballZ_file.py
Dec 05 09:15:31 np0005546420.localdomain sudo[112133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:31 np0005546420.localdomain python3.9[112135]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:31 np0005546420.localdomain sudo[112133]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:31 np0005546420.localdomain sudo[112225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqebqwcjapwytiobgwlwownesnxgdntx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926131.6069043-564-61921886589073/AnsiballZ_file.py
Dec 05 09:15:31 np0005546420.localdomain sudo[112225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:32 np0005546420.localdomain python3.9[112227]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:32 np0005546420.localdomain sudo[112225]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42553 DF PROTO=TCP SPT=54536 DPT=9882 SEQ=1771152687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB0D7D90000000001030307) 
Dec 05 09:15:32 np0005546420.localdomain sudo[112317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzbgevvaqvphkfgnemhitxfrgwmyodhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926132.177067-564-159737303516780/AnsiballZ_file.py
Dec 05 09:15:32 np0005546420.localdomain sudo[112317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:32 np0005546420.localdomain python3.9[112319]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:32 np0005546420.localdomain sudo[112317]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:33 np0005546420.localdomain sudo[112409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhnatipjzshbonpxbzvkgtwzsuujzsao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926132.7561016-564-238446076869918/AnsiballZ_file.py
Dec 05 09:15:33 np0005546420.localdomain sudo[112409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:33 np0005546420.localdomain python3.9[112411]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:33 np0005546420.localdomain sudo[112409]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:33 np0005546420.localdomain sudo[112501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hxajsykdlegusugttikgupxsqwozydre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926133.3152747-564-147413600608485/AnsiballZ_file.py
Dec 05 09:15:33 np0005546420.localdomain sudo[112501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:33 np0005546420.localdomain python3.9[112503]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:33 np0005546420.localdomain sudo[112501]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:34 np0005546420.localdomain sudo[112593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ghyxlfcapepxhqvmdsmcxbfpzgusverf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926133.8900971-564-126295500655977/AnsiballZ_file.py
Dec 05 09:15:34 np0005546420.localdomain sudo[112593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:34 np0005546420.localdomain python3.9[112595]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:34 np0005546420.localdomain sudo[112593]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4600 DF PROTO=TCP SPT=52064 DPT=9105 SEQ=145820908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB0DFD90000000001030307) 
Dec 05 09:15:34 np0005546420.localdomain sudo[112685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jxkhvwnohckmcddlpoyyrpcnhxtphlzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926134.4363768-564-91079189755182/AnsiballZ_file.py
Dec 05 09:15:34 np0005546420.localdomain sudo[112685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:34 np0005546420.localdomain python3.9[112687]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:34 np0005546420.localdomain sudo[112685]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:35 np0005546420.localdomain sudo[112777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sowihbydqctimtllvdkpzosblkrbmtsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926135.0299761-564-135450328527746/AnsiballZ_file.py
Dec 05 09:15:35 np0005546420.localdomain sudo[112777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:35 np0005546420.localdomain python3.9[112779]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:35 np0005546420.localdomain sudo[112777]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:35 np0005546420.localdomain sudo[112869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jdnjvtlmoiprcgaolccrvsqdjaidvplq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926135.5639448-564-81127523831404/AnsiballZ_file.py
Dec 05 09:15:35 np0005546420.localdomain sudo[112869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:36 np0005546420.localdomain python3.9[112871]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:36 np0005546420.localdomain sudo[112869]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:36 np0005546420.localdomain sudo[112961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sunctjbleeayatxuzvlgjazuvlrksxnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926136.133195-564-160933163205313/AnsiballZ_file.py
Dec 05 09:15:36 np0005546420.localdomain sudo[112961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:36 np0005546420.localdomain python3.9[112963]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:36 np0005546420.localdomain sudo[112961]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9303 DF PROTO=TCP SPT=46116 DPT=9101 SEQ=1100914609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB0E9D90000000001030307) 
Dec 05 09:15:37 np0005546420.localdomain sudo[113053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzhoehznkjuwdtzvrxxkwgheodsqkwgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926136.7811239-564-91795309133352/AnsiballZ_file.py
Dec 05 09:15:37 np0005546420.localdomain sudo[113053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:37 np0005546420.localdomain python3.9[113055]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:37 np0005546420.localdomain sudo[113053]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:37 np0005546420.localdomain sudo[113145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ggnjpobamvmhpbfsalkeptnpklryppqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926137.340974-564-200782094360983/AnsiballZ_file.py
Dec 05 09:15:37 np0005546420.localdomain sudo[113145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:37 np0005546420.localdomain python3.9[113147]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:37 np0005546420.localdomain sudo[113145]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:38 np0005546420.localdomain sudo[113237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpmuglbcabfzdbsbqcstrbctnstllprg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926137.9224012-564-219131952930948/AnsiballZ_file.py
Dec 05 09:15:38 np0005546420.localdomain sudo[113237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:38 np0005546420.localdomain python3.9[113239]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:38 np0005546420.localdomain sudo[113237]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:38 np0005546420.localdomain sudo[113329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ekizbmbowxsygkhwgiuddcjkblsiworm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926138.5239043-564-25949049606518/AnsiballZ_file.py
Dec 05 09:15:38 np0005546420.localdomain sudo[113329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:38 np0005546420.localdomain python3.9[113331]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:38 np0005546420.localdomain sudo[113329]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:40 np0005546420.localdomain sudo[113421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cudrvlxinvwtydmrzsofahqmxfrhrtqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926140.2832234-1014-114602459132078/AnsiballZ_file.py
Dec 05 09:15:40 np0005546420.localdomain sudo[113421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:40 np0005546420.localdomain python3.9[113423]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=326 DF PROTO=TCP SPT=48114 DPT=9100 SEQ=1287399498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB0F8590000000001030307) 
Dec 05 09:15:40 np0005546420.localdomain sudo[113421]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:41 np0005546420.localdomain sudo[113513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tcivnccnuscdazxvtvrjlzjeauliqpxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926140.8309906-1014-166920669700218/AnsiballZ_file.py
Dec 05 09:15:41 np0005546420.localdomain sudo[113513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:41 np0005546420.localdomain python3.9[113515]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:41 np0005546420.localdomain sudo[113513]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:41 np0005546420.localdomain sudo[113605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hkieuniywwthwcsdaqevnnfmalepgkkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926141.412457-1014-194926035769929/AnsiballZ_file.py
Dec 05 09:15:41 np0005546420.localdomain sudo[113605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:41 np0005546420.localdomain python3.9[113607]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:41 np0005546420.localdomain sudo[113605]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:42 np0005546420.localdomain sudo[113697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvdpurvshlnxcrslhfoumzzyunmqvtka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926141.9797792-1014-201719791025037/AnsiballZ_file.py
Dec 05 09:15:42 np0005546420.localdomain sudo[113697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:42 np0005546420.localdomain python3.9[113699]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:42 np0005546420.localdomain sudo[113697]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:42 np0005546420.localdomain sudo[113789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xpvhipnoolemfixlodlhsflqkocrtazy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926142.506439-1014-215781105764812/AnsiballZ_file.py
Dec 05 09:15:42 np0005546420.localdomain sudo[113789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=327 DF PROTO=TCP SPT=48114 DPT=9100 SEQ=1287399498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB100590000000001030307) 
Dec 05 09:15:42 np0005546420.localdomain python3.9[113791]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:42 np0005546420.localdomain sudo[113789]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:43 np0005546420.localdomain sudo[113881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fwyvavrwzsdvdntzdwspmdjtapmwsnih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926143.1049833-1014-190596342889720/AnsiballZ_file.py
Dec 05 09:15:43 np0005546420.localdomain sudo[113881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:43 np0005546420.localdomain python3.9[113883]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:43 np0005546420.localdomain sudo[113881]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:43 np0005546420.localdomain sudo[113973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dljyqmrutyjcilyjvryjxnajvsutrlqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926143.6494257-1014-260939077292830/AnsiballZ_file.py
Dec 05 09:15:43 np0005546420.localdomain sudo[113973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:44 np0005546420.localdomain python3.9[113975]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:44 np0005546420.localdomain sudo[113973]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:44 np0005546420.localdomain sudo[114065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdksankgioftfjzdrortlasxisevdsiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926144.2314556-1014-73780379270630/AnsiballZ_file.py
Dec 05 09:15:44 np0005546420.localdomain sudo[114065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:44 np0005546420.localdomain python3.9[114067]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:44 np0005546420.localdomain sudo[114065]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:45 np0005546420.localdomain sudo[114157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptmberycqnbxzfhmhfjwzqcjneojthlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926144.8192637-1014-90016926261034/AnsiballZ_file.py
Dec 05 09:15:45 np0005546420.localdomain sudo[114157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:45 np0005546420.localdomain python3.9[114159]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:45 np0005546420.localdomain sudo[114157]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:45 np0005546420.localdomain sudo[114249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arsswxqwldgwqdnsvhvnuvluhxshunxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926145.457387-1014-199700085649030/AnsiballZ_file.py
Dec 05 09:15:45 np0005546420.localdomain sudo[114249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:45 np0005546420.localdomain python3.9[114251]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:45 np0005546420.localdomain sudo[114249]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:46 np0005546420.localdomain sudo[114341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eiawhmmshnsfcalkfgaqkwnibdjvijfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926146.1032228-1014-63180189051499/AnsiballZ_file.py
Dec 05 09:15:46 np0005546420.localdomain sudo[114341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:46 np0005546420.localdomain python3.9[114343]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:46 np0005546420.localdomain sudo[114341]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=328 DF PROTO=TCP SPT=48114 DPT=9100 SEQ=1287399498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB110190000000001030307) 
Dec 05 09:15:46 np0005546420.localdomain sudo[114433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tuxfrlmdtcfetjpgapxzmegzuckicyby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926146.6546574-1014-185037870665948/AnsiballZ_file.py
Dec 05 09:15:46 np0005546420.localdomain sudo[114433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:47 np0005546420.localdomain python3.9[114435]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:47 np0005546420.localdomain sudo[114433]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:47 np0005546420.localdomain sudo[114525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckmemarfolobwowtwzlzhykbgilgwoes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926147.1981075-1014-261120085795448/AnsiballZ_file.py
Dec 05 09:15:47 np0005546420.localdomain sudo[114525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:47 np0005546420.localdomain python3.9[114527]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:47 np0005546420.localdomain sudo[114525]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:47 np0005546420.localdomain sudo[114617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-excxwkxqhemusulqhbkgdwtxchfrvzdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926147.7127957-1014-209009939667145/AnsiballZ_file.py
Dec 05 09:15:47 np0005546420.localdomain sudo[114617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:48 np0005546420.localdomain python3.9[114619]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:48 np0005546420.localdomain sudo[114617]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:48 np0005546420.localdomain sudo[114709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fgjtohfsnndlbczorrhhyrqecsbhytlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926148.3254697-1014-34467799286334/AnsiballZ_file.py
Dec 05 09:15:48 np0005546420.localdomain sudo[114709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:48 np0005546420.localdomain python3.9[114711]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42554 DF PROTO=TCP SPT=54536 DPT=9882 SEQ=1771152687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB117D90000000001030307) 
Dec 05 09:15:48 np0005546420.localdomain sudo[114709]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:49 np0005546420.localdomain sudo[114801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdhyquajmxirrhpuwghjwquftslgylks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926148.9307408-1014-257212094618728/AnsiballZ_file.py
Dec 05 09:15:49 np0005546420.localdomain sudo[114801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:49 np0005546420.localdomain python3.9[114803]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:49 np0005546420.localdomain sudo[114801]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:49 np0005546420.localdomain sudo[114893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-edpautmonxhjdeklvojtwwgrknftixre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926149.4858532-1014-152490164552711/AnsiballZ_file.py
Dec 05 09:15:49 np0005546420.localdomain sudo[114893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:49 np0005546420.localdomain python3.9[114895]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:49 np0005546420.localdomain sudo[114893]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:50 np0005546420.localdomain sudo[114985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zwnbysliivzhmhbadzgamznxsxqlkbrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926150.0605204-1014-139764727817582/AnsiballZ_file.py
Dec 05 09:15:50 np0005546420.localdomain sudo[114985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:50 np0005546420.localdomain python3.9[114987]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:50 np0005546420.localdomain sudo[114985]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:50 np0005546420.localdomain sudo[115077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-unnalzxnxfpqlpnygxecyfpntlsjuizm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926150.653264-1014-109719193068522/AnsiballZ_file.py
Dec 05 09:15:50 np0005546420.localdomain sudo[115077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:51 np0005546420.localdomain python3.9[115079]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:51 np0005546420.localdomain sudo[115077]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:51 np0005546420.localdomain sudo[115169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vwqijwacbwjvqxbcyvlmqdzqgqpynmpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926151.334764-1014-203571265776475/AnsiballZ_file.py
Dec 05 09:15:51 np0005546420.localdomain sudo[115169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:51 np0005546420.localdomain python3.9[115171]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:51 np0005546420.localdomain sudo[115169]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18401 DF PROTO=TCP SPT=55368 DPT=9105 SEQ=2985977507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB124190000000001030307) 
Dec 05 09:15:52 np0005546420.localdomain sudo[115261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fsswfwibmivvsqsxyntpcakmfszxhvih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926151.9514532-1014-64133296260629/AnsiballZ_file.py
Dec 05 09:15:52 np0005546420.localdomain sudo[115261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:52 np0005546420.localdomain python3.9[115263]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:15:52 np0005546420.localdomain sudo[115261]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:53 np0005546420.localdomain sudo[115353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfmrneejvsnsqvcqijjumsegbngrrmnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926152.7424538-1461-10326793289746/AnsiballZ_command.py
Dec 05 09:15:53 np0005546420.localdomain sudo[115353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:53 np0005546420.localdomain python3.9[115355]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:15:53 np0005546420.localdomain sudo[115353]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:54 np0005546420.localdomain python3.9[115447]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:15:54 np0005546420.localdomain sudo[115537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uztfuwvgiufcajiribkmsizmptjaxfra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926154.4391189-1515-142176181524400/AnsiballZ_systemd_service.py
Dec 05 09:15:54 np0005546420.localdomain sudo[115537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:54 np0005546420.localdomain sudo[115540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:15:54 np0005546420.localdomain sudo[115540]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:15:54 np0005546420.localdomain sudo[115540]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:54 np0005546420.localdomain sudo[115555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:15:54 np0005546420.localdomain sudo[115555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:15:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=329 DF PROTO=TCP SPT=48114 DPT=9100 SEQ=1287399498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB12FD90000000001030307) 
Dec 05 09:15:54 np0005546420.localdomain python3.9[115539]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:15:54 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:15:55 np0005546420.localdomain systemd-rc-local-generator[115596]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:15:55 np0005546420.localdomain systemd-sysv-generator[115599]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:15:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:15:55 np0005546420.localdomain sudo[115537]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:55 np0005546420.localdomain sudo[115555]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:55 np0005546420.localdomain sudo[115726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rialgzsiueigwmkbbzvfiuxyxxeaaliu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926155.5485766-1539-45328801478722/AnsiballZ_command.py
Dec 05 09:15:55 np0005546420.localdomain sudo[115726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:55 np0005546420.localdomain python3.9[115728]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:15:56 np0005546420.localdomain sudo[115726]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:56 np0005546420.localdomain sudo[115776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:15:56 np0005546420.localdomain sudo[115776]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:15:56 np0005546420.localdomain sudo[115776]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:56 np0005546420.localdomain sudo[115834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-naikswlyxdjlagyqkrqvfnhsmtzdpboo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926156.1387446-1539-109712115882325/AnsiballZ_command.py
Dec 05 09:15:56 np0005546420.localdomain sudo[115834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:56 np0005546420.localdomain python3.9[115836]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:15:56 np0005546420.localdomain sudo[115834]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:57 np0005546420.localdomain sudo[115927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-joqtnwyphksysunxlbxezhqwyhqvhzfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926156.7528117-1539-22776683421378/AnsiballZ_command.py
Dec 05 09:15:57 np0005546420.localdomain sudo[115927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:57 np0005546420.localdomain python3.9[115929]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:15:57 np0005546420.localdomain sudo[115927]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:57 np0005546420.localdomain sudo[116020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpgpkkfcmcqgubxnfhhhujrkmxqnolta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926157.3605585-1539-90046713530823/AnsiballZ_command.py
Dec 05 09:15:57 np0005546420.localdomain sudo[116020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:57 np0005546420.localdomain python3.9[116022]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:15:57 np0005546420.localdomain sudo[116020]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:58 np0005546420.localdomain sudo[116113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivkpjnzwtappcjqibcecbwasetrqhxli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926157.9449737-1539-170834202167746/AnsiballZ_command.py
Dec 05 09:15:58 np0005546420.localdomain sudo[116113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:58 np0005546420.localdomain python3.9[116115]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:15:58 np0005546420.localdomain sudo[116113]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:58 np0005546420.localdomain sudo[116206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjohvjalpzodvcepoicrheundummevuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926158.5659366-1539-228266693000063/AnsiballZ_command.py
Dec 05 09:15:58 np0005546420.localdomain sudo[116206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:59 np0005546420.localdomain python3.9[116208]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:15:59 np0005546420.localdomain sudo[116206]: pam_unix(sudo:session): session closed for user root
Dec 05 09:15:59 np0005546420.localdomain sudo[116299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdvmcpuztqmdwkkliyaujualoxayfhhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926159.1817412-1539-71968307880125/AnsiballZ_command.py
Dec 05 09:15:59 np0005546420.localdomain sudo[116299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:15:59 np0005546420.localdomain python3.9[116301]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:15:59 np0005546420.localdomain sudo[116299]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:00 np0005546420.localdomain sudo[116392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thgjonknltaikaeypsxnztjboapejpcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926160.0032618-1539-226167418894805/AnsiballZ_command.py
Dec 05 09:16:00 np0005546420.localdomain sudo[116392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:00 np0005546420.localdomain python3.9[116394]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:00 np0005546420.localdomain sudo[116392]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43981 DF PROTO=TCP SPT=41002 DPT=9102 SEQ=3511175938 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB145D90000000001030307) 
Dec 05 09:16:00 np0005546420.localdomain sudo[116485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rkbyvelakcrvtvxwgtmbnmhmuqwswyml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926160.553541-1539-20789647415739/AnsiballZ_command.py
Dec 05 09:16:00 np0005546420.localdomain sudo[116485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:00 np0005546420.localdomain python3.9[116487]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:01 np0005546420.localdomain sudo[116485]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:01 np0005546420.localdomain sudo[116578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rceacyddchprdgovpgynebsmqzxxwzzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926161.122778-1539-240864650284915/AnsiballZ_command.py
Dec 05 09:16:01 np0005546420.localdomain sudo[116578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:01 np0005546420.localdomain python3.9[116580]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:01 np0005546420.localdomain sudo[116578]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:02 np0005546420.localdomain sudo[116671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dnujdjxnjfbekjohykimzmlvnszjmrtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926161.7530806-1539-171398558218339/AnsiballZ_command.py
Dec 05 09:16:02 np0005546420.localdomain sudo[116671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:02 np0005546420.localdomain python3.9[116673]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:02 np0005546420.localdomain sudo[116671]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1368 DF PROTO=TCP SPT=33266 DPT=9882 SEQ=297625941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB14DD90000000001030307) 
Dec 05 09:16:02 np0005546420.localdomain sudo[116764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfezynuahvviqwouglvtdrrjgbyofblb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926162.3894176-1539-187306643194303/AnsiballZ_command.py
Dec 05 09:16:02 np0005546420.localdomain sudo[116764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:02 np0005546420.localdomain python3.9[116766]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:02 np0005546420.localdomain sudo[116764]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:03 np0005546420.localdomain sudo[116857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mwfdhxvflsvlzrdxmbrikbuvgizcafcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926163.0180473-1539-262006342308295/AnsiballZ_command.py
Dec 05 09:16:03 np0005546420.localdomain sudo[116857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:03 np0005546420.localdomain python3.9[116859]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:03 np0005546420.localdomain sudo[116857]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:03 np0005546420.localdomain sudo[116950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lplhgtdcikzvjtkbmkqwscemyuyrpbhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926163.605706-1539-17302159799549/AnsiballZ_command.py
Dec 05 09:16:03 np0005546420.localdomain sudo[116950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:04 np0005546420.localdomain python3.9[116952]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:04 np0005546420.localdomain sudo[116950]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18403 DF PROTO=TCP SPT=55368 DPT=9105 SEQ=2985977507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB153D90000000001030307) 
Dec 05 09:16:04 np0005546420.localdomain sudo[117043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vpmlfphjlfbtqsmubmiojclvcxkcjotv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926164.2181268-1539-60687345424420/AnsiballZ_command.py
Dec 05 09:16:04 np0005546420.localdomain sudo[117043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:04 np0005546420.localdomain python3.9[117045]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:04 np0005546420.localdomain sudo[117043]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:05 np0005546420.localdomain sudo[117136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ammvtrygkshfibthggmdhdjkvowkufye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926164.9213607-1539-145697959092627/AnsiballZ_command.py
Dec 05 09:16:05 np0005546420.localdomain sudo[117136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:05 np0005546420.localdomain python3.9[117138]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:05 np0005546420.localdomain sudo[117136]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:05 np0005546420.localdomain sudo[117229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwdwunkqdkzyyjsmodwhavxzygkrvyxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926165.4774048-1539-128228815792847/AnsiballZ_command.py
Dec 05 09:16:05 np0005546420.localdomain sudo[117229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:05 np0005546420.localdomain python3.9[117231]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:05 np0005546420.localdomain sudo[117229]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:06 np0005546420.localdomain sudo[117322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dennorpsnbkaomqehiueyewwhveseuxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926166.0283546-1539-122647573180970/AnsiballZ_command.py
Dec 05 09:16:06 np0005546420.localdomain sudo[117322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:06 np0005546420.localdomain python3.9[117324]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:06 np0005546420.localdomain sudo[117322]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:06 np0005546420.localdomain sudo[117415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gihxyscnrdikqsykoiptrvmogeolpxkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926166.6230347-1539-272757846483449/AnsiballZ_command.py
Dec 05 09:16:06 np0005546420.localdomain sudo[117415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:07 np0005546420.localdomain python3.9[117417]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:07 np0005546420.localdomain sudo[117415]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46889 DF PROTO=TCP SPT=33676 DPT=9101 SEQ=1790843009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB15FDA0000000001030307) 
Dec 05 09:16:07 np0005546420.localdomain sudo[117508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oylrphxfxlnvsfazyyizhnqmfjvzonri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926167.1868353-1539-224755098233703/AnsiballZ_command.py
Dec 05 09:16:07 np0005546420.localdomain sudo[117508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:07 np0005546420.localdomain python3.9[117510]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:07 np0005546420.localdomain sudo[117508]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:08 np0005546420.localdomain sudo[117601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izethkkctnzxuahcrfajnweniywsxkrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926167.7621033-1539-40385166435210/AnsiballZ_command.py
Dec 05 09:16:08 np0005546420.localdomain sudo[117601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:08 np0005546420.localdomain python3.9[117603]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:08 np0005546420.localdomain sudo[117601]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:09 np0005546420.localdomain sshd[111410]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:16:09 np0005546420.localdomain systemd[1]: session-37.scope: Deactivated successfully.
Dec 05 09:16:09 np0005546420.localdomain systemd[1]: session-37.scope: Consumed 31.266s CPU time.
Dec 05 09:16:09 np0005546420.localdomain systemd-logind[762]: Session 37 logged out. Waiting for processes to exit.
Dec 05 09:16:09 np0005546420.localdomain systemd-logind[762]: Removed session 37.
Dec 05 09:16:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30588 DF PROTO=TCP SPT=42892 DPT=9100 SEQ=95545430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB16D990000000001030307) 
Dec 05 09:16:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30589 DF PROTO=TCP SPT=42892 DPT=9100 SEQ=95545430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB175990000000001030307) 
Dec 05 09:16:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30590 DF PROTO=TCP SPT=42892 DPT=9100 SEQ=95545430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB1855A0000000001030307) 
Dec 05 09:16:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35752 DF PROTO=TCP SPT=57990 DPT=9105 SEQ=483342774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB18D510000000001030307) 
Dec 05 09:16:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35754 DF PROTO=TCP SPT=57990 DPT=9105 SEQ=483342774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB199590000000001030307) 
Dec 05 09:16:23 np0005546420.localdomain sshd[117620]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:16:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43983 DF PROTO=TCP SPT=41002 DPT=9102 SEQ=3511175938 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB1A5D90000000001030307) 
Dec 05 09:16:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18852 DF PROTO=TCP SPT=46002 DPT=9102 SEQ=2362769267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB1BAD90000000001030307) 
Dec 05 09:16:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51930 DF PROTO=TCP SPT=46550 DPT=9882 SEQ=2840601551 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB1C3D90000000001030307) 
Dec 05 09:16:34 np0005546420.localdomain sshd[117620]: error: kex_exchange_identification: read: Connection timed out
Dec 05 09:16:34 np0005546420.localdomain sshd[117620]: banner exchange: Connection from 14.103.107.221 port 35294: Connection timed out
Dec 05 09:16:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35756 DF PROTO=TCP SPT=57990 DPT=9105 SEQ=483342774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB1C9D90000000001030307) 
Dec 05 09:16:35 np0005546420.localdomain sshd[117621]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:16:35 np0005546420.localdomain sshd[117621]: Accepted publickey for zuul from 192.168.122.30 port 40278 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:16:35 np0005546420.localdomain systemd-logind[762]: New session 38 of user zuul.
Dec 05 09:16:35 np0005546420.localdomain systemd[1]: Started Session 38 of User zuul.
Dec 05 09:16:35 np0005546420.localdomain sshd[117621]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:16:36 np0005546420.localdomain python3.9[117714]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 05 09:16:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63157 DF PROTO=TCP SPT=50182 DPT=9101 SEQ=3145329406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB1D3D90000000001030307) 
Dec 05 09:16:37 np0005546420.localdomain python3.9[117818]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:16:38 np0005546420.localdomain sudo[117908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iaajvpoeirqqpjaonowriaprkdaoxiqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926198.1759071-94-183790084282273/AnsiballZ_command.py
Dec 05 09:16:38 np0005546420.localdomain sudo[117908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:38 np0005546420.localdomain python3.9[117910]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:16:38 np0005546420.localdomain sudo[117908]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:39 np0005546420.localdomain sudo[118001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pfqpztahspmvgdnslpbamnwcgrfcmfzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926199.292968-130-243149900989655/AnsiballZ_stat.py
Dec 05 09:16:39 np0005546420.localdomain sudo[118001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:39 np0005546420.localdomain python3.9[118003]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:16:39 np0005546420.localdomain sudo[118001]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:40 np0005546420.localdomain sudo[118093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bqlizjiqytvoxhqanrqxuakxuuypdwtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926200.1139817-154-266784098614778/AnsiballZ_file.py
Dec 05 09:16:40 np0005546420.localdomain sudo[118093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:40 np0005546420.localdomain python3.9[118095]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:16:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7195 DF PROTO=TCP SPT=55674 DPT=9100 SEQ=2037528459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB1E2D90000000001030307) 
Dec 05 09:16:40 np0005546420.localdomain sudo[118093]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:41 np0005546420.localdomain sudo[118185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hrypbqujbbfwxvnlbkkzewwpqwfwswnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926200.93446-178-194388562402012/AnsiballZ_stat.py
Dec 05 09:16:41 np0005546420.localdomain sudo[118185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:41 np0005546420.localdomain python3.9[118187]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:16:41 np0005546420.localdomain sudo[118185]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:42 np0005546420.localdomain sudo[118258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mommphbdteujmrxjcmxwwjqypvzwtvlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926200.93446-178-194388562402012/AnsiballZ_copy.py
Dec 05 09:16:42 np0005546420.localdomain sudo[118258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:42 np0005546420.localdomain python3.9[118260]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926200.93446-178-194388562402012/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:16:42 np0005546420.localdomain sudo[118258]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:42 np0005546420.localdomain sudo[118350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uodxrgbgfgkawrtrmucjsanlxbppizon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926202.5281472-223-249580353255152/AnsiballZ_setup.py
Dec 05 09:16:42 np0005546420.localdomain sudo[118350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7196 DF PROTO=TCP SPT=55674 DPT=9100 SEQ=2037528459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB1EAD90000000001030307) 
Dec 05 09:16:43 np0005546420.localdomain python3.9[118352]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:16:43 np0005546420.localdomain sudo[118350]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:43 np0005546420.localdomain sudo[118446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nldrhlrliogvomthcaqjeppcfkojgfns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926203.5640361-247-219468427731924/AnsiballZ_file.py
Dec 05 09:16:43 np0005546420.localdomain sudo[118446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:44 np0005546420.localdomain python3.9[118448]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:16:44 np0005546420.localdomain sudo[118446]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:44 np0005546420.localdomain sudo[118538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjiwzycyyruslhamqxnaltmmjrxcpeng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926204.3258266-274-8619287301663/AnsiballZ_file.py
Dec 05 09:16:44 np0005546420.localdomain sudo[118538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:44 np0005546420.localdomain python3.9[118540]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:16:44 np0005546420.localdomain sudo[118538]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:45 np0005546420.localdomain python3.9[118630]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:16:45 np0005546420.localdomain network[118647]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:16:45 np0005546420.localdomain network[118648]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:16:45 np0005546420.localdomain network[118649]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:16:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
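systemd flags insights-client.service here for using the deprecated cgroup-v1 directive MemoryLimit=; MemoryMax= is the cgroup-v2 replacement. Because the warning is raised while parsing the vendor unit file, the durable fix belongs in the package itself; until then, a drop-in can at least apply the modern directive. A minimal sketch, with a 512M value assumed purely for illustration (the unit's real limit is not shown in this log):

    # Override the effective memory limit via a drop-in. Note: the parse-time
    # warning from the vendor file itself persists until the package is fixed.
    mkdir -p /etc/systemd/system/insights-client.service.d
    cat > /etc/systemd/system/insights-client.service.d/memory.conf <<'EOF'
    [Service]
    MemoryLimit=
    MemoryMax=512M
    EOF
    systemctl daemon-reload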
Dec 05 09:16:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7197 DF PROTO=TCP SPT=55674 DPT=9100 SEQ=2037528459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB1FA990000000001030307) 
Dec 05 09:16:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27604 DF PROTO=TCP SPT=53752 DPT=9105 SEQ=286772376 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB202810000000001030307) 
Dec 05 09:16:48 np0005546420.localdomain python3.9[118847]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
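The lineinfile call above targets /proc/cmdline, which is read-only; tasks like this are commonly run in check mode as a probe, reporting "changed" when the literal line is absent and would have to be added, without ever writing to the file (whether check mode was used is not visible in this log). A direct shell probe with similar intent (the grep form is an illustration, not taken from the log):

    # Succeeds (and prints "present") only if the kernel was booted with
    # cloud-init=disabled on its command line.
    grep -qw 'cloud-init=disabled' /proc/cmdline && echo present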
Dec 05 09:16:49 np0005546420.localdomain python3.9[118937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:16:50 np0005546420.localdomain sudo[119031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aonjrpiyozkrschsbqndnisyvtioumhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926210.2462215-376-106926413179388/AnsiballZ_command.py
Dec 05 09:16:50 np0005546420.localdomain sudo[119031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:16:50 np0005546420.localdomain python3.9[119033]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream
                                                            set -euxo pipefail
                                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                                            python3 -m venv ./venv
                                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main
                                                            # This is required for FIPS enabled until trunk.rdoproject.org
                                                            # is not being served from a centos7 host, tracked by
                                                            # https://issues.redhat.com/browse/RHOSZUUL-1517
                                                            dnf -y install crypto-policies
                                                            update-crypto-policies --set FIPS:NO-ENFORCE-EMS
                                                            ./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream
                                                            
                                                            # Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible
                                                            # with rhel 9.2 openssh
                                                            dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save
                                                            # FIXME: perform dnf upgrade for other packages in EDPM ansible
                                                            # here we only ensuring that decontainerized libvirt can start
                                                            dnf -y upgrade openstack-selinux
                                                            rm -f /run/virtlogd.pid
                                                            
                                                            rm -rf repo-setup-main
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
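Two of the steps in the repo-setup script above are easy to verify after the fact. A short sketch using standard crypto-policies and dnf-plugins-core commands (the expected outputs are inferred from what the script sets, not taken from this log):

    # Confirm the relaxed FIPS subpolicy took effect.
    update-crypto-policies --show          # expect: FIPS:NO-ENFORCE-EMS
    # Confirm the ceph-common exclude was persisted for the storage repo.
    dnf config-manager --dump centos9-storage | grep exclude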
Dec 05 09:16:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27606 DF PROTO=TCP SPT=53752 DPT=9105 SEQ=286772376 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB20E9A0000000001030307) 
Dec 05 09:16:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18854 DF PROTO=TCP SPT=46002 DPT=9102 SEQ=2362769267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB21BD90000000001030307) 
Dec 05 09:16:56 np0005546420.localdomain sudo[119057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:16:56 np0005546420.localdomain sudo[119057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:16:56 np0005546420.localdomain sudo[119057]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:56 np0005546420.localdomain sudo[119073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:16:56 np0005546420.localdomain sudo[119073]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:16:57 np0005546420.localdomain systemd[1]: tmp-crun.fD5EXH.mount: Deactivated successfully.
Dec 05 09:16:57 np0005546420.localdomain podman[119164]: 2025-12-05 09:16:57.351183037 +0000 UTC m=+0.121322889 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4)
Dec 05 09:16:57 np0005546420.localdomain podman[119164]: 2025-12-05 09:16:57.455355026 +0000 UTC m=+0.225494848 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64)
Dec 05 09:16:57 np0005546420.localdomain sudo[119073]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:57 np0005546420.localdomain sudo[119230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:16:57 np0005546420.localdomain sudo[119230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:16:57 np0005546420.localdomain sudo[119230]: pam_unix(sudo:session): session closed for user root
Dec 05 09:16:57 np0005546420.localdomain sudo[119245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:16:57 np0005546420.localdomain sudo[119245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:16:58 np0005546420.localdomain sudo[119245]: pam_unix(sudo:session): session closed for user root
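The ceph-admin activity above follows cephadm's remote-execution pattern: the orchestrator stages a checksum-named copy of the cephadm binary under /var/lib/ceph/<fsid>/ and invokes it over sudo. The two calls logged above, generalized (the <fsid> and <sha256> placeholders stand in for the literal values in the log; ls and gather-facts are standard cephadm subcommands):

    # List locally deployed daemons for this cluster.
    sudo python3 /var/lib/ceph/<fsid>/cephadm.<sha256> \
        --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
    # Collect host facts (CPU, memory, disks, network) for the orchestrator.
    sudo python3 /var/lib/ceph/<fsid>/cephadm.<sha256> --timeout 895 gather-facts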
Dec 05 09:16:59 np0005546420.localdomain sudo[119292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:16:59 np0005546420.localdomain sudo[119292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:16:59 np0005546420.localdomain sudo[119292]: pam_unix(sudo:session): session closed for user root
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 05 09:17:00 np0005546420.localdomain sshd[46057]: Received signal 15; terminating.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: sshd.service: Consumed 5.351s CPU time.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 05 09:17:00 np0005546420.localdomain sshd[119319]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:17:00 np0005546420.localdomain sshd[119319]: Server listening on 0.0.0.0 port 22.
Dec 05 09:17:00 np0005546420.localdomain sshd[119319]: Server listening on :: port 22.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:17:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=691 DF PROTO=TCP SPT=56522 DPT=9102 SEQ=3173911995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB2301A0000000001030307) 
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: run-r91b99c503eae48c18960c8f4f451533a.service: Deactivated successfully.
Dec 05 09:17:00 np0005546420.localdomain systemd[1]: run-rb1625cd6d2c848aa938a2b660d6b5ff9.service: Deactivated successfully.
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 05 09:17:01 np0005546420.localdomain sshd[119319]: Received signal 15; terminating.
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 05 09:17:01 np0005546420.localdomain sshd[119494]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:17:01 np0005546420.localdomain sshd[119494]: Server listening on 0.0.0.0 port 22.
Dec 05 09:17:01 np0005546420.localdomain sshd[119494]: Server listening on :: port 22.
Dec 05 09:17:01 np0005546420.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 05 09:17:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29768 DF PROTO=TCP SPT=59310 DPT=9882 SEQ=3264932573 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB237DA0000000001030307) 
Dec 05 09:17:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27608 DF PROTO=TCP SPT=53752 DPT=9105 SEQ=286772376 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB23DD90000000001030307) 
Dec 05 09:17:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11065 DF PROTO=TCP SPT=51894 DPT=9101 SEQ=2867179689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB249D90000000001030307) 
Dec 05 09:17:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22687 DF PROTO=TCP SPT=34366 DPT=9100 SEQ=3222400189 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB257D90000000001030307) 
Dec 05 09:17:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22688 DF PROTO=TCP SPT=34366 DPT=9100 SEQ=3222400189 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB25FD90000000001030307) 
Dec 05 09:17:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22689 DF PROTO=TCP SPT=34366 DPT=9100 SEQ=3222400189 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB26F990000000001030307) 
Dec 05 09:17:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3903 DF PROTO=TCP SPT=56712 DPT=9105 SEQ=1052221562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB277B10000000001030307) 
Dec 05 09:17:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3905 DF PROTO=TCP SPT=56712 DPT=9105 SEQ=1052221562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB283990000000001030307) 
Dec 05 09:17:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22690 DF PROTO=TCP SPT=34366 DPT=9100 SEQ=3222400189 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB28FD90000000001030307) 
Dec 05 09:17:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20327 DF PROTO=TCP SPT=37966 DPT=9102 SEQ=2252448561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB2A55A0000000001030307) 
Dec 05 09:17:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33451 DF PROTO=TCP SPT=33468 DPT=9882 SEQ=481197343 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB2ADDA0000000001030307) 
Dec 05 09:17:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3907 DF PROTO=TCP SPT=56712 DPT=9105 SEQ=1052221562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB2B3D90000000001030307) 
Dec 05 09:17:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18693 DF PROTO=TCP SPT=38114 DPT=9101 SEQ=954539262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB2BDD90000000001030307) 
Dec 05 09:17:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55440 DF PROTO=TCP SPT=44630 DPT=9100 SEQ=2806174932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB2CD190000000001030307) 
Dec 05 09:17:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55441 DF PROTO=TCP SPT=44630 DPT=9100 SEQ=2806174932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB2D51A0000000001030307) 
Dec 05 09:17:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55442 DF PROTO=TCP SPT=44630 DPT=9100 SEQ=2806174932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB2E4DA0000000001030307) 
Dec 05 09:17:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1649 DF PROTO=TCP SPT=57890 DPT=9105 SEQ=3833072845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB2ECE10000000001030307) 
Dec 05 09:17:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1651 DF PROTO=TCP SPT=57890 DPT=9105 SEQ=3833072845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB2F8DA0000000001030307) 
Dec 05 09:17:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20329 DF PROTO=TCP SPT=37966 DPT=9102 SEQ=2252448561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB305D90000000001030307) 
Dec 05 09:17:59 np0005546420.localdomain sudo[119823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:17:59 np0005546420.localdomain sudo[119823]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:17:59 np0005546420.localdomain sudo[119823]: pam_unix(sudo:session): session closed for user root
Dec 05 09:17:59 np0005546420.localdomain sudo[119838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:17:59 np0005546420.localdomain sudo[119838]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:17:59 np0005546420.localdomain sudo[119838]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39197 DF PROTO=TCP SPT=52266 DPT=9102 SEQ=546553762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB31A9A0000000001030307) 
Dec 05 09:18:00 np0005546420.localdomain sudo[119884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:18:00 np0005546420.localdomain sudo[119884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:18:00 np0005546420.localdomain sudo[119884]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38536 DF PROTO=TCP SPT=48134 DPT=9882 SEQ=3193187614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB321D90000000001030307) 
Dec 05 09:18:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1653 DF PROTO=TCP SPT=57890 DPT=9105 SEQ=3833072845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB329D90000000001030307) 
Dec 05 09:18:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18376 DF PROTO=TCP SPT=54758 DPT=9101 SEQ=1921344325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB333D90000000001030307) 
Dec 05 09:18:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60702 DF PROTO=TCP SPT=57720 DPT=9100 SEQ=3133067923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB3425A0000000001030307) 
Dec 05 09:18:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60703 DF PROTO=TCP SPT=57720 DPT=9100 SEQ=3133067923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB34A590000000001030307) 
Dec 05 09:18:14 np0005546420.localdomain kernel: SELinux:  Converting 2742 SID table entries...
Dec 05 09:18:14 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:18:14 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 09:18:14 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:18:14 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:18:14 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:18:14 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:18:14 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:18:15 np0005546420.localdomain sudo[119031]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60704 DF PROTO=TCP SPT=57720 DPT=9100 SEQ=3133067923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB35A190000000001030307) 
Dec 05 09:18:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38537 DF PROTO=TCP SPT=48134 DPT=9882 SEQ=3193187614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB361D90000000001030307) 
Dec 05 09:18:19 np0005546420.localdomain sudo[120128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbpiwttahqovgwrprucoyvytyptcoopq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926299.3025749-403-107804772221647/AnsiballZ_file.py
Dec 05 09:18:19 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=17 res=1
Dec 05 09:18:19 np0005546420.localdomain sudo[120128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:19 np0005546420.localdomain python3.9[120130]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:18:19 np0005546420.localdomain sudo[120128]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:20 np0005546420.localdomain sudo[120220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovfdopthdaqidmvjfxnkdraawvvoktqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926299.993269-427-221849047281150/AnsiballZ_stat.py
Dec 05 09:18:20 np0005546420.localdomain sudo[120220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:20 np0005546420.localdomain python3.9[120222]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:18:20 np0005546420.localdomain sudo[120220]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:20 np0005546420.localdomain sudo[120293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oxffmmqumhjwhvjrccebbkdngaynxlij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926299.993269-427-221849047281150/AnsiballZ_copy.py
Dec 05 09:18:20 np0005546420.localdomain sudo[120293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:20 np0005546420.localdomain python3.9[120295]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926299.993269-427-221849047281150/.source.fact _original_basename=.yvcjd5tj follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:18:21 np0005546420.localdomain sudo[120293]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:21 np0005546420.localdomain python3.9[120385]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:18:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19367 DF PROTO=TCP SPT=37374 DPT=9105 SEQ=2482796040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB36E1A0000000001030307) 
Dec 05 09:18:22 np0005546420.localdomain sudo[120481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vdgzspcvbmsuttpwilsabrbncgicmhrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926302.4348607-502-104707184672053/AnsiballZ_setup.py
Dec 05 09:18:22 np0005546420.localdomain sudo[120481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:23 np0005546420.localdomain python3.9[120483]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:18:23 np0005546420.localdomain sudo[120481]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:23 np0005546420.localdomain sudo[120535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-skihpyowmngmacrlurdrvvumzcuftfbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926302.4348607-502-104707184672053/AnsiballZ_dnf.py
Dec 05 09:18:23 np0005546420.localdomain sudo[120535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:24 np0005546420.localdomain python3.9[120537]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:18:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60705 DF PROTO=TCP SPT=57720 DPT=9100 SEQ=3133067923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB379DA0000000001030307) 
Dec 05 09:18:27 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:18:27 np0005546420.localdomain systemd-rc-local-generator[120566]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:18:27 np0005546420.localdomain systemd-sysv-generator[120570]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:18:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:18:27 np0005546420.localdomain systemd[1]: Starting dnf makecache...
Dec 05 09:18:27 np0005546420.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 09:18:28 np0005546420.localdomain dnf[120585]: Updating Subscription Management repositories.
Dec 05 09:18:28 np0005546420.localdomain sudo[120535]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:29 np0005546420.localdomain sudo[120675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bnfstwbeuuzmoxcfhkzsndieernyxpze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926309.1593986-538-123630627256250/AnsiballZ_command.py
Dec 05 09:18:29 np0005546420.localdomain sudo[120675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:29 np0005546420.localdomain python3.9[120677]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:18:29 np0005546420.localdomain dnf[120585]: Metadata cache refreshed recently.
Dec 05 09:18:30 np0005546420.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 05 09:18:30 np0005546420.localdomain systemd[1]: Finished dnf makecache.
Dec 05 09:18:30 np0005546420.localdomain systemd[1]: dnf-makecache.service: Consumed 2.050s CPU time.
Dec 05 09:18:30 np0005546420.localdomain sudo[120675]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63216 DF PROTO=TCP SPT=58472 DPT=9102 SEQ=727904864 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB38F9F0000000001030307) 
Dec 05 09:18:31 np0005546420.localdomain sudo[120915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpywbeamzulgdjhhynpvffowrhrphgvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926310.6931415-562-205865059003430/AnsiballZ_selinux.py
Dec 05 09:18:31 np0005546420.localdomain sudo[120915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:31 np0005546420.localdomain python3.9[120917]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 05 09:18:31 np0005546420.localdomain sudo[120915]: pam_unix(sudo:session): session closed for user root
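The ansible.posix.selinux task above enforces the targeted policy both at runtime and persistently in /etc/selinux/config. A shell equivalent for reference (policy and state mirror the logged parameters; the verification commands are illustrative additions):

    setenforce 1                                    # runtime switch to Enforcing
    getenforce                                      # expect: Enforcing
    # The module also persists the setting; verify the config file:
    grep -E '^SELINUX(TYPE)?=' /etc/selinux/config  # expect: enforcing, targeted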
Dec 05 09:18:32 np0005546420.localdomain sudo[121007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zzgtgydbjybjlgeesywnrqksuqsvzodc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926312.0609448-595-46308058482755/AnsiballZ_command.py
Dec 05 09:18:32 np0005546420.localdomain sudo[121007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:32 np0005546420.localdomain python3.9[121009]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 05 09:18:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18009 DF PROTO=TCP SPT=42048 DPT=9882 SEQ=1915749352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB397D90000000001030307) 
Dec 05 09:18:33 np0005546420.localdomain sudo[121007]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19369 DF PROTO=TCP SPT=37374 DPT=9105 SEQ=2482796040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB39DDA0000000001030307) 
Dec 05 09:18:34 np0005546420.localdomain sudo[121100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awanqhwhaaskokkmeafnvaxgxqgwjnsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926314.1385772-619-67494422566972/AnsiballZ_file.py
Dec 05 09:18:34 np0005546420.localdomain sudo[121100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:34 np0005546420.localdomain python3.9[121102]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:18:34 np0005546420.localdomain sudo[121100]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:35 np0005546420.localdomain sudo[121192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udvmduizddqhlnmdpmyhfwcetwkptqbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926314.7998364-643-249370490891084/AnsiballZ_mount.py
Dec 05 09:18:35 np0005546420.localdomain sudo[121192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:35 np0005546420.localdomain python3.9[121194]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 05 09:18:35 np0005546420.localdomain sudo[121192]: pam_unix(sudo:session): session closed for user root
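The three tasks above only create /swap (1 GiB, mode 0600, root:root) and register it in fstab; ansible.posix.mount with state=present edits fstab without activating the entry. A shell equivalent of the logged steps, with the activation commands that would normally follow marked as assumptions (they do not appear in this log):

    dd if=/dev/zero of=/swap bs=1M count=1024
    chmod 600 /swap
    echo '/swap none swap sw 0 0' >> /etc/fstab
    # Not shown in the log -- would be required before the swap is usable:
    # mkswap /swap && swapon /swap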
Dec 05 09:18:37 np0005546420.localdomain sudo[121284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gakuplshnowuhsmoxalythjbyvrlnonb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926316.759537-727-28102781118494/AnsiballZ_file.py
Dec 05 09:18:37 np0005546420.localdomain sudo[121284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:37 np0005546420.localdomain python3.9[121286]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:18:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65221 DF PROTO=TCP SPT=38042 DPT=9101 SEQ=257321984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB3A9D90000000001030307) 
Dec 05 09:18:37 np0005546420.localdomain sudo[121284]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:37 np0005546420.localdomain sudo[121376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhqublmoxidpurgcbpwhqumbgjfjnjrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926317.4143496-751-219589136736025/AnsiballZ_stat.py
Dec 05 09:18:37 np0005546420.localdomain sudo[121376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:37 np0005546420.localdomain python3.9[121378]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:18:37 np0005546420.localdomain sudo[121376]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:38 np0005546420.localdomain sudo[121449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzcxzwciayrsxnwwqvjksdzdvphqqowe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926317.4143496-751-219589136736025/AnsiballZ_copy.py
Dec 05 09:18:38 np0005546420.localdomain sudo[121449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:38 np0005546420.localdomain python3.9[121451]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926317.4143496-751-219589136736025/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:18:38 np0005546420.localdomain sudo[121449]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:39 np0005546420.localdomain sudo[121541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oqkbgxhijiiwmakibgtklekxxrddzuti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926319.1700246-823-101738487442499/AnsiballZ_stat.py
Dec 05 09:18:39 np0005546420.localdomain sudo[121541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:39 np0005546420.localdomain python3.9[121543]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:18:39 np0005546420.localdomain sudo[121541]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22032 DF PROTO=TCP SPT=35748 DPT=9100 SEQ=1075290951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB3B7990000000001030307) 
Dec 05 09:18:40 np0005546420.localdomain sudo[121635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aonajfrxaqnimqatnjeifmnpzvtpxzcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926320.4051504-862-141049300771532/AnsiballZ_getent.py
Dec 05 09:18:40 np0005546420.localdomain sudo[121635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:41 np0005546420.localdomain python3.9[121637]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 05 09:18:41 np0005546420.localdomain sudo[121635]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:41 np0005546420.localdomain sudo[121728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vdhrdcyftcyqdapcigppelwoumcxppex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926321.4381256-892-144264834947932/AnsiballZ_getent.py
Dec 05 09:18:41 np0005546420.localdomain sudo[121728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:41 np0005546420.localdomain python3.9[121730]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 05 09:18:41 np0005546420.localdomain sudo[121728]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:42 np0005546420.localdomain sudo[121821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aeijtmoqmprgpzvkmuyyvszocszndkse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926322.0637872-916-103917227729244/AnsiballZ_group.py
Dec 05 09:18:42 np0005546420.localdomain sudo[121821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:42 np0005546420.localdomain python3.9[121823]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 09:18:42 np0005546420.localdomain groupmod[121824]: group changed in /etc/group (group hugetlbfs/985, new gid: 42477)
Dec 05 09:18:42 np0005546420.localdomain groupmod[121824]: group changed in /etc/passwd (group hugetlbfs/985, new gid: 42477)
Dec 05 09:18:42 np0005546420.localdomain sudo[121821]: pam_unix(sudo:session): session closed for user root
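The group task above remaps hugetlbfs from its packaged gid (985) to 42477, and groupmod records the change in both /etc/group and the primary-gid column of /etc/passwd, as the two groupmod lines show. The direct shell equivalent (values taken from the log):

    groupmod -g 42477 hugetlbfs
    # Files owned by the old gid keep their numeric group; a sweep like the
    # following would be an extra step, and is not performed here:
    # find / -xdev -group 985 -exec chgrp hugetlbfs {} +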
Dec 05 09:18:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22033 DF PROTO=TCP SPT=35748 DPT=9100 SEQ=1075290951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB3BF9A0000000001030307) 
Dec 05 09:18:43 np0005546420.localdomain sudo[121919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xnikqsjdtjmzmkhbybwfhkyryjygsnvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926322.9332595-943-194763816490455/AnsiballZ_file.py
Dec 05 09:18:43 np0005546420.localdomain sudo[121919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:43 np0005546420.localdomain python3.9[121921]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 05 09:18:43 np0005546420.localdomain sudo[121919]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:44 np0005546420.localdomain sudo[122011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dibwbrxpqarmpejelzcslccieojccgjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926323.9260435-976-157918871701664/AnsiballZ_dnf.py
Dec 05 09:18:44 np0005546420.localdomain sudo[122011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:44 np0005546420.localdomain python3.9[122013]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:18:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22034 DF PROTO=TCP SPT=35748 DPT=9100 SEQ=1075290951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB3CF5A0000000001030307) 
Dec 05 09:18:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23519 DF PROTO=TCP SPT=51878 DPT=9105 SEQ=1482288821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB3D7410000000001030307) 
Dec 05 09:18:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23521 DF PROTO=TCP SPT=51878 DPT=9105 SEQ=1482288821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB3E3590000000001030307) 
Dec 05 09:18:54 np0005546420.localdomain sudo[122011]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63218 DF PROTO=TCP SPT=58472 DPT=9102 SEQ=727904864 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB3EFD90000000001030307) 
Dec 05 09:18:55 np0005546420.localdomain sudo[122106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fisuqzawrtsmlddvzlkxhlionmwbnplf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926335.0354636-1000-46437794012021/AnsiballZ_file.py
Dec 05 09:18:55 np0005546420.localdomain sudo[122106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:55 np0005546420.localdomain python3.9[122108]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:18:55 np0005546420.localdomain sudo[122106]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:57 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39200 DF PROTO=TCP SPT=52266 DPT=9102 SEQ=546553762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB3F9D90000000001030307) 
Dec 05 09:18:58 np0005546420.localdomain sudo[122198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztnwjwzbmpmpuqjxjjskdmazycrcveui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926338.073102-1024-98077909848250/AnsiballZ_stat.py
Dec 05 09:18:58 np0005546420.localdomain sudo[122198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:58 np0005546420.localdomain python3.9[122200]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:18:58 np0005546420.localdomain sudo[122198]: pam_unix(sudo:session): session closed for user root
Dec 05 09:18:58 np0005546420.localdomain sudo[122271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aotbmggoujsedkyrcgooqrhasjjqutay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926338.073102-1024-98077909848250/AnsiballZ_copy.py
Dec 05 09:18:58 np0005546420.localdomain sudo[122271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:18:59 np0005546420.localdomain python3.9[122273]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926338.073102-1024-98077909848250/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:18:59 np0005546420.localdomain sudo[122271]: pam_unix(sudo:session): session closed for user root
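
The stat/copy pair at 09:18:58-09:18:59 is Ansible's usual two-phase file deployment: the controller first stats the destination, compares checksums, then ships the rendered source. The _original_basename of edpm-modprobe.conf.j2 suggests a template task, approximately:

    - name: Deploy the EDPM modules-load configuration
      ansible.builtin.template:
        src: edpm-modprobe.conf.j2
        dest: /etc/modules-load.d/99-edpm.conf
        owner: root
        group: root
        mode: '0644'
        setype: etc_t

The same pattern recurs below for /etc/sysctl.d/99-edpm.conf and /etc/containers/registries.conf.d/20-edpm-podman-registries.conf.
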
Dec 05 09:19:00 np0005546420.localdomain sudo[122320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:19:00 np0005546420.localdomain sudo[122320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:19:00 np0005546420.localdomain sudo[122320]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:00 np0005546420.localdomain sudo[122335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:19:00 np0005546420.localdomain sudo[122335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:19:01 np0005546420.localdomain sudo[122400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dintgxtqllhaxkploechpldcdrxqhbrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926340.6423857-1069-140638902675176/AnsiballZ_systemd.py
Dec 05 09:19:01 np0005546420.localdomain sudo[122400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:01 np0005546420.localdomain python3.9[122408]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:19:01 np0005546420.localdomain sudo[122335]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:01 np0005546420.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 05 09:19:01 np0005546420.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 05 09:19:01 np0005546420.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 05 09:19:01 np0005546420.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 05 09:19:01 np0005546420.localdomain systemd-modules-load[122431]: Module 'msr' is built in
Dec 05 09:19:01 np0005546420.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 05 09:19:01 np0005546420.localdomain sudo[122400]: pam_unix(sudo:session): session closed for user root
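
The restart request logged above, reconstructed as a task from the logged parameters:

    - name: Re-run kernel module loading
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted

The Stopped/Starting/Finished trio confirms the unit cycled, and "Module 'msr' is built in" shows the requested msr module needed no loading on this kernel.
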
Dec 05 09:19:02 np0005546420.localdomain sudo[122521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qaznnxxeashxkoeomrntvisdluuczkdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926341.8145704-1093-202903407029283/AnsiballZ_stat.py
Dec 05 09:19:02 np0005546420.localdomain sudo[122521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:02 np0005546420.localdomain sudo[122524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:19:02 np0005546420.localdomain sudo[122524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:19:02 np0005546420.localdomain sudo[122524]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:02 np0005546420.localdomain python3.9[122523]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:19:02 np0005546420.localdomain sudo[122521]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:02 np0005546420.localdomain sudo[122609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hxafxqfrgklfeowwgxtlcbprvoppdiit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926341.8145704-1093-202903407029283/AnsiballZ_copy.py
Dec 05 09:19:02 np0005546420.localdomain sudo[122609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:02 np0005546420.localdomain python3.9[122611]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926341.8145704-1093-202903407029283/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:19:02 np0005546420.localdomain sudo[122609]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41616 DF PROTO=TCP SPT=48446 DPT=9882 SEQ=4275714365 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB40DD90000000001030307) 
Dec 05 09:19:03 np0005546420.localdomain sudo[122701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqllvpqdqdjpcwvxmvhwyufspnntmfpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926343.3572555-1147-101132645952352/AnsiballZ_dnf.py
Dec 05 09:19:03 np0005546420.localdomain sudo[122701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:03 np0005546420.localdomain python3.9[122703]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:19:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23523 DF PROTO=TCP SPT=51878 DPT=9105 SEQ=1482288821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB413D90000000001030307) 
Dec 05 09:19:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13930 DF PROTO=TCP SPT=59008 DPT=9101 SEQ=157649555 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB41DD90000000001030307) 
Dec 05 09:19:07 np0005546420.localdomain sudo[122701]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:08 np0005546420.localdomain python3.9[122795]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:19:08 np0005546420.localdomain python3.9[122887]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
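
ansible.builtin.slurp returns file contents base64-encoded, so the play is reading the active tuned profile here, presumably before deciding whether to switch it. A minimal usage sketch; the register variable name is hypothetical:

    - name: Read the active tuned profile
      ansible.builtin.slurp:
        src: /etc/tuned/active_profile
      register: tuned_active   # hypothetical variable name

    - name: Show the decoded profile
      ansible.builtin.debug:
        msg: "{{ tuned_active.content | b64decode | trim }}"
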
Dec 05 09:19:09 np0005546420.localdomain sshd[122978]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:09 np0005546420.localdomain python3.9[122977]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:19:10 np0005546420.localdomain sudo[123069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ycmjbopwjzyhvojenoezptctayghorzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926350.0445495-1270-11549554382655/AnsiballZ_systemd.py
Dec 05 09:19:10 np0005546420.localdomain sudo[123069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21576 DF PROTO=TCP SPT=36956 DPT=9100 SEQ=4268416201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB42C9A0000000001030307) 
Dec 05 09:19:10 np0005546420.localdomain python3.9[123071]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:19:10 np0005546420.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 05 09:19:10 np0005546420.localdomain systemd[1]: tuned.service: Deactivated successfully.
Dec 05 09:19:10 np0005546420.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 05 09:19:10 np0005546420.localdomain systemd[1]: tuned.service: Consumed 2.045s CPU time, no IO.
Dec 05 09:19:10 np0005546420.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 05 09:19:11 np0005546420.localdomain sshd[122978]: Connection reset by authenticating user root 45.140.17.124 port 52534 [preauth]
Dec 05 09:19:11 np0005546420.localdomain sshd[123081]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:12 np0005546420.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Dec 05 09:19:12 np0005546420.localdomain sudo[123069]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21577 DF PROTO=TCP SPT=36956 DPT=9100 SEQ=4268416201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB434990000000001030307) 
Dec 05 09:19:12 np0005546420.localdomain python3.9[123175]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 05 09:19:14 np0005546420.localdomain sshd[123081]: Connection reset by authenticating user root 45.140.17.124 port 64536 [preauth]
Dec 05 09:19:14 np0005546420.localdomain sshd[123190]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:16 np0005546420.localdomain sudo[123267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vwitnwkpklziicajxyabklvwinpfritr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926355.7950964-1441-249184688349040/AnsiballZ_systemd.py
Dec 05 09:19:16 np0005546420.localdomain sudo[123267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:16 np0005546420.localdomain python3.9[123269]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:19:16 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:19:16 np0005546420.localdomain systemd-rc-local-generator[123293]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:19:16 np0005546420.localdomain systemd-sysv-generator[123302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:19:16 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:19:16 np0005546420.localdomain sudo[123267]: pam_unix(sudo:session): session closed for user root
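
Reconstruction of the KSM teardown step, combining the logged enabled=False and state=stopped in one task:

    - name: Stop and disable KSM
      ansible.builtin.systemd:
        name: ksm.service
        state: stopped
        enabled: false

Changing the enable state is what triggers the "systemd[1]: Reloading." daemon-reload, which is also why the rc-local, sysv-generator, and MemoryLimit= deprecation warnings reprint here and again when ksmtuned.service gets the same treatment just below.
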
Dec 05 09:19:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21578 DF PROTO=TCP SPT=36956 DPT=9100 SEQ=4268416201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB444590000000001030307) 
Dec 05 09:19:16 np0005546420.localdomain sshd[123190]: Connection reset by authenticating user root 45.140.17.124 port 64556 [preauth]
Dec 05 09:19:17 np0005546420.localdomain sudo[123398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hyksyrudzhpmnfalimhotelcnqoopxrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926356.8355114-1441-70438401924828/AnsiballZ_systemd.py
Dec 05 09:19:17 np0005546420.localdomain sudo[123398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:17 np0005546420.localdomain sshd[123400]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:17 np0005546420.localdomain python3.9[123401]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:19:17 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:19:17 np0005546420.localdomain systemd-sysv-generator[123436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:19:17 np0005546420.localdomain systemd-rc-local-generator[123433]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:19:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:19:17 np0005546420.localdomain sudo[123398]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:18 np0005546420.localdomain sudo[123531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pzswyjvpfegfjgmcdtoegxconiqljcms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926358.1655426-1489-226797531298486/AnsiballZ_command.py
Dec 05 09:19:18 np0005546420.localdomain sudo[123531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:18 np0005546420.localdomain sshd[123400]: Invalid user guest from 45.140.17.124 port 64568
Dec 05 09:19:18 np0005546420.localdomain python3.9[123533]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:19:18 np0005546420.localdomain sudo[123531]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1973 DF PROTO=TCP SPT=39054 DPT=9105 SEQ=2868152299 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB44C740000000001030307) 
Dec 05 09:19:19 np0005546420.localdomain sshd[123400]: Connection reset by invalid user guest 45.140.17.124 port 64568 [preauth]
Dec 05 09:19:19 np0005546420.localdomain sshd[123616]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:19 np0005546420.localdomain sudo[123625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nrjlqqwgzxnlpudnbrocfiizbddasgwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926359.0411713-1513-162273142583850/AnsiballZ_command.py
Dec 05 09:19:19 np0005546420.localdomain sudo[123625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:19 np0005546420.localdomain python3.9[123627]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:19:19 np0005546420.localdomain kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k FS
Dec 05 09:19:19 np0005546420.localdomain sudo[123625]: pam_unix(sudo:session): session closed for user root
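
The two free-form command tasks behind the swap setup, as logged (no creates= guard was recorded, so neither call is idempotent as written):

    - name: Initialize the swap file
      ansible.legacy.command: mkswap "/swap"

    - name: Activate the swap file
      ansible.legacy.command: swapon "/swap"

The kernel line confirms the result: 1048572 KiB of swap, i.e. a 1 GiB file minus one 4 KiB page for the swap header.
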
Dec 05 09:19:19 np0005546420.localdomain sudo[123719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blqjtvcxoggccjadafftannhtqsyjasm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926359.67601-1537-49270646515296/AnsiballZ_command.py
Dec 05 09:19:19 np0005546420.localdomain sudo[123719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:20 np0005546420.localdomain python3.9[123721]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:19:21 np0005546420.localdomain sudo[123719]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:21 np0005546420.localdomain sudo[123818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vxqiiitjoijwkywnsygezeeriyjeuvxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926361.448943-1561-168743905127308/AnsiballZ_command.py
Dec 05 09:19:21 np0005546420.localdomain sudo[123818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:21 np0005546420.localdomain python3.9[123820]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:19:21 np0005546420.localdomain sudo[123818]: pam_unix(sudo:session): session closed for user root
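
Writing 2 to /sys/kernel/mm/ksm/run disables KSM and unmerges already-shared pages. Note that the invocation above was logged with _uses_shell=False, under which the command module passes ">/sys/kernel/mm/ksm/run" to echo as a literal argument instead of performing a redirection; for the write to actually land, the task would need the shell form, something like:

    - name: Disable KSM and unmerge shared pages
      ansible.builtin.shell: echo 2 > /sys/kernel/mm/ksm/run
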
Dec 05 09:19:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1975 DF PROTO=TCP SPT=39054 DPT=9105 SEQ=2868152299 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB458990000000001030307) 
Dec 05 09:19:22 np0005546420.localdomain sshd[123616]: Connection reset by authenticating user root 45.140.17.124 port 64572 [preauth]
Dec 05 09:19:22 np0005546420.localdomain sudo[123911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vslboxfnwhmgpnhbsobraqmkthnfjhua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926362.1049082-1586-140484318908605/AnsiballZ_systemd.py
Dec 05 09:19:22 np0005546420.localdomain sudo[123911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:22 np0005546420.localdomain python3.9[123913]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:19:22 np0005546420.localdomain systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 05 09:19:22 np0005546420.localdomain systemd[1]: Stopped Apply Kernel Variables.
Dec 05 09:19:22 np0005546420.localdomain systemd[1]: Stopping Apply Kernel Variables...
Dec 05 09:19:22 np0005546420.localdomain systemd[1]: Starting Apply Kernel Variables...
Dec 05 09:19:22 np0005546420.localdomain systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 05 09:19:22 np0005546420.localdomain systemd[1]: Finished Apply Kernel Variables.
Dec 05 09:19:22 np0005546420.localdomain sudo[123911]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:23 np0005546420.localdomain sshd[117621]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:19:23 np0005546420.localdomain systemd[1]: session-38.scope: Deactivated successfully.
Dec 05 09:19:23 np0005546420.localdomain systemd[1]: session-38.scope: Consumed 1min 59.914s CPU time.
Dec 05 09:19:23 np0005546420.localdomain systemd-logind[762]: Session 38 logged out. Waiting for processes to exit.
Dec 05 09:19:23 np0005546420.localdomain systemd-logind[762]: Removed session 38.
Dec 05 09:19:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21579 DF PROTO=TCP SPT=36956 DPT=9100 SEQ=4268416201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB463D90000000001030307) 
Dec 05 09:19:28 np0005546420.localdomain sshd[123933]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:28 np0005546420.localdomain sshd[123933]: Accepted publickey for zuul from 192.168.122.30 port 49298 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:19:28 np0005546420.localdomain systemd-logind[762]: New session 39 of user zuul.
Dec 05 09:19:28 np0005546420.localdomain systemd[1]: Started Session 39 of User zuul.
Dec 05 09:19:28 np0005546420.localdomain sshd[123933]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:19:29 np0005546420.localdomain python3.9[124026]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:19:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46171 DF PROTO=TCP SPT=35898 DPT=9102 SEQ=1042752822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB47A190000000001030307) 
Dec 05 09:19:30 np0005546420.localdomain python3.9[124120]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:19:31 np0005546420.localdomain sudo[124214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfnmsuqgeetwfyupixviqlyfxhclizgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926371.371717-111-142507985265132/AnsiballZ_command.py
Dec 05 09:19:31 np0005546420.localdomain sudo[124214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:31 np0005546420.localdomain python3.9[124216]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:19:31 np0005546420.localdomain sudo[124214]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11637 DF PROTO=TCP SPT=38164 DPT=9882 SEQ=1686722225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB481DA0000000001030307) 
Dec 05 09:19:32 np0005546420.localdomain python3.9[124307]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:19:33 np0005546420.localdomain sudo[124401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-weywnjnygjzyybkeeorzvtrxkadrycsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926373.4514034-171-160753012755483/AnsiballZ_setup.py
Dec 05 09:19:33 np0005546420.localdomain sudo[124401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:33 np0005546420.localdomain python3.9[124403]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:19:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1977 DF PROTO=TCP SPT=39054 DPT=9105 SEQ=2868152299 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB487D90000000001030307) 
Dec 05 09:19:34 np0005546420.localdomain sudo[124401]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:34 np0005546420.localdomain sudo[124455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tafadxbvsxcfqhnqyzqoaldmgznrqqjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926373.4514034-171-160753012755483/AnsiballZ_dnf.py
Dec 05 09:19:34 np0005546420.localdomain sudo[124455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:34 np0005546420.localdomain python3.9[124457]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:19:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15248 DF PROTO=TCP SPT=43666 DPT=9101 SEQ=1552708690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB493DA0000000001030307) 
Dec 05 09:19:38 np0005546420.localdomain sudo[124455]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:38 np0005546420.localdomain sudo[124549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ibwetksttjymmnwjneulnwwnzfdodeng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926378.2925375-207-195539541087466/AnsiballZ_setup.py
Dec 05 09:19:38 np0005546420.localdomain sudo[124549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:38 np0005546420.localdomain python3.9[124551]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:19:39 np0005546420.localdomain sudo[124549]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:39 np0005546420.localdomain sudo[124696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfpddtaomvfpknypmncselpdngtmthii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926379.516215-240-88654792449400/AnsiballZ_file.py
Dec 05 09:19:39 np0005546420.localdomain sudo[124696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:40 np0005546420.localdomain python3.9[124698]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:19:40 np0005546420.localdomain sudo[124696]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:40 np0005546420.localdomain sudo[124788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdwxijuhmpngodnvkwqqaudnhusqfmhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926380.283989-264-236837375471111/AnsiballZ_command.py
Dec 05 09:19:40 np0005546420.localdomain sudo[124788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29642 DF PROTO=TCP SPT=53116 DPT=9100 SEQ=2426514655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB4A1D90000000001030307) 
Dec 05 09:19:40 np0005546420.localdomain python3.9[124790]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:19:40 np0005546420.localdomain sudo[124788]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:41 np0005546420.localdomain sudo[124892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ryjekqmpxjhtdewfxrdocdmvswzftxuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926381.0125182-288-188782495320901/AnsiballZ_stat.py
Dec 05 09:19:41 np0005546420.localdomain sudo[124892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:41 np0005546420.localdomain python3.9[124894]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:19:41 np0005546420.localdomain sudo[124892]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:41 np0005546420.localdomain sudo[124940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zmixniljhvpybgwowlpnsbkjmwiqfurs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926381.0125182-288-188782495320901/AnsiballZ_file.py
Dec 05 09:19:41 np0005546420.localdomain sudo[124940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:42 np0005546420.localdomain python3.9[124942]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:19:42 np0005546420.localdomain sudo[124940]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:42 np0005546420.localdomain sudo[125032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rvtpiudusxnktgkiyiotuxutvzuysvde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926382.2218096-324-205432765801682/AnsiballZ_stat.py
Dec 05 09:19:42 np0005546420.localdomain sudo[125032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:42 np0005546420.localdomain python3.9[125034]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:19:42 np0005546420.localdomain sudo[125032]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29643 DF PROTO=TCP SPT=53116 DPT=9100 SEQ=2426514655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB4A9D90000000001030307) 
Dec 05 09:19:43 np0005546420.localdomain sudo[125105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfbllmgmaakjaxvuucorvbuqufqwcxto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926382.2218096-324-205432765801682/AnsiballZ_copy.py
Dec 05 09:19:43 np0005546420.localdomain sudo[125105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:43 np0005546420.localdomain python3.9[125107]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926382.2218096-324-205432765801682/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:19:43 np0005546420.localdomain sudo[125105]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:43 np0005546420.localdomain sudo[125197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vockyfivhvrrakivluvvxbslokceihkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926383.5820508-372-118845584643535/AnsiballZ_ini_file.py
Dec 05 09:19:43 np0005546420.localdomain sudo[125197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:44 np0005546420.localdomain python3.9[125199]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:19:44 np0005546420.localdomain sudo[125197]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:44 np0005546420.localdomain sudo[125289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxsnyffehnceleyknnpldtojhioagtue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926384.275039-372-184778803244383/AnsiballZ_ini_file.py
Dec 05 09:19:44 np0005546420.localdomain sudo[125289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:44 np0005546420.localdomain python3.9[125291]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:19:44 np0005546420.localdomain sudo[125289]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:45 np0005546420.localdomain sudo[125381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awqnpfrczlfzatfgljczsbgzrhibcocr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926384.8593135-372-44093901517728/AnsiballZ_ini_file.py
Dec 05 09:19:45 np0005546420.localdomain sudo[125381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:45 np0005546420.localdomain python3.9[125383]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:19:45 np0005546420.localdomain sudo[125381]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:45 np0005546420.localdomain sudo[125473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-luiouvowfhoclgidvzekmbjvhcxddvdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926385.4462547-372-242246132938347/AnsiballZ_ini_file.py
Dec 05 09:19:45 np0005546420.localdomain sudo[125473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:45 np0005546420.localdomain python3.9[125475]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:19:45 np0005546420.localdomain sudo[125473]: pam_unix(sudo:session): session closed for user root
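
The four community.general.ini_file edits at 09:19:44-09:19:45 assemble /etc/containers/containers.conf one option at a time. An equivalent loop form (the loop structure is an assumption; the section/option/value triples are exactly as logged, with the TOML string values kept quoted):

    - name: Configure containers.conf for EDPM
      community.general.ini_file:
        path: /etc/containers/containers.conf
        create: true
        owner: root
        group: root
        mode: '0644'
        setype: etc_t
        section: "{{ item.section }}"
        option: "{{ item.option }}"
        value: "{{ item.value }}"
      loop:
        - { section: containers, option: pids_limit, value: '4096' }
        - { section: engine, option: events_logger, value: '"journald"' }
        - { section: engine, option: runtime, value: '"crun"' }
        - { section: network, option: network_backend, value: '"netavark"' }

Net effect: [containers] pids_limit = 4096; [engine] events_logger = "journald" and runtime = "crun"; [network] network_backend = "netavark".
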
Dec 05 09:19:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29644 DF PROTO=TCP SPT=53116 DPT=9100 SEQ=2426514655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB4B9990000000001030307) 
Dec 05 09:19:46 np0005546420.localdomain python3.9[125565]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:19:47 np0005546420.localdomain sudo[125657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmbhgubnpzndmljxewsoflfjampvyozh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926387.358527-492-212345340910775/AnsiballZ_dnf.py
Dec 05 09:19:47 np0005546420.localdomain sudo[125657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:47 np0005546420.localdomain python3.9[125659]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
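
This and the dnf invocations that follow all carry download_only=True with no state, meaning the packages are only fetched into the local dnf cache, not installed: a prefetch pass so the later install steps do not depend on repository availability. Sketch:

    - name: Prefetch EDPM base packages into the dnf cache
      ansible.legacy.dnf:
        download_only: true
        name:
          - driverctl
          - lvm2
          # ... remaining packages exactly as logged above
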
Dec 05 09:19:48 np0005546420.localdomain sshd[125662]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59909 DF PROTO=TCP SPT=38438 DPT=9105 SEQ=200159402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB4C1A10000000001030307) 
Dec 05 09:19:51 np0005546420.localdomain sudo[125657]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:51 np0005546420.localdomain sshd[125662]: Connection reset by authenticating user root 45.135.232.92 port 36476 [preauth]
Dec 05 09:19:51 np0005546420.localdomain sudo[125753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqkkovwltliluntzoxgmfhzcpojfsnpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926391.2353148-516-167987333495503/AnsiballZ_dnf.py
Dec 05 09:19:51 np0005546420.localdomain sudo[125753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:51 np0005546420.localdomain sshd[125756]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:51 np0005546420.localdomain python3.9[125755]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:19:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59911 DF PROTO=TCP SPT=38438 DPT=9105 SEQ=200159402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB4CD990000000001030307) 
Dec 05 09:19:53 np0005546420.localdomain sshd[125756]: Connection reset by authenticating user root 45.135.232.92 port 36490 [preauth]
Dec 05 09:19:53 np0005546420.localdomain sshd[125760]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:54 np0005546420.localdomain sshd[125760]: Invalid user admin from 45.135.232.92 port 36500
Dec 05 09:19:55 np0005546420.localdomain sudo[125753]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46173 DF PROTO=TCP SPT=35898 DPT=9102 SEQ=1042752822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB4D9D90000000001030307) 
Dec 05 09:19:55 np0005546420.localdomain sshd[125760]: Connection reset by invalid user admin 45.135.232.92 port 36500 [preauth]
Dec 05 09:19:55 np0005546420.localdomain sshd[125821]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:55 np0005546420.localdomain sudo[125852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nxcjqwmwykzjhbyhdcvlvvqrrxpejaov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926395.3479795-546-214067060810584/AnsiballZ_dnf.py
Dec 05 09:19:55 np0005546420.localdomain sudo[125852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:19:55 np0005546420.localdomain python3.9[125854]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:19:57 np0005546420.localdomain sshd[125821]: Connection reset by authenticating user root 45.135.232.92 port 64978 [preauth]
Dec 05 09:19:57 np0005546420.localdomain sshd[125858]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:19:59 np0005546420.localdomain sudo[125852]: pam_unix(sudo:session): session closed for user root
Dec 05 09:19:59 np0005546420.localdomain sshd[125858]: Invalid user admin from 45.135.232.92 port 64986
Dec 05 09:19:59 np0005546420.localdomain sshd[125858]: Connection reset by invalid user admin 45.135.232.92 port 64986 [preauth]
Dec 05 09:19:59 np0005546420.localdomain sudo[125955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-stpygjfqubbywmpsyhvonrcdqyavyjys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926399.7157664-573-219848095799962/AnsiballZ_dnf.py
Dec 05 09:19:59 np0005546420.localdomain sudo[125955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:00 np0005546420.localdomain python3.9[125957]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:20:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16589 DF PROTO=TCP SPT=42236 DPT=9102 SEQ=894172124 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB4EF590000000001030307) 
Dec 05 09:20:02 np0005546420.localdomain sudo[125960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:20:02 np0005546420.localdomain sudo[125960]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:20:02 np0005546420.localdomain sudo[125960]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:02 np0005546420.localdomain sudo[125975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:20:02 np0005546420.localdomain sudo[125975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:20:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38556 DF PROTO=TCP SPT=39088 DPT=9882 SEQ=3883990522 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB4F7D90000000001030307) 
Dec 05 09:20:03 np0005546420.localdomain sudo[125975]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:03 np0005546420.localdomain sudo[125955]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:04 np0005546420.localdomain sudo[126122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mnhvvimlqruuvccdoksytoeimvlcfdtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926403.997085-609-18010740241147/AnsiballZ_dnf.py
Dec 05 09:20:04 np0005546420.localdomain sudo[126122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:04 np0005546420.localdomain sudo[126101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:20:04 np0005546420.localdomain sudo[126101]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:20:04 np0005546420.localdomain sudo[126101]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59913 DF PROTO=TCP SPT=38438 DPT=9105 SEQ=200159402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB4FDD90000000001030307) 
Dec 05 09:20:04 np0005546420.localdomain python3.9[126127]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:20:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40329 DF PROTO=TCP SPT=36688 DPT=9101 SEQ=2756202649 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB507D90000000001030307) 
Dec 05 09:20:07 np0005546420.localdomain sudo[126122]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:08 np0005546420.localdomain sudo[126220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ehrwuwruakacdxkxjtidbznpcturcsti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926408.002214-636-254038793258612/AnsiballZ_dnf.py
Dec 05 09:20:08 np0005546420.localdomain sudo[126220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:08 np0005546420.localdomain python3.9[126222]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:20:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5777 DF PROTO=TCP SPT=52328 DPT=9100 SEQ=1511573438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB517190000000001030307) 
Dec 05 09:20:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5778 DF PROTO=TCP SPT=52328 DPT=9100 SEQ=1511573438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB51F190000000001030307) 
Dec 05 09:20:12 np0005546420.localdomain sudo[126220]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:13 np0005546420.localdomain sudo[126314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhsxwqeapyjaiczssirkqxwfbtdnaaqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926413.1744099-663-47214049857601/AnsiballZ_dnf.py
Dec 05 09:20:13 np0005546420.localdomain sudo[126314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:13 np0005546420.localdomain python3.9[126316]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:20:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5779 DF PROTO=TCP SPT=52328 DPT=9100 SEQ=1511573438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB52ED90000000001030307) 
Dec 05 09:20:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52044 DF PROTO=TCP SPT=39806 DPT=9105 SEQ=3363068414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB536D10000000001030307) 
Dec 05 09:20:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52046 DF PROTO=TCP SPT=39806 DPT=9105 SEQ=3363068414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB542DA0000000001030307) 
Dec 05 09:20:24 np0005546420.localdomain sudo[126314]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16591 DF PROTO=TCP SPT=42236 DPT=9102 SEQ=894172124 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB54FD90000000001030307) 
Dec 05 09:20:25 np0005546420.localdomain sudo[126481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mfsnlfzmgyevmlmpzwamcwesrfyftilk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926425.2050786-699-199648685803144/AnsiballZ_file.py
Dec 05 09:20:25 np0005546420.localdomain sudo[126481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:25 np0005546420.localdomain python3.9[126483]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:20:25 np0005546420.localdomain sudo[126481]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:26 np0005546420.localdomain sudo[126586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htolmdmdepszmukukwoeqqoghygkknra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926425.8709702-723-191046346106295/AnsiballZ_stat.py
Dec 05 09:20:26 np0005546420.localdomain sudo[126586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:26 np0005546420.localdomain python3.9[126588]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:20:26 np0005546420.localdomain sudo[126586]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:26 np0005546420.localdomain sudo[126659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hnkttqquxtfcongthrelgtnzlwlgustx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926425.8709702-723-191046346106295/AnsiballZ_copy.py
Dec 05 09:20:26 np0005546420.localdomain sudo[126659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:26 np0005546420.localdomain python3.9[126661]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764926425.8709702-723-191046346106295/.source.json _original_basename=.y4q3v_wp follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:20:26 np0005546420.localdomain sudo[126659]: pam_unix(sudo:session): session closed for user root
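The stat/copy pair above is Ansible's standard copy flow: the controller first checksums the destination with a stat call, then transfers and installs the file only when the SHA1 differs. The underlying task was roughly the following; whether the play used src: or inline content: is not recoverable from the log (content=NOT_LOGGING_PARAMETER), so the payload below is a labeled placeholder:

- name: Install registry credentials for root's podman
  become: true
  ansible.builtin.copy:
    dest: /root/.config/containers/auth.json
    owner: zuul
    group: zuul
    mode: "0660"
    content: "{{ registry_auth_json }}"  # placeholder variable, not from the log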
Dec 05 09:20:27 np0005546420.localdomain sudo[126751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-achozhktrfzffwrgyixwerjvuznirbfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926427.2085893-777-148190698993736/AnsiballZ_podman_image.py
Dec 05 09:20:27 np0005546420.localdomain sudo[126751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:27 np0005546420.localdomain python3.9[126753]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 05 09:20:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13875 DF PROTO=TCP SPT=60224 DPT=9102 SEQ=389405211 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB564590000000001030307) 
Dec 05 09:20:30 np0005546420.localdomain systemd-journald[48245]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation.
Dec 05 09:20:30 np0005546420.localdomain systemd-journald[48245]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 05 09:20:30 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:20:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56430 DF PROTO=TCP SPT=39124 DPT=9882 SEQ=2041789614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB56BD90000000001030307) 
Dec 05 09:20:33 np0005546420.localdomain podman[126766]: 2025-12-05 09:20:27.98156316 +0000 UTC m=+0.039227713 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 05 09:20:34 np0005546420.localdomain sudo[126751]: pam_unix(sudo:session): session closed for user root
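Container images are then pulled one task at a time through containers.podman.podman_image, using the auth file staged above; the podman journal line at 09:20:33 marks the pull completing about six seconds after the module started. In task form, with every parameter taken from the logged invocation:

- name: Pull the ovn-controller image
  become: true
  containers.podman.podman_image:
    name: quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
    auth_file: /root/.config/containers/auth.json
    pull: true
    state: present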
Dec 05 09:20:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52048 DF PROTO=TCP SPT=39806 DPT=9105 SEQ=3363068414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB573D90000000001030307) 
Dec 05 09:20:35 np0005546420.localdomain sudo[126969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mnxqewgfbqtjzpedpysdhtytkctxpyxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926434.7480125-810-1583253949544/AnsiballZ_podman_image.py
Dec 05 09:20:35 np0005546420.localdomain sudo[126969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:35 np0005546420.localdomain python3.9[126971]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 05 09:20:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64371 DF PROTO=TCP SPT=35714 DPT=9101 SEQ=816709087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB57DD90000000001030307) 
Dec 05 09:20:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3291 DF PROTO=TCP SPT=49122 DPT=9100 SEQ=4079397195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB58C590000000001030307) 
Dec 05 09:20:42 np0005546420.localdomain podman[126983]: 2025-12-05 09:20:35.36493802 +0000 UTC m=+0.045151526 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 09:20:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3292 DF PROTO=TCP SPT=49122 DPT=9100 SEQ=4079397195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB5945A0000000001030307) 
Dec 05 09:20:43 np0005546420.localdomain sudo[126969]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:44 np0005546420.localdomain sudo[127178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mnuryaiinfhzkvdesscwsszicfmwogik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926443.9768014-846-119831253572772/AnsiballZ_podman_image.py
Dec 05 09:20:44 np0005546420.localdomain sudo[127178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:44 np0005546420.localdomain python3.9[127180]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 05 09:20:46 np0005546420.localdomain podman[127192]: 2025-12-05 09:20:44.610507686 +0000 UTC m=+0.046418646 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Dec 05 09:20:46 np0005546420.localdomain sudo[127178]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3293 DF PROTO=TCP SPT=49122 DPT=9100 SEQ=4079397195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB5A4190000000001030307) 
Dec 05 09:20:47 np0005546420.localdomain sudo[127354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvoepfjvzhbphchpqgaqlzdenvxrvmhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926446.9206853-873-159332516286831/AnsiballZ_podman_image.py
Dec 05 09:20:47 np0005546420.localdomain sudo[127354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:47 np0005546420.localdomain python3.9[127356]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 05 09:20:48 np0005546420.localdomain podman[127368]: 2025-12-05 09:20:47.542390944 +0000 UTC m=+0.047906152 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 09:20:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56431 DF PROTO=TCP SPT=39124 DPT=9882 SEQ=2041789614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB5ABDA0000000001030307) 
Dec 05 09:20:48 np0005546420.localdomain sudo[127354]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:49 np0005546420.localdomain sudo[127527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqqysynjzsjmuqsgulpqrlhrsciutamt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926449.2144575-900-258505345171564/AnsiballZ_podman_image.py
Dec 05 09:20:49 np0005546420.localdomain sudo[127527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:49 np0005546420.localdomain python3.9[127529]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 05 09:20:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37802 DF PROTO=TCP SPT=52370 DPT=9105 SEQ=4155209246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB5B8190000000001030307) 
Dec 05 09:20:53 np0005546420.localdomain podman[127541]: 2025-12-05 09:20:49.829402347 +0000 UTC m=+0.047420696 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Dec 05 09:20:53 np0005546420.localdomain sudo[127527]: pam_unix(sudo:session): session closed for user root
Dec 05 09:20:54 np0005546420.localdomain sudo[127715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxqnwzftihxwhpukycedepwlorfjziem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926453.8490644-900-146842068286940/AnsiballZ_podman_image.py
Dec 05 09:20:54 np0005546420.localdomain sudo[127715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:20:54 np0005546420.localdomain python3.9[127717]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 05 09:20:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3294 DF PROTO=TCP SPT=49122 DPT=9100 SEQ=4079397195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB5C3D90000000001030307) 
Dec 05 09:20:55 np0005546420.localdomain podman[127731]: 2025-12-05 09:20:54.441266673 +0000 UTC m=+0.045066993 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Dec 05 09:20:56 np0005546420.localdomain sudo[127715]: pam_unix(sudo:session): session closed for user root
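The node-exporter image, unlike the OpenStack images above, is referenced by digest rather than by tag, which pins the exact image regardless of where any tag currently points:

- name: Pull node-exporter pinned to an exact digest
  become: true
  containers.podman.podman_image:
    name: quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
    auth_file: /root/.config/containers/auth.json
    pull: true
    state: present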
Dec 05 09:20:56 np0005546420.localdomain sshd[123933]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:20:56 np0005546420.localdomain systemd[1]: session-39.scope: Deactivated successfully.
Dec 05 09:20:56 np0005546420.localdomain systemd[1]: session-39.scope: Consumed 1min 33.147s CPU time.
Dec 05 09:20:56 np0005546420.localdomain systemd-logind[762]: Session 39 logged out. Waiting for processes to exit.
Dec 05 09:20:56 np0005546420.localdomain systemd-logind[762]: Removed session 39.
Dec 05 09:21:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11092 DF PROTO=TCP SPT=54748 DPT=9102 SEQ=740763160 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB5D99A0000000001030307) 
Dec 05 09:21:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49536 DF PROTO=TCP SPT=47484 DPT=9882 SEQ=299858532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB5E1D90000000001030307) 
Dec 05 09:21:03 np0005546420.localdomain sshd[127842]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:21:03 np0005546420.localdomain sshd[127842]: Accepted publickey for zuul from 192.168.122.30 port 47712 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:21:03 np0005546420.localdomain systemd-logind[762]: New session 40 of user zuul.
Dec 05 09:21:03 np0005546420.localdomain systemd[1]: Started Session 40 of User zuul.
Dec 05 09:21:03 np0005546420.localdomain sshd[127842]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:21:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37804 DF PROTO=TCP SPT=52370 DPT=9105 SEQ=4155209246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB5E7DA0000000001030307) 
Dec 05 09:21:04 np0005546420.localdomain sudo[127936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:21:04 np0005546420.localdomain sudo[127936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:21:04 np0005546420.localdomain sudo[127936]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:04 np0005546420.localdomain sudo[127951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:21:04 np0005546420.localdomain sudo[127951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:21:04 np0005546420.localdomain python3.9[127935]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:21:05 np0005546420.localdomain sudo[127951]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:05 np0005546420.localdomain sudo[128016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:21:05 np0005546420.localdomain sudo[128016]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:21:05 np0005546420.localdomain sudo[128016]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:05 np0005546420.localdomain sudo[128031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 09:21:05 np0005546420.localdomain sudo[128031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:21:05 np0005546420.localdomain sudo[128031]: pam_unix(sudo:session): session closed for user root
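The ceph-admin sudo bursts (gather-facts at 09:21:04, list-networks at 09:21:05) are not part of the zuul play: they appear to be the Ceph orchestrator's periodic host probes, run as the ceph-admin user (uid 1002) against the cephadm copy staged under /var/lib/ceph. The manual equivalent, with the command taken verbatim from the log:

- name: Enumerate host networks the way the orchestrator does (manual equivalent)
  become: true
  ansible.builtin.command:
    argv:
      - /bin/python3
      - /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3
      - --image
      - registry.redhat.io/rhceph/rhceph-7-rhel9:latest
      - --timeout
      - "895"
      - list-networks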
Dec 05 09:21:06 np0005546420.localdomain sudo[128140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awmrdsxrysqienjawmwmsvtshvnpmgtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926465.647011-69-78240329300139/AnsiballZ_getent.py
Dec 05 09:21:06 np0005546420.localdomain sudo[128140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:06 np0005546420.localdomain python3.9[128142]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 05 09:21:06 np0005546420.localdomain sudo[128140]: pam_unix(sudo:session): session closed for user root
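Back in the zuul play, the getent lookup resolves the openvswitch account before the Open vSwitch packages are touched. With fail_key=True the module fails when the key is missing, so the task was presumably registered with failure tolerated, to detect whether a previous run already created the account; that interpretation is an assumption:

- name: Check for an existing openvswitch account
  become: true
  ansible.builtin.getent:
    database: passwd
    key: openvswitch
    fail_key: true
  register: ovs_user     # assumption: inspected by later tasks
  ignore_errors: true    # assumption: the account may legitimately not exist yet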
Dec 05 09:21:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48719 DF PROTO=TCP SPT=53264 DPT=9101 SEQ=3460648467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB5F3D90000000001030307) 
Dec 05 09:21:07 np0005546420.localdomain sudo[128233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yewgidbcmvmhdmkkunbqlnefsqazqxpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926467.0503058-105-190604657189628/AnsiballZ_setup.py
Dec 05 09:21:07 np0005546420.localdomain sudo[128233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:07 np0005546420.localdomain python3.9[128235]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:21:07 np0005546420.localdomain sudo[128233]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:08 np0005546420.localdomain sudo[128287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubsglshvczbyxwfihbpmbixolzacgciy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926467.0503058-105-190604657189628/AnsiballZ_dnf.py
Dec 05 09:21:08 np0005546420.localdomain sudo[128287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:08 np0005546420.localdomain python3.9[128289]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:21:08 np0005546420.localdomain sudo[128291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:21:08 np0005546420.localdomain sudo[128291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:21:08 np0005546420.localdomain sudo[128291]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48537 DF PROTO=TCP SPT=53730 DPT=9100 SEQ=3916400577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB601590000000001030307) 
Dec 05 09:21:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48538 DF PROTO=TCP SPT=53730 DPT=9100 SEQ=3916400577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB609590000000001030307) 
Dec 05 09:21:12 np0005546420.localdomain sudo[128287]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:13 np0005546420.localdomain sudo[128654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ppowydresrgjyrmhxewvrcjdllyxmaaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926473.1625018-147-217462144960226/AnsiballZ_dnf.py
Dec 05 09:21:13 np0005546420.localdomain sudo[128654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:13 np0005546420.localdomain python3.9[128656]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:21:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48539 DF PROTO=TCP SPT=53730 DPT=9100 SEQ=3916400577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB619190000000001030307) 
Dec 05 09:21:17 np0005546420.localdomain sudo[128654]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:18 np0005546420.localdomain sudo[128748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbzspknpmycfndjzcokwuoghtfutwlon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926477.9685292-171-41269489675551/AnsiballZ_systemd.py
Dec 05 09:21:18 np0005546420.localdomain sudo[128748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:18 np0005546420.localdomain python3.9[128750]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:21:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27820 DF PROTO=TCP SPT=35672 DPT=9105 SEQ=2424986157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB621300000000001030307) 
Dec 05 09:21:18 np0005546420.localdomain sudo[128748]: pam_unix(sudo:session): session closed for user root
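openvswitch3.3 then goes through the same two-phase dnf pattern as the virtualization set (download_only at 09:21:08, state=present at 09:21:13), plausibly so that repository problems surface before anything changes on disk, after which the service is enabled and started in a single systemd task:

- name: Download openvswitch3.3 without installing
  become: true
  ansible.builtin.dnf:
    name: openvswitch3.3
    download_only: true

- name: Install openvswitch3.3 from the cached packages
  become: true
  ansible.builtin.dnf:
    name: openvswitch3.3
    state: present

- name: Enable and start Open vSwitch
  become: true
  ansible.builtin.systemd:
    name: openvswitch.service
    enabled: true
    masked: false
    state: started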
Dec 05 09:21:20 np0005546420.localdomain python3.9[128843]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:21:21 np0005546420.localdomain sudo[128933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dgojiuralxglbmhokmtnwmklvcrnbewv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926481.0340521-225-148966977848851/AnsiballZ_sefcontext.py
Dec 05 09:21:21 np0005546420.localdomain sudo[128933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:21 np0005546420.localdomain python3.9[128935]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 05 09:21:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27822 DF PROTO=TCP SPT=35672 DPT=9105 SEQ=2424986157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB62D1A0000000001030307) 
Dec 05 09:21:23 np0005546420.localdomain kernel: SELinux:  Converting 2744 SID table entries...
Dec 05 09:21:23 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:21:23 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 09:21:23 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:21:23 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:21:23 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:21:23 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:21:23 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:21:23 np0005546420.localdomain sudo[128933]: pam_unix(sudo:session): session closed for user root
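The sefcontext task records a persistent SELinux file-context rule for /var/lib/edpm-config, and because reload=True the commit triggers a policy reload; the kernel's "SELinux: Converting 2744 SID table entries" and policy-capability burst at 09:21:23 appears to be that reload landing. The ansible.builtin.file task at 09:21:31 below then creates the directory with the matching setype. In task form:

- name: Label /var/lib/edpm-config for container access
  become: true
  community.general.sefcontext:
    target: '/var/lib/edpm-config(/.*)?'
    setype: container_file_t
    selevel: s0
    state: present
    reload: true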
Dec 05 09:21:24 np0005546420.localdomain python3.9[129104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:21:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48540 DF PROTO=TCP SPT=53730 DPT=9100 SEQ=3916400577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB639D90000000001030307) 
Dec 05 09:21:25 np0005546420.localdomain sudo[129200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gofzygosvnvhditslqegpsrkwhuvgntk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926485.4414237-279-9514205952954/AnsiballZ_dnf.py
Dec 05 09:21:25 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=18 res=1
Dec 05 09:21:25 np0005546420.localdomain sudo[129200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:25 np0005546420.localdomain python3.9[129202]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:21:29 np0005546420.localdomain sudo[129200]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:29 np0005546420.localdomain sudo[129294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qixgsvvkmhxxezgmlrjthxjokdwhqpga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926489.5812492-303-266391357734359/AnsiballZ_command.py
Dec 05 09:21:29 np0005546420.localdomain sudo[129294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:30 np0005546420.localdomain python3.9[129296]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:21:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37908 DF PROTO=TCP SPT=34890 DPT=9102 SEQ=189168474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB64ED90000000001030307) 
Dec 05 09:21:30 np0005546420.localdomain sudo[129294]: pam_unix(sudo:session): session closed for user root
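After the bulk install at 09:21:25, the play re-verifies the same package list with rpm -V. rpm -V exits non-zero when any file diverges from the RPM database, so a faithful reconstruction has to handle the changed/failed semantics explicitly; the two task keywords at the end are assumptions about how the play consumed the result:

- name: Verify installed package payloads
  become: true
  ansible.builtin.command:
    argv: [rpm, -V, driverctl, lvm2, crudini, jq, nftables, NetworkManager,
           openstack-selinux, python3-libselinux, python3-pyyaml, rsync,
           tmpwatch, sysstat, iproute-tc, ksmtuned, systemd-container,
           crypto-policies-scripts, grubby, sos]
  register: rpm_verify
  changed_when: false
  failed_when: false   # assumption: result evaluated by a later task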
Dec 05 09:21:31 np0005546420.localdomain sudo[129539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqmkmxelrzkdjebafkzkfihltldtzouv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926491.103439-327-83162380148428/AnsiballZ_file.py
Dec 05 09:21:31 np0005546420.localdomain sudo[129539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:31 np0005546420.localdomain python3.9[129541]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 09:21:31 np0005546420.localdomain sudo[129539]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:32 np0005546420.localdomain python3.9[129631]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:21:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41901 DF PROTO=TCP SPT=59828 DPT=9882 SEQ=1958857324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB657D90000000001030307) 
Dec 05 09:21:33 np0005546420.localdomain sudo[129723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hyhadnullmpmpoubcljandreyuzblvlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926493.231406-381-99101642850370/AnsiballZ_dnf.py
Dec 05 09:21:33 np0005546420.localdomain sudo[129723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:33 np0005546420.localdomain python3.9[129725]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:21:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27824 DF PROTO=TCP SPT=35672 DPT=9105 SEQ=2424986157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB65DDA0000000001030307) 
Dec 05 09:21:36 np0005546420.localdomain sudo[129723]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41726 DF PROTO=TCP SPT=45570 DPT=9101 SEQ=2642155262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB667D90000000001030307) 
Dec 05 09:21:37 np0005546420.localdomain sudo[129817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zmzscmtcuksyxpdehpuqpmcmbdqvzmty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926497.1218445-405-112591688636077/AnsiballZ_dnf.py
Dec 05 09:21:37 np0005546420.localdomain sudo[129817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:37 np0005546420.localdomain python3.9[129819]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:21:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50895 DF PROTO=TCP SPT=51112 DPT=9100 SEQ=280587940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB6769A0000000001030307) 
Dec 05 09:21:40 np0005546420.localdomain sudo[129817]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:41 np0005546420.localdomain sudo[129911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxfczgngvjsnvvtjszcravcloomixmer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926501.1081605-429-38555598421470/AnsiballZ_systemd.py
Dec 05 09:21:41 np0005546420.localdomain sudo[129911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:41 np0005546420.localdomain python3.9[129913]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 09:21:41 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:21:41 np0005546420.localdomain systemd-rc-local-generator[129940]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:21:41 np0005546420.localdomain systemd-sysv-generator[129944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:21:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:21:42 np0005546420.localdomain sudo[129911]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:42 np0005546420.localdomain sudo[130043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xbycyvteyqifqpewzwhhwpdzdlqtbluz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926502.4564452-459-171746073148557/AnsiballZ_stat.py
Dec 05 09:21:42 np0005546420.localdomain sudo[130043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50896 DF PROTO=TCP SPT=51112 DPT=9100 SEQ=280587940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB67E990000000001030307) 
Dec 05 09:21:42 np0005546420.localdomain python3.9[130045]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:21:42 np0005546420.localdomain sudo[130043]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:44 np0005546420.localdomain sudo[130135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qsktvryfzvvaoeuxjcjwbkfqeuuywuhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926503.1761937-486-131381556748341/AnsiballZ_ini_file.py
Dec 05 09:21:44 np0005546420.localdomain sudo[130135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:44 np0005546420.localdomain python3.9[130137]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:21:44 np0005546420.localdomain sudo[130135]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:44 np0005546420.localdomain sudo[130229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzflmwdwuabtqqanwhxqnsvbfdorcwfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926504.6457634-510-175256258634355/AnsiballZ_ini_file.py
Dec 05 09:21:44 np0005546420.localdomain sudo[130229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:45 np0005546420.localdomain python3.9[130231]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:21:45 np0005546420.localdomain sudo[130229]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:46 np0005546420.localdomain sudo[130321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzxilodibuheftcglutjifabmmgqzwwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926505.846881-534-193398983126145/AnsiballZ_ini_file.py
Dec 05 09:21:46 np0005546420.localdomain sudo[130321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:46 np0005546420.localdomain python3.9[130323]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:21:46 np0005546420.localdomain sudo[130321]: pam_unix(sudo:session): session closed for user root
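The three consecutive ini_file tasks at 09:21:44 through 09:21:46 pin NetworkManager's behavior in /etc/NetworkManager/NetworkManager.conf: no-auto-default=* stops it from auto-creating default connections for new devices, dns=none stops it from feeding DNS servers into resolv.conf, and rc-manager=unmanaged stops it from rewriting resolv.conf at all, leaving interface and resolver management to os-net-config. The same three edits collapsed into one looped task:

- name: Keep NetworkManager away from interfaces and resolv.conf
  become: true
  community.general.ini_file:
    path: /etc/NetworkManager/NetworkManager.conf
    section: main
    option: "{{ item.option }}"
    value: "{{ item.value }}"
    no_extra_spaces: true
    backup: true
    mode: "0644"
  loop:
    - { option: no-auto-default, value: "*" }
    - { option: dns, value: none }
    - { option: rc-manager, value: unmanaged }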
Dec 05 09:21:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50897 DF PROTO=TCP SPT=51112 DPT=9100 SEQ=280587940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB68E5A0000000001030307) 
Dec 05 09:21:46 np0005546420.localdomain sudo[130413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgnxqgldmfedshemkwszmnqbzpdhcrbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926506.6444037-564-110938591830360/AnsiballZ_stat.py
Dec 05 09:21:46 np0005546420.localdomain sudo[130413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:47 np0005546420.localdomain python3.9[130415]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:21:47 np0005546420.localdomain sudo[130413]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:47 np0005546420.localdomain sudo[130486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sehlyyurbcxrxznkpmcnxpnpocqhxjop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926506.6444037-564-110938591830360/AnsiballZ_copy.py
Dec 05 09:21:47 np0005546420.localdomain sudo[130486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:47 np0005546420.localdomain python3.9[130488]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926506.6444037-564-110938591830360/.source _original_basename=.txvbywcl follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:21:47 np0005546420.localdomain sudo[130486]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:48 np0005546420.localdomain sudo[130578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jdpuyjkihceqpfrmuhtqrjegkciyjuhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926507.9623945-609-188406356482228/AnsiballZ_file.py
Dec 05 09:21:48 np0005546420.localdomain sudo[130578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:48 np0005546420.localdomain python3.9[130580]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:21:48 np0005546420.localdomain sudo[130578]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31816 DF PROTO=TCP SPT=46586 DPT=9105 SEQ=1252309068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB696610000000001030307) 
Dec 05 09:21:49 np0005546420.localdomain sudo[130670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hitxizyxkhhqxhugqtuznqjkrcgivlrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926508.626629-633-49299558963704/AnsiballZ_edpm_os_net_config_mappings.py
Dec 05 09:21:49 np0005546420.localdomain sudo[130670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:49 np0005546420.localdomain python3.9[130672]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 05 09:21:49 np0005546420.localdomain sudo[130670]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:49 np0005546420.localdomain sudo[130762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwcqhxfpaujgylyhbwsvhlepfghkcwyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926509.4780352-660-250725856794043/AnsiballZ_file.py
Dec 05 09:21:49 np0005546420.localdomain sudo[130762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:49 np0005546420.localdomain python3.9[130764]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:21:49 np0005546420.localdomain sudo[130762]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:50 np0005546420.localdomain sudo[130854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttlrusnkcapimgdztsnaeeylixlvockw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926510.2683883-690-34223458701929/AnsiballZ_stat.py
Dec 05 09:21:50 np0005546420.localdomain sudo[130854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:50 np0005546420.localdomain python3.9[130856]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:21:50 np0005546420.localdomain sudo[130854]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:51 np0005546420.localdomain sudo[130927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pzqgldenyesxofhhjwoqnwqseivscopl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926510.2683883-690-34223458701929/AnsiballZ_copy.py
Dec 05 09:21:51 np0005546420.localdomain sudo[130927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:51 np0005546420.localdomain python3.9[130929]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926510.2683883-690-34223458701929/.source.yaml _original_basename=.txv3gnul follow=False checksum=06d744ebe702728c19f6d1a8f97158d086012058 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:21:51 np0005546420.localdomain sudo[130927]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31818 DF PROTO=TCP SPT=46586 DPT=9105 SEQ=1252309068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB6A2590000000001030307) 
Dec 05 09:21:51 np0005546420.localdomain sudo[131019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exmhmhdyqtjsyyvagavdhjvlfjijevpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926511.564576-735-25160460849443/AnsiballZ_slurp.py
Dec 05 09:21:51 np0005546420.localdomain sudo[131019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:52 np0005546420.localdomain python3.9[131021]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 05 09:21:52 np0005546420.localdomain sudo[131019]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:53 np0005546420.localdomain sudo[131124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yaaogzcituonbuejimgimmckxuykzbtx ; ANSIBLE_ASYNC_DIR='~/.ansible_async' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926512.602173-762-164077102903129/async_wrapper.py j656285747437 300 /home/zuul/.ansible/tmp/ansible-tmp-1764926512.602173-762-164077102903129/AnsiballZ_edpm_os_net_config.py _
Dec 05 09:21:53 np0005546420.localdomain sudo[131124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:53 np0005546420.localdomain ansible-async_wrapper.py[131126]: Invoked with j656285747437 300 /home/zuul/.ansible/tmp/ansible-tmp-1764926512.602173-762-164077102903129/AnsiballZ_edpm_os_net_config.py _
Dec 05 09:21:53 np0005546420.localdomain ansible-async_wrapper.py[131129]: Starting module and watcher
Dec 05 09:21:53 np0005546420.localdomain ansible-async_wrapper.py[131129]: Start watching 131130 (300)
Dec 05 09:21:53 np0005546420.localdomain ansible-async_wrapper.py[131130]: Start module (131130)
Dec 05 09:21:53 np0005546420.localdomain ansible-async_wrapper.py[131126]: Return async_wrapper task started.
Dec 05 09:21:53 np0005546420.localdomain sudo[131124]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:53 np0005546420.localdomain python3.9[131131]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False
Dec 05 09:21:54 np0005546420.localdomain ansible-async_wrapper.py[131130]: Module complete (131130)
Dec 05 09:21:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50898 DF PROTO=TCP SPT=51112 DPT=9100 SEQ=280587940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB6ADD90000000001030307) 
Dec 05 09:21:57 np0005546420.localdomain sudo[131221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itkdiblstgavbkyakkvyomjxbmxxdzqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926516.614812-762-68492473156060/AnsiballZ_async_status.py
Dec 05 09:21:57 np0005546420.localdomain sudo[131221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:57 np0005546420.localdomain python3.9[131223]: ansible-ansible.legacy.async_status Invoked with jid=j656285747437.131126 mode=status _async_dir=/root/.ansible_async
Dec 05 09:21:57 np0005546420.localdomain sudo[131221]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:57 np0005546420.localdomain sudo[131280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahssqecazgssmxxldydrabzivpwexbhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926516.614812-762-68492473156060/AnsiballZ_async_status.py
Dec 05 09:21:57 np0005546420.localdomain sudo[131280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:57 np0005546420.localdomain python3.9[131282]: ansible-ansible.legacy.async_status Invoked with jid=j656285747437.131126 mode=cleanup _async_dir=/root/.ansible_async
Dec 05 09:21:57 np0005546420.localdomain sudo[131280]: pam_unix(sudo:session): session closed for user root
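The network apply itself runs through Ansible's async machinery rather than as a plain task: async_wrapper detaches the module at 09:21:53, async_status polls with mode=status at 09:21:57, and a final mode=cleanup removes the job file. Detaching makes sense here because os-net-config can reconfigure the very interface the SSH session rides on. A sketch of the play side, with module parameters from the logged invocation (the osp.edpm FQCN and the poll interval are assumptions):

- name: Apply os-net-config, detached in case connectivity blips
  become: true
  osp.edpm.edpm_os_net_config:
    config_file: /etc/os-net-config/config.yaml
    cleanup: false
    debug: true
    detailed_exit_codes: true
    safe_defaults: false
    use_nmstate: false
  async: 300
  poll: 3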
Dec 05 09:21:58 np0005546420.localdomain sudo[131372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjcuyzqyprqebvwqnhkmgccxxdmanzvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926518.0125628-828-142109567306603/AnsiballZ_stat.py
Dec 05 09:21:58 np0005546420.localdomain sudo[131372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:58 np0005546420.localdomain python3.9[131374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:21:58 np0005546420.localdomain sudo[131372]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:58 np0005546420.localdomain ansible-async_wrapper.py[131129]: Done in kid B.
Dec 05 09:21:58 np0005546420.localdomain sudo[131445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oixmpvytryimzxvoagcfaligpoatkiyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926518.0125628-828-142109567306603/AnsiballZ_copy.py
Dec 05 09:21:58 np0005546420.localdomain sudo[131445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:59 np0005546420.localdomain python3.9[131447]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926518.0125628-828-142109567306603/.source.returncode _original_basename=.fm3do_i3 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:21:59 np0005546420.localdomain sudo[131445]: pam_unix(sudo:session): session closed for user root
Dec 05 09:21:59 np0005546420.localdomain sudo[131537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vvxcgdzmvyxuedrcufbliizkjsmvxupg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926519.29475-876-61286523559261/AnsiballZ_stat.py
Dec 05 09:21:59 np0005546420.localdomain sudo[131537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:21:59 np0005546420.localdomain python3.9[131539]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:21:59 np0005546420.localdomain sudo[131537]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:00 np0005546420.localdomain sudo[131610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-comeogduxhjvxdgaupqequmtdsccopew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926519.29475-876-61286523559261/AnsiballZ_copy.py
Dec 05 09:22:00 np0005546420.localdomain sudo[131610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:00 np0005546420.localdomain python3.9[131612]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926519.29475-876-61286523559261/.source.cfg _original_basename=.xwe2wox9 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:22:00 np0005546420.localdomain sudo[131610]: pam_unix(sudo:session): session closed for user root
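The file installed above, /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg, is the standard cloud-init switch that stops cloud-init from rendering network configuration once os-net-config owns the interfaces. The journal records only the copy, not the payload; the usual one-line content is assumed in this sketch:

    from pathlib import Path

    # Documented cloud-init knob for disabling its network rendering; the
    # payload is assumed, since the journal shows only the file copy.
    DISABLE_NET = "network: {config: disabled}\n"

    def disable_cloud_init_network(dropin_dir="/etc/cloud/cloud.cfg.d"):
        target = Path(dropin_dir) / "99-edpm-disable-network-config.cfg"
        target.write_text(DISABLE_NET)
        target.chmod(0o644)  # matches mode=0644 in the copy invocation above
        return target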
Dec 05 09:22:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21437 DF PROTO=TCP SPT=52660 DPT=9102 SEQ=353029562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB6C4190000000001030307) 
Dec 05 09:22:00 np0005546420.localdomain sudo[131702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ctjgrquszpraihbkrlxsvvbsgujexxtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926520.4882724-921-237489504190009/AnsiballZ_systemd.py
Dec 05 09:22:00 np0005546420.localdomain sudo[131702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:01 np0005546420.localdomain python3.9[131704]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:22:01 np0005546420.localdomain systemd[1]: Reloading Network Manager...
Dec 05 09:22:01 np0005546420.localdomain NetworkManager[5963]: <info>  [1764926521.1662] audit: op="reload" arg="0" pid=131708 uid=0 result="success"
Dec 05 09:22:01 np0005546420.localdomain NetworkManager[5963]: <info>  [1764926521.1674] config: signal: SIGHUP (no changes from disk)
Dec 05 09:22:01 np0005546420.localdomain systemd[1]: Reloaded Network Manager.
Dec 05 09:22:01 np0005546420.localdomain sudo[131702]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:01 np0005546420.localdomain sshd[127842]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:22:01 np0005546420.localdomain systemd-logind[762]: Session 40 logged out. Waiting for processes to exit.
Dec 05 09:22:01 np0005546420.localdomain systemd[1]: session-40.scope: Deactivated successfully.
Dec 05 09:22:01 np0005546420.localdomain systemd[1]: session-40.scope: Consumed 36.119s CPU time.
Dec 05 09:22:01 np0005546420.localdomain systemd-logind[762]: Removed session 40.
Dec 05 09:22:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32180 DF PROTO=TCP SPT=42442 DPT=9882 SEQ=2025296633 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB6CBD90000000001030307) 
Dec 05 09:22:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31820 DF PROTO=TCP SPT=46586 DPT=9105 SEQ=1252309068 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB6D1D90000000001030307) 
Dec 05 09:22:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:22:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 5400.1 total, 600.0 interval
                                                          Cumulative writes: 5715 writes, 25K keys, 5715 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5715 writes, 734 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
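The RocksDB counters are internally consistent: "writes per sync" is cumulative WAL writes divided by syncs. Checking this dump, and the second OSD's dump a few seconds later in this window:

    # Cumulative WAL counters from the two "DB Stats" dumps in this window.
    for writes, syncs in ((5715, 734), (4690, 584)):
        print(round(writes / syncs, 2))  # 7.79 then 8.03, as reported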
Dec 05 09:22:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65088 DF PROTO=TCP SPT=48190 DPT=9101 SEQ=2682704251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB6DDD90000000001030307) 
Dec 05 09:22:08 np0005546420.localdomain sshd[131723]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:22:08 np0005546420.localdomain sshd[131723]: Accepted publickey for zuul from 192.168.122.30 port 54492 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:22:08 np0005546420.localdomain systemd-logind[762]: New session 41 of user zuul.
Dec 05 09:22:08 np0005546420.localdomain systemd[1]: Started Session 41 of User zuul.
Dec 05 09:22:08 np0005546420.localdomain sshd[131723]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:22:09 np0005546420.localdomain sudo[131817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:22:09 np0005546420.localdomain sudo[131817]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:22:09 np0005546420.localdomain sudo[131817]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:09 np0005546420.localdomain sudo[131832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:22:09 np0005546420.localdomain sudo[131832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:22:09 np0005546420.localdomain python3.9[131816]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:22:09 np0005546420.localdomain sudo[131832]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:10 np0005546420.localdomain python3.9[131972]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:22:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14806 DF PROTO=TCP SPT=50834 DPT=9100 SEQ=29098868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB6EBD90000000001030307) 
Dec 05 09:22:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:22:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 5400.1 total, 600.0 interval
                                                          Cumulative writes: 4690 writes, 21K keys, 4690 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4690 writes, 584 syncs, 8.03 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 09:22:11 np0005546420.localdomain python3.9[132117]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:22:12 np0005546420.localdomain sshd[131723]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:22:12 np0005546420.localdomain systemd[1]: session-41.scope: Deactivated successfully.
Dec 05 09:22:12 np0005546420.localdomain systemd[1]: session-41.scope: Consumed 2.198s CPU time.
Dec 05 09:22:12 np0005546420.localdomain systemd-logind[762]: Session 41 logged out. Waiting for processes to exit.
Dec 05 09:22:12 np0005546420.localdomain systemd-logind[762]: Removed session 41.
Dec 05 09:22:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14807 DF PROTO=TCP SPT=50834 DPT=9100 SEQ=29098868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB6F3D90000000001030307) 
Dec 05 09:22:13 np0005546420.localdomain sudo[132133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:22:13 np0005546420.localdomain sudo[132133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:22:13 np0005546420.localdomain sudo[132133]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14808 DF PROTO=TCP SPT=50834 DPT=9100 SEQ=29098868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB703990000000001030307) 
Dec 05 09:22:16 np0005546420.localdomain sshd[132148]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:22:17 np0005546420.localdomain sshd[132148]: Accepted publickey for zuul from 192.168.122.30 port 54914 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:22:17 np0005546420.localdomain systemd-logind[762]: New session 42 of user zuul.
Dec 05 09:22:17 np0005546420.localdomain systemd[1]: Started Session 42 of User zuul.
Dec 05 09:22:17 np0005546420.localdomain sshd[132148]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:22:18 np0005546420.localdomain python3.9[132241]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:22:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2493 DF PROTO=TCP SPT=47696 DPT=9105 SEQ=2933489973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB70B910000000001030307) 
Dec 05 09:22:19 np0005546420.localdomain python3.9[132335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:22:19 np0005546420.localdomain sudo[132429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lskdtlhafcrmbelgfyqcxruozbdcdpfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926539.4750662-81-279906343829999/AnsiballZ_setup.py
Dec 05 09:22:19 np0005546420.localdomain sudo[132429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:20 np0005546420.localdomain python3.9[132431]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:22:20 np0005546420.localdomain sudo[132429]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:20 np0005546420.localdomain sudo[132483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eepdadylyiaersrsjzyaxjimcgpnmjxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926539.4750662-81-279906343829999/AnsiballZ_dnf.py
Dec 05 09:22:20 np0005546420.localdomain sudo[132483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:21 np0005546420.localdomain python3.9[132485]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:22:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2495 DF PROTO=TCP SPT=47696 DPT=9105 SEQ=2933489973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB717990000000001030307) 
Dec 05 09:22:24 np0005546420.localdomain sudo[132483]: pam_unix(sudo:session): session closed for user root
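The legacy.dnf task above (name=['podman'], state=present) is the package-install step; everything beyond the name/state pair in that parameter dump is module defaults. Its effect is roughly the following (a sketch shelling out to dnf, not the module's transaction API):

    import subprocess

    def dnf_present(*packages):
        """Ensure packages are installed, as the dnf task above does.

        `dnf install -y` is a no-op for already-installed packages, which
        approximates state=present idempotence.
        """
        subprocess.run(["dnf", "install", "-y", *packages], check=True)

    dnf_present("podman")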
Dec 05 09:22:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21439 DF PROTO=TCP SPT=52660 DPT=9102 SEQ=353029562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB723DA0000000001030307) 
Dec 05 09:22:25 np0005546420.localdomain sudo[132577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gotprphryslxvvfetuqlhrnwpjrrowlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926544.9431264-117-147511813740774/AnsiballZ_setup.py
Dec 05 09:22:25 np0005546420.localdomain sudo[132577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:25 np0005546420.localdomain python3.9[132579]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:22:25 np0005546420.localdomain sudo[132577]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:26 np0005546420.localdomain sudo[132724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kcobsdcifutzwperyvlbgcybflsneupm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926546.140818-150-236231501226236/AnsiballZ_file.py
Dec 05 09:22:26 np0005546420.localdomain sudo[132724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:26 np0005546420.localdomain python3.9[132726]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:22:26 np0005546420.localdomain sudo[132724]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:27 np0005546420.localdomain sudo[132816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhebxaaxbrmsazrcagckhnympkozwgkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926546.9528322-174-60968261130418/AnsiballZ_command.py
Dec 05 09:22:27 np0005546420.localdomain sudo[132816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:27 np0005546420.localdomain python3.9[132818]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:22:27 np0005546420.localdomain sudo[132816]: pam_unix(sudo:session): session closed for user root
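`podman network inspect podman`, run by the command task above, prints a JSON array describing the default network; the stat/file tasks that follow pin the rendered config at /etc/containers/networks/podman.json. Capturing and parsing that output looks roughly like this (assumes podman on PATH and an existing network of that name):

    import json
    import subprocess

    def inspect_network(name="podman"):
        """Run `podman network inspect <name>` and return the first entry."""
        proc = subprocess.run(
            ["podman", "network", "inspect", name],
            check=True, capture_output=True, text=True,
        )
        return json.loads(proc.stdout)[0]  # inspect emits a JSON array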
Dec 05 09:22:28 np0005546420.localdomain auditd[708]: Audit daemon rotating log files
Dec 05 09:22:28 np0005546420.localdomain sudo[132919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pntcfblfpedzxiqzysjidjwhasijzrey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926547.7790835-198-99839563365641/AnsiballZ_stat.py
Dec 05 09:22:28 np0005546420.localdomain sudo[132919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:28 np0005546420.localdomain python3.9[132921]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:22:28 np0005546420.localdomain sudo[132919]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:28 np0005546420.localdomain sudo[132967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxlhdvtbzithnvxlcyxnqelzwqljpktg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926547.7790835-198-99839563365641/AnsiballZ_file.py
Dec 05 09:22:28 np0005546420.localdomain sudo[132967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:28 np0005546420.localdomain python3.9[132969]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:22:28 np0005546420.localdomain sudo[132967]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:29 np0005546420.localdomain sudo[133059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ayohdiggrdxppcyvildymocnnhoqagdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926549.0027425-234-219504976369531/AnsiballZ_stat.py
Dec 05 09:22:29 np0005546420.localdomain sudo[133059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:29 np0005546420.localdomain python3.9[133061]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:22:29 np0005546420.localdomain sudo[133059]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:29 np0005546420.localdomain sudo[133107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nkaospuqvypxxztgrvgqgiahqfnrdqdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926549.0027425-234-219504976369531/AnsiballZ_file.py
Dec 05 09:22:29 np0005546420.localdomain sudo[133107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:30 np0005546420.localdomain python3.9[133109]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:22:30 np0005546420.localdomain sudo[133107]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9871 DF PROTO=TCP SPT=48384 DPT=9102 SEQ=3114646565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB739190000000001030307) 
Dec 05 09:22:30 np0005546420.localdomain sudo[133199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpopamdcbvpwryocjqtkanovvtznboxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926550.3387733-273-55870673354279/AnsiballZ_ini_file.py
Dec 05 09:22:30 np0005546420.localdomain sudo[133199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:30 np0005546420.localdomain python3.9[133201]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:22:31 np0005546420.localdomain sudo[133199]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:31 np0005546420.localdomain sudo[133291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-plinwmcdoywhcreohmkmghckoioebqqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926551.09116-273-70213460108511/AnsiballZ_ini_file.py
Dec 05 09:22:31 np0005546420.localdomain sudo[133291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:31 np0005546420.localdomain python3.9[133293]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:22:31 np0005546420.localdomain sudo[133291]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:31 np0005546420.localdomain sudo[133383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fauamzmuqytnpvqdqitvhowymkzawdme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926551.5975769-273-22972976099677/AnsiballZ_ini_file.py
Dec 05 09:22:31 np0005546420.localdomain sudo[133383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:31 np0005546420.localdomain python3.9[133385]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:22:31 np0005546420.localdomain sudo[133383]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:32 np0005546420.localdomain sudo[133475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxlqsmeedbsctlayhldcskmhjgquheoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926552.11152-273-144581385033246/AnsiballZ_ini_file.py
Dec 05 09:22:32 np0005546420.localdomain sudo[133475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:32 np0005546420.localdomain python3.9[133477]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:22:32 np0005546420.localdomain sudo[133475]: pam_unix(sudo:session): session closed for user root
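The four ini_file tasks above (09:22:30-09:22:32) leave /etc/containers/containers.conf with a pids limit under [containers] and journald events, the crun runtime, and the netavark backend selected under [engine]/[network]. The net result, written directly (values quoted exactly as they were passed to the module):

    from pathlib import Path

    # Sections, options, and values exactly as passed to ini_file above;
    # containers.conf string values are TOML-style quoted.
    CONTAINERS_CONF = (
        "[containers]\n"
        "pids_limit = 4096\n\n"
        "[engine]\n"
        'events_logger = "journald"\n'
        'runtime = "crun"\n\n'
        "[network]\n"
        'network_backend = "netavark"\n'
    )

    def write_containers_conf(path="/etc/containers/containers.conf"):
        p = Path(path)
        p.write_text(CONTAINERS_CONF)
        p.chmod(0o644)  # mode=0644, as in each ini_file call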
Dec 05 09:22:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37295 DF PROTO=TCP SPT=43568 DPT=9882 SEQ=3855113664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB741D90000000001030307) 
Dec 05 09:22:33 np0005546420.localdomain sudo[133567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jsnrziycfaujarjnhcgfnsobqrqnyedd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926553.2579143-366-125345974386623/AnsiballZ_dnf.py
Dec 05 09:22:33 np0005546420.localdomain sudo[133567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:33 np0005546420.localdomain python3.9[133569]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:22:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2497 DF PROTO=TCP SPT=47696 DPT=9105 SEQ=2933489973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB747D90000000001030307) 
Dec 05 09:22:36 np0005546420.localdomain sudo[133567]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19839 DF PROTO=TCP SPT=41862 DPT=9101 SEQ=1460529122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB751D90000000001030307) 
Dec 05 09:22:37 np0005546420.localdomain sudo[133661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-womlmanqwzjzswsiouiaygxevoyqiglz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926557.263944-399-137863895186967/AnsiballZ_setup.py
Dec 05 09:22:37 np0005546420.localdomain sudo[133661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:37 np0005546420.localdomain python3.9[133663]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:22:37 np0005546420.localdomain sudo[133661]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:38 np0005546420.localdomain sudo[133755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dflgoerhgirqngydyabaitnzqgbvcqpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926558.0037777-423-199050678614982/AnsiballZ_stat.py
Dec 05 09:22:38 np0005546420.localdomain sudo[133755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:38 np0005546420.localdomain python3.9[133757]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:22:38 np0005546420.localdomain sudo[133755]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:38 np0005546420.localdomain sudo[133847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxdmxgsbggurcfuzwfwgaivdqiqmxjks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926558.6850667-450-149973327091075/AnsiballZ_stat.py
Dec 05 09:22:38 np0005546420.localdomain sudo[133847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:39 np0005546420.localdomain python3.9[133849]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:22:39 np0005546420.localdomain sudo[133847]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:39 np0005546420.localdomain sudo[133939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfekloivfnyuxskmkzayvneirlwqlfmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926559.4688325-480-40173937231122/AnsiballZ_command.py
Dec 05 09:22:39 np0005546420.localdomain sudo[133939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:39 np0005546420.localdomain python3.9[133941]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:22:39 np0005546420.localdomain sudo[133939]: pam_unix(sudo:session): session closed for user root
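`systemctl is-system-running`, run by the command task above, prints a single overall state word (running, degraded, starting, and so on) and exits 0 only when the state is running, so callers that tolerate a degraded system must read stdout rather than the exit code. A sketch:

    import subprocess

    def system_state():
        """Return systemd's overall state word, e.g. 'running' or 'degraded'."""
        proc = subprocess.run(
            ["systemctl", "is-system-running"],
            capture_output=True, text=True,  # no check=True: only 'running' exits 0
        )
        return proc.stdout.strip()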
Dec 05 09:22:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30946 DF PROTO=TCP SPT=42748 DPT=9100 SEQ=1325973934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB761190000000001030307) 
Dec 05 09:22:41 np0005546420.localdomain sudo[134032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-srsmblkrmqmlzzehtmjayffdftprgrpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926561.1911259-510-5244354577853/AnsiballZ_service_facts.py
Dec 05 09:22:41 np0005546420.localdomain sudo[134032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:41 np0005546420.localdomain python3.9[134034]: ansible-service_facts Invoked
Dec 05 09:22:41 np0005546420.localdomain network[134051]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:22:41 np0005546420.localdomain network[134052]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:22:41 np0005546420.localdomain network[134053]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:22:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30947 DF PROTO=TCP SPT=42748 DPT=9100 SEQ=1325973934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB7691A0000000001030307) 
Dec 05 09:22:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:22:46 np0005546420.localdomain sudo[134032]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30948 DF PROTO=TCP SPT=42748 DPT=9100 SEQ=1325973934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB778DB0000000001030307) 
Dec 05 09:22:47 np0005546420.localdomain sudo[134266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vkwhhxbtxorvsashndfxugiqvxteucgz ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764926567.2677295-555-140172362264074/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764926567.2677295-555-140172362264074/args
Dec 05 09:22:47 np0005546420.localdomain sudo[134266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:47 np0005546420.localdomain sudo[134266]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:48 np0005546420.localdomain sudo[134373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcntjlvznrhyzasifjmllckvfbmmrewy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926567.950124-588-55032435292757/AnsiballZ_dnf.py
Dec 05 09:22:48 np0005546420.localdomain sudo[134373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:48 np0005546420.localdomain python3.9[134375]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:22:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15049 DF PROTO=TCP SPT=38350 DPT=9105 SEQ=188678886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB780C10000000001030307) 
Dec 05 09:22:51 np0005546420.localdomain sudo[134373]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15051 DF PROTO=TCP SPT=38350 DPT=9105 SEQ=188678886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB78CDA0000000001030307) 
Dec 05 09:22:53 np0005546420.localdomain sudo[134467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-demwdjwdnuywnfpjtiscsiklngjdjrwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926573.0641825-627-238053709811520/AnsiballZ_package_facts.py
Dec 05 09:22:53 np0005546420.localdomain sudo[134467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:53 np0005546420.localdomain python3.9[134469]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 05 09:22:54 np0005546420.localdomain sudo[134467]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9873 DF PROTO=TCP SPT=48384 DPT=9102 SEQ=3114646565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB799D90000000001030307) 
Dec 05 09:22:55 np0005546420.localdomain sudo[134559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpamaumasztmlycgvxntknwybzoguxne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926575.1811776-658-101828756779455/AnsiballZ_stat.py
Dec 05 09:22:55 np0005546420.localdomain sudo[134559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:55 np0005546420.localdomain python3.9[134561]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:22:55 np0005546420.localdomain sudo[134559]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:56 np0005546420.localdomain sudo[134634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cjnypbcspzidzityspsedcixivhflhxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926575.1811776-658-101828756779455/AnsiballZ_copy.py
Dec 05 09:22:56 np0005546420.localdomain sudo[134634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:56 np0005546420.localdomain python3.9[134636]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926575.1811776-658-101828756779455/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:22:56 np0005546420.localdomain sudo[134634]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:56 np0005546420.localdomain sudo[134728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-reepzumkfiprjxtlosdpwnoddzwhlxbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926576.628823-703-104789326260356/AnsiballZ_stat.py
Dec 05 09:22:56 np0005546420.localdomain sudo[134728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:57 np0005546420.localdomain python3.9[134730]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:22:57 np0005546420.localdomain sudo[134728]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:57 np0005546420.localdomain sudo[134803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-alexopfmwrwndldoixbytaaupxtclmgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926576.628823-703-104789326260356/AnsiballZ_copy.py
Dec 05 09:22:57 np0005546420.localdomain sudo[134803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:57 np0005546420.localdomain python3.9[134805]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926576.628823-703-104789326260356/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:22:57 np0005546420.localdomain sudo[134803]: pam_unix(sudo:session): session closed for user root
Dec 05 09:22:59 np0005546420.localdomain sudo[134897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfvimojgvujfkcgulibzipahpgtbywap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926578.6210814-766-280073364056529/AnsiballZ_lineinfile.py
Dec 05 09:22:59 np0005546420.localdomain sudo[134897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:22:59 np0005546420.localdomain python3.9[134899]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:22:59 np0005546420.localdomain sudo[134897]: pam_unix(sudo:session): session closed for user root
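The lineinfile task above pins PEERNTP=no in /etc/sysconfig/network so DHCP-provided NTP servers cannot override the chrony configuration just installed. Its replace-or-append behavior is roughly the following (a sketch of the semantics, not the module itself; with firstmatch=False, lineinfile replaces the last regexp match):

    import re
    from pathlib import Path

    def ensure_line(path, regexp, line):
        """Replace the last line matching regexp with `line`, else append.

        Mirrors the invocation above: regexp=^PEERNTP=, line=PEERNTP=no,
        create=True. Returns True when the file changed.
        """
        p = Path(path)
        lines = p.read_text().splitlines() if p.exists() else []
        pat = re.compile(regexp)
        hits = [i for i, text in enumerate(lines) if pat.search(text)]
        if hits:
            if lines[hits[-1]] == line:
                return False
            lines[hits[-1]] = line
        else:
            lines.append(line)
        p.write_text("\n".join(lines) + "\n")
        return True

    ensure_line("/etc/sysconfig/network", r"^PEERNTP=", "PEERNTP=no")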
Dec 05 09:23:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44697 DF PROTO=TCP SPT=54528 DPT=9102 SEQ=2650806933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB7AE590000000001030307) 
Dec 05 09:23:00 np0005546420.localdomain sudo[134991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fgbljefumjycytijvngbkjliyfrstqet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926580.3301148-811-188646554547521/AnsiballZ_setup.py
Dec 05 09:23:00 np0005546420.localdomain sudo[134991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:00 np0005546420.localdomain python3.9[134993]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:23:01 np0005546420.localdomain sudo[134991]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:01 np0005546420.localdomain sudo[135045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bnxkjeoaxkjrahjedezssoarbkueqgmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926580.3301148-811-188646554547521/AnsiballZ_systemd.py
Dec 05 09:23:01 np0005546420.localdomain sudo[135045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:02 np0005546420.localdomain python3.9[135047]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:23:02 np0005546420.localdomain sudo[135045]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32466 DF PROTO=TCP SPT=60902 DPT=9882 SEQ=1269929284 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB7B5DA0000000001030307) 
Dec 05 09:23:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15053 DF PROTO=TCP SPT=38350 DPT=9105 SEQ=188678886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB7BDD90000000001030307) 
Dec 05 09:23:04 np0005546420.localdomain sudo[135139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbvgdmsakrazwlppedswtkcbedfxkeci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926583.8153648-859-11246082011050/AnsiballZ_setup.py
Dec 05 09:23:04 np0005546420.localdomain sudo[135139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:04 np0005546420.localdomain python3.9[135141]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:23:05 np0005546420.localdomain sudo[135139]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:05 np0005546420.localdomain sudo[135193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nueahbgxeevamvuykjvdmvobnipucxqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926583.8153648-859-11246082011050/AnsiballZ_systemd.py
Dec 05 09:23:05 np0005546420.localdomain sudo[135193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:05 np0005546420.localdomain python3.9[135195]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:23:05 np0005546420.localdomain chronyd[26140]: chronyd exiting
Dec 05 09:23:05 np0005546420.localdomain systemd[1]: Stopping NTP client/server...
Dec 05 09:23:05 np0005546420.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Dec 05 09:23:05 np0005546420.localdomain systemd[1]: Stopped NTP client/server.
Dec 05 09:23:05 np0005546420.localdomain systemd[1]: Starting NTP client/server...
Dec 05 09:23:05 np0005546420.localdomain chronyd[135203]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 05 09:23:05 np0005546420.localdomain chronyd[135203]: Frequency -30.635 +/- 0.192 ppm read from /var/lib/chrony/drift
Dec 05 09:23:05 np0005546420.localdomain chronyd[135203]: Loaded seccomp filter (level 2)
Dec 05 09:23:05 np0005546420.localdomain systemd[1]: Started NTP client/server.
Dec 05 09:23:05 np0005546420.localdomain sudo[135193]: pam_unix(sudo:session): session closed for user root
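At restart chronyd reports the frequency it recovered from /var/lib/chrony/drift (-30.635 ppm with a 0.192 ppm error bound). The drift file itself is just those two whitespace-separated numbers; reading it back (format assumed from chrony's documented drift-file layout):

    from pathlib import Path

    def read_chrony_drift(path="/var/lib/chrony/drift"):
        """Return (frequency_ppm, error_ppm), the two floats chrony stores.

        These are the values behind the "Frequency -30.635 +/- 0.192 ppm"
        startup line above.
        """
        freq, err = Path(path).read_text().split()[:2]
        return float(freq), float(err)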
Dec 05 09:23:06 np0005546420.localdomain sshd[132148]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:23:06 np0005546420.localdomain systemd-logind[762]: Session 42 logged out. Waiting for processes to exit.
Dec 05 09:23:06 np0005546420.localdomain systemd[1]: session-42.scope: Deactivated successfully.
Dec 05 09:23:06 np0005546420.localdomain systemd[1]: session-42.scope: Consumed 28.818s CPU time.
Dec 05 09:23:06 np0005546420.localdomain systemd-logind[762]: Removed session 42.
Dec 05 09:23:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17050 DF PROTO=TCP SPT=33110 DPT=9101 SEQ=2470225750 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB7C7DA0000000001030307) 
Dec 05 09:23:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64958 DF PROTO=TCP SPT=41084 DPT=9100 SEQ=1943152256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB7D6190000000001030307) 
Dec 05 09:23:11 np0005546420.localdomain sshd[135219]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:23:11 np0005546420.localdomain sshd[135219]: Accepted publickey for zuul from 192.168.122.30 port 44248 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:23:11 np0005546420.localdomain systemd-logind[762]: New session 43 of user zuul.
Dec 05 09:23:11 np0005546420.localdomain systemd[1]: Started Session 43 of User zuul.
Dec 05 09:23:11 np0005546420.localdomain sshd[135219]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:23:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64959 DF PROTO=TCP SPT=41084 DPT=9100 SEQ=1943152256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB7DE1A0000000001030307) 
Dec 05 09:23:13 np0005546420.localdomain python3.9[135312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:23:13 np0005546420.localdomain sudo[135317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:23:13 np0005546420.localdomain sudo[135317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:23:13 np0005546420.localdomain sudo[135317]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:13 np0005546420.localdomain sudo[135332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:23:13 np0005546420.localdomain sudo[135332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:23:14 np0005546420.localdomain sudo[135332]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:14 np0005546420.localdomain sudo[135424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:23:14 np0005546420.localdomain sudo[135424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:23:14 np0005546420.localdomain sudo[135424]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:14 np0005546420.localdomain sudo[135439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- inventory --format=json-pretty --filter-for-batch
Dec 05 09:23:14 np0005546420.localdomain sudo[135439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:23:14 np0005546420.localdomain sudo[135497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jicnhiopzphqnqrrhgsfrgrcqnowkncs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926593.5610476-60-84680282022448/AnsiballZ_file.py
Dec 05 09:23:14 np0005546420.localdomain sudo[135497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:14 np0005546420.localdomain python3.9[135499]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:14 np0005546420.localdomain sudo[135497]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:14 np0005546420.localdomain podman[135552]: 
Dec 05 09:23:14 np0005546420.localdomain podman[135552]: 2025-12-05 09:23:14.827510252 +0000 UTC m=+0.057919526 container create 32772c9a9b0db61b551b628faec90c7f0f08f603fdfc55744d1987118b52c284 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goodall, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:23:14 np0005546420.localdomain systemd[1]: Started libpod-conmon-32772c9a9b0db61b551b628faec90c7f0f08f603fdfc55744d1987118b52c284.scope.
Dec 05 09:23:14 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:23:14 np0005546420.localdomain podman[135552]: 2025-12-05 09:23:14.797056398 +0000 UTC m=+0.027465722 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:23:14 np0005546420.localdomain podman[135552]: 2025-12-05 09:23:14.899441711 +0000 UTC m=+0.129850965 container init 32772c9a9b0db61b551b628faec90c7f0f08f603fdfc55744d1987118b52c284 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goodall, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, release=1763362218, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Dec 05 09:23:14 np0005546420.localdomain podman[135552]: 2025-12-05 09:23:14.910073911 +0000 UTC m=+0.140483155 container start 32772c9a9b0db61b551b628faec90c7f0f08f603fdfc55744d1987118b52c284 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goodall, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-type=git, release=1763362218, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:23:14 np0005546420.localdomain podman[135552]: 2025-12-05 09:23:14.910261707 +0000 UTC m=+0.140671021 container attach 32772c9a9b0db61b551b628faec90c7f0f08f603fdfc55744d1987118b52c284 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goodall, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, release=1763362218, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:23:14 np0005546420.localdomain thirsty_goodall[135570]: 167 167
Dec 05 09:23:14 np0005546420.localdomain systemd[1]: libpod-32772c9a9b0db61b551b628faec90c7f0f08f603fdfc55744d1987118b52c284.scope: Deactivated successfully.
Dec 05 09:23:14 np0005546420.localdomain podman[135552]: 2025-12-05 09:23:14.916593083 +0000 UTC m=+0.147002357 container died 32772c9a9b0db61b551b628faec90c7f0f08f603fdfc55744d1987118b52c284 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goodall, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z)
Dec 05 09:23:14 np0005546420.localdomain podman[135580]: 2025-12-05 09:23:14.987943133 +0000 UTC m=+0.066808920 container remove 32772c9a9b0db61b551b628faec90c7f0f08f603fdfc55744d1987118b52c284 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_goodall, io.buildah.version=1.41.4, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:23:14 np0005546420.localdomain systemd[1]: libpod-conmon-32772c9a9b0db61b551b628faec90c7f0f08f603fdfc55744d1987118b52c284.scope: Deactivated successfully.
Dec 05 09:23:15 np0005546420.localdomain podman[135639]: 
Dec 05 09:23:15 np0005546420.localdomain podman[135639]: 2025-12-05 09:23:15.185462015 +0000 UTC m=+0.076159691 container create 6d2b113d6bce646c4429385b0c95299a611ec403aa9f2242ef8a7379ceb34f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, name=rhceph, distribution-scope=public, architecture=x86_64, release=1763362218, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=)
Dec 05 09:23:15 np0005546420.localdomain systemd[1]: Started libpod-conmon-6d2b113d6bce646c4429385b0c95299a611ec403aa9f2242ef8a7379ceb34f12.scope.
Dec 05 09:23:15 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:23:15 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e088fb2b3051bd7780d444cb45fdf996e0da10bae1b01368fefc76b7dbc13457/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:23:15 np0005546420.localdomain podman[135639]: 2025-12-05 09:23:15.155345591 +0000 UTC m=+0.046043337 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:23:15 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e088fb2b3051bd7780d444cb45fdf996e0da10bae1b01368fefc76b7dbc13457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:23:15 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e088fb2b3051bd7780d444cb45fdf996e0da10bae1b01368fefc76b7dbc13457/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:23:15 np0005546420.localdomain podman[135639]: 2025-12-05 09:23:15.259507159 +0000 UTC m=+0.150204835 container init 6d2b113d6bce646c4429385b0c95299a611ec403aa9f2242ef8a7379ceb34f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1763362218, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph)
Dec 05 09:23:15 np0005546420.localdomain podman[135639]: 2025-12-05 09:23:15.272182112 +0000 UTC m=+0.162879798 container start 6d2b113d6bce646c4429385b0c95299a611ec403aa9f2242ef8a7379ceb34f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 09:23:15 np0005546420.localdomain podman[135639]: 2025-12-05 09:23:15.272450351 +0000 UTC m=+0.163148187 container attach 6d2b113d6bce646c4429385b0c95299a611ec403aa9f2242ef8a7379ceb34f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 05 09:23:15 np0005546420.localdomain sudo[135702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svbxtdxjyksobrwjflmknsdceqpcwgka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926594.937965-84-245531128516729/AnsiballZ_stat.py
Dec 05 09:23:15 np0005546420.localdomain sudo[135702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:15 np0005546420.localdomain python3.9[135704]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:15 np0005546420.localdomain sudo[135702]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:15 np0005546420.localdomain sudo[135768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtpvjmajccbytoommzzlygxdxtzrdlzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926594.937965-84-245531128516729/AnsiballZ_file.py
Dec 05 09:23:15 np0005546420.localdomain sudo[135768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0f61c547f63776264d96b9c12bc9b08b4dfc2409c3365aab2fda406142981459-merged.mount: Deactivated successfully.
Dec 05 09:23:16 np0005546420.localdomain python3.9[135834]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.5qqx_s8q recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:16 np0005546420.localdomain sudo[135768]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]: [
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:     {
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:         "available": false,
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:         "ceph_device": false,
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:         "lsm_data": {},
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:         "lvs": [],
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:         "path": "/dev/sr0",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:         "rejected_reasons": [
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "Has a FileSystem",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "Insufficient space (<5GB)"
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:         ],
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:         "sys_api": {
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "actuators": null,
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "device_nodes": "sr0",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "human_readable_size": "482.00 KB",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "id_bus": "ata",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "model": "QEMU DVD-ROM",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "nr_requests": "2",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "partitions": {},
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "path": "/dev/sr0",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "removable": "1",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "rev": "2.5+",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "ro": "0",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "rotational": "1",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "sas_address": "",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "sas_device_handle": "",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "scheduler_mode": "mq-deadline",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "sectors": 0,
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "sectorsize": "2048",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "size": 493568.0,
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "support_discard": "0",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "type": "disk",
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:             "vendor": "QEMU"
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:         }
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]:     }
Dec 05 09:23:16 np0005546420.localdomain pedantic_wu[135667]: ]
Dec 05 09:23:16 np0005546420.localdomain systemd[1]: libpod-6d2b113d6bce646c4429385b0c95299a611ec403aa9f2242ef8a7379ceb34f12.scope: Deactivated successfully.
Dec 05 09:23:16 np0005546420.localdomain podman[135639]: 2025-12-05 09:23:16.149619775 +0000 UTC m=+1.040317451 container died 6d2b113d6bce646c4429385b0c95299a611ec403aa9f2242ef8a7379ceb34f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, version=7, ceph=True, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, vcs-type=git)
Dec 05 09:23:16 np0005546420.localdomain systemd[1]: tmp-crun.vl5zgU.mount: Deactivated successfully.
Dec 05 09:23:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e088fb2b3051bd7780d444cb45fdf996e0da10bae1b01368fefc76b7dbc13457-merged.mount: Deactivated successfully.
Dec 05 09:23:16 np0005546420.localdomain podman[137123]: 2025-12-05 09:23:16.25530828 +0000 UTC m=+0.090049431 container remove 6d2b113d6bce646c4429385b0c95299a611ec403aa9f2242ef8a7379ceb34f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_wu, distribution-scope=public, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, RELEASE=main, io.buildah.version=1.41.4, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:23:16 np0005546420.localdomain systemd[1]: libpod-conmon-6d2b113d6bce646c4429385b0c95299a611ec403aa9f2242ef8a7379ceb34f12.scope: Deactivated successfully.
Dec 05 09:23:16 np0005546420.localdomain sudo[135439]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64960 DF PROTO=TCP SPT=41084 DPT=9100 SEQ=1943152256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB7EDDA0000000001030307) 
Dec 05 09:23:16 np0005546420.localdomain sudo[137212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkbidebibvpkegjpqeetujqeqfzeiapq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926596.5836692-144-81134793459691/AnsiballZ_stat.py
Dec 05 09:23:16 np0005546420.localdomain sudo[137212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:17 np0005546420.localdomain python3.9[137214]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:17 np0005546420.localdomain sudo[137212]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:17 np0005546420.localdomain sudo[137257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:23:17 np0005546420.localdomain sudo[137257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:23:17 np0005546420.localdomain sudo[137257]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:17 np0005546420.localdomain sudo[137302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fclauldrmyhcrqpsurnhanlmynntpzln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926596.5836692-144-81134793459691/AnsiballZ_copy.py
Dec 05 09:23:17 np0005546420.localdomain sudo[137302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:17 np0005546420.localdomain python3.9[137304]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926596.5836692-144-81134793459691/.source _original_basename=.xk3k00yy follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:17 np0005546420.localdomain sudo[137302]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:18 np0005546420.localdomain sudo[137394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozofvhykfxtgyrdvrwykpkmamikrjlpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926598.0452623-192-233850422505218/AnsiballZ_file.py
Dec 05 09:23:18 np0005546420.localdomain sudo[137394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:18 np0005546420.localdomain python3.9[137396]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:23:18 np0005546420.localdomain sudo[137394]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32467 DF PROTO=TCP SPT=60902 DPT=9882 SEQ=1269929284 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB7F5D90000000001030307) 
Dec 05 09:23:18 np0005546420.localdomain sudo[137486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bzdufzyivnlwjrxgoohyhdmtgcxjoezf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926598.6935554-216-252327429337436/AnsiballZ_stat.py
Dec 05 09:23:18 np0005546420.localdomain sudo[137486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:19 np0005546420.localdomain python3.9[137488]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:19 np0005546420.localdomain sudo[137486]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:19 np0005546420.localdomain sudo[137559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-karoaqohvlavopbpxumvmdmucblohhlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926598.6935554-216-252327429337436/AnsiballZ_copy.py
Dec 05 09:23:19 np0005546420.localdomain sudo[137559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:19 np0005546420.localdomain python3.9[137561]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926598.6935554-216-252327429337436/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:23:19 np0005546420.localdomain sudo[137559]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:20 np0005546420.localdomain sudo[137651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qjokhawwgbpibgdccxcslnfohqyvzyfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926599.827785-216-211089439989203/AnsiballZ_stat.py
Dec 05 09:23:20 np0005546420.localdomain sudo[137651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:20 np0005546420.localdomain python3.9[137653]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:20 np0005546420.localdomain sudo[137651]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:20 np0005546420.localdomain sudo[137724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbizebjeznoxowdvaenzsnuplnzwvvkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926599.827785-216-211089439989203/AnsiballZ_copy.py
Dec 05 09:23:20 np0005546420.localdomain sudo[137724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:20 np0005546420.localdomain python3.9[137726]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926599.827785-216-211089439989203/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:23:20 np0005546420.localdomain sudo[137724]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:21 np0005546420.localdomain sudo[137816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqubitvldyanijfozpyykuznijuoulzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926601.019944-303-176456302309644/AnsiballZ_file.py
Dec 05 09:23:21 np0005546420.localdomain sudo[137816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:21 np0005546420.localdomain python3.9[137818]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:21 np0005546420.localdomain sudo[137816]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:21 np0005546420.localdomain sudo[137908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kzlsktychzfaigwbbtdvqdubgulsxtge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926601.6180797-327-146567590385564/AnsiballZ_stat.py
Dec 05 09:23:21 np0005546420.localdomain sudo[137908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57693 DF PROTO=TCP SPT=47430 DPT=9105 SEQ=3900433487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB801D90000000001030307) 
Dec 05 09:23:22 np0005546420.localdomain python3.9[137910]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:22 np0005546420.localdomain sudo[137908]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:22 np0005546420.localdomain sudo[137981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nrnjmjihtatasmzfyzofpkydeyirvbrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926601.6180797-327-146567590385564/AnsiballZ_copy.py
Dec 05 09:23:22 np0005546420.localdomain sudo[137981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:22 np0005546420.localdomain python3.9[137983]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926601.6180797-327-146567590385564/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:22 np0005546420.localdomain sudo[137981]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:22 np0005546420.localdomain sudo[138073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zaimanbnbvbtdlisxbvavgixpcnzodlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926602.7519875-372-250011833587156/AnsiballZ_stat.py
Dec 05 09:23:22 np0005546420.localdomain sudo[138073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:23 np0005546420.localdomain python3.9[138075]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:23 np0005546420.localdomain sudo[138073]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:23 np0005546420.localdomain sudo[138146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xpaawlkmkomxytrmpfacsfnfhasbaclz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926602.7519875-372-250011833587156/AnsiballZ_copy.py
Dec 05 09:23:23 np0005546420.localdomain sudo[138146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:23 np0005546420.localdomain python3.9[138148]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926602.7519875-372-250011833587156/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:23 np0005546420.localdomain sudo[138146]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:24 np0005546420.localdomain sudo[138238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlckksatrxfpqlnkwvlixhdgdhakjrgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926603.886818-417-94078998901690/AnsiballZ_systemd.py
Dec 05 09:23:24 np0005546420.localdomain sudo[138238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64961 DF PROTO=TCP SPT=41084 DPT=9100 SEQ=1943152256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB80DD90000000001030307) 
Dec 05 09:23:25 np0005546420.localdomain python3.9[138240]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:23:25 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:23:25 np0005546420.localdomain systemd-rc-local-generator[138264]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:23:25 np0005546420.localdomain systemd-sysv-generator[138270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:23:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:23:25 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:23:25 np0005546420.localdomain systemd-sysv-generator[138306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:23:25 np0005546420.localdomain systemd-rc-local-generator[138301]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:23:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:23:25 np0005546420.localdomain systemd[1]: Starting EDPM Container Shutdown...
Dec 05 09:23:25 np0005546420.localdomain systemd[1]: Finished EDPM Container Shutdown.
Dec 05 09:23:25 np0005546420.localdomain sudo[138238]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:28 np0005546420.localdomain sudo[138407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdgqhgqigqjgsncivkmfvdpgtqepuylx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926607.8801394-441-176127653259583/AnsiballZ_stat.py
Dec 05 09:23:28 np0005546420.localdomain sudo[138407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:28 np0005546420.localdomain python3.9[138409]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:28 np0005546420.localdomain sudo[138407]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:28 np0005546420.localdomain sudo[138480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-knshpdiehpfwdguzpdthubjmtwljawzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926607.8801394-441-176127653259583/AnsiballZ_copy.py
Dec 05 09:23:28 np0005546420.localdomain sudo[138480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:28 np0005546420.localdomain python3.9[138482]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926607.8801394-441-176127653259583/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:28 np0005546420.localdomain sudo[138480]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:29 np0005546420.localdomain sudo[138572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfbuwoohwogbobsoshsbkdxrkqdvgteu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926609.099113-486-185131104118898/AnsiballZ_stat.py
Dec 05 09:23:29 np0005546420.localdomain sudo[138572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:30 np0005546420.localdomain python3.9[138574]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:30 np0005546420.localdomain sudo[138572]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44896 DF PROTO=TCP SPT=46636 DPT=9102 SEQ=4200089253 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8239A0000000001030307) 
Dec 05 09:23:30 np0005546420.localdomain sudo[138645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sgoebvmzvhqvxgjhcnubajsumffytjtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926609.099113-486-185131104118898/AnsiballZ_copy.py
Dec 05 09:23:30 np0005546420.localdomain sudo[138645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:30 np0005546420.localdomain python3.9[138647]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926609.099113-486-185131104118898/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:30 np0005546420.localdomain sudo[138645]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:31 np0005546420.localdomain sudo[138737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmpgrhbetthfteiyxnljbxdoxkfsjvei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926611.5365875-531-105621256544071/AnsiballZ_systemd.py
Dec 05 09:23:31 np0005546420.localdomain sudo[138737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:32 np0005546420.localdomain python3.9[138739]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:23:32 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:23:32 np0005546420.localdomain systemd-rc-local-generator[138768]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:23:32 np0005546420.localdomain systemd-sysv-generator[138771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:23:32 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:23:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9975 DF PROTO=TCP SPT=39980 DPT=9882 SEQ=2013655122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB82BDA0000000001030307) 
Dec 05 09:23:33 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 09:23:33 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:23:33 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:23:33 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 09:23:33 np0005546420.localdomain sudo[138737]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:34 np0005546420.localdomain python3.9[138872]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:23:34 np0005546420.localdomain network[138889]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:23:34 np0005546420.localdomain network[138890]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:23:34 np0005546420.localdomain network[138891]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:23:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57695 DF PROTO=TCP SPT=47430 DPT=9105 SEQ=3900433487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB832920000000001030307) 
Dec 05 09:23:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14276 DF PROTO=TCP SPT=43642 DPT=9101 SEQ=2474780175 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB83BD90000000001030307) 
Dec 05 09:23:38 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:23:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58665 DF PROTO=TCP SPT=37806 DPT=9100 SEQ=3922794081 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB84B5A0000000001030307) 
Dec 05 09:23:41 np0005546420.localdomain sudo[139091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whozqlflxfwpshhlkdbswnidvowvkxgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926620.9254405-609-200952377645200/AnsiballZ_stat.py
Dec 05 09:23:41 np0005546420.localdomain sudo[139091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:41 np0005546420.localdomain python3.9[139093]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:41 np0005546420.localdomain sudo[139091]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:41 np0005546420.localdomain sudo[139166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cavaewnjcyuuyqedssaqhbootceqnrkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926620.9254405-609-200952377645200/AnsiballZ_copy.py
Dec 05 09:23:41 np0005546420.localdomain sudo[139166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:41 np0005546420.localdomain python3.9[139168]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926620.9254405-609-200952377645200/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:41 np0005546420.localdomain sudo[139166]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:42 np0005546420.localdomain sudo[139259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avlsruemytkvaykqwcdivppymstfggnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926622.1409001-654-97474222438182/AnsiballZ_systemd.py
Dec 05 09:23:42 np0005546420.localdomain sudo[139259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:42 np0005546420.localdomain python3.9[139261]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:23:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58666 DF PROTO=TCP SPT=37806 DPT=9100 SEQ=3922794081 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8535A0000000001030307) 
Dec 05 09:23:42 np0005546420.localdomain systemd[1]: Reloading OpenSSH server daemon...
Dec 05 09:23:42 np0005546420.localdomain sshd[119494]: Received SIGHUP; restarting.
Dec 05 09:23:42 np0005546420.localdomain systemd[1]: Reloaded OpenSSH server daemon.
Dec 05 09:23:42 np0005546420.localdomain sshd[119494]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:23:42 np0005546420.localdomain sshd[119494]: Server listening on 0.0.0.0 port 22.
Dec 05 09:23:42 np0005546420.localdomain sshd[119494]: Server listening on :: port 22.
Dec 05 09:23:42 np0005546420.localdomain sudo[139259]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:43 np0005546420.localdomain sudo[139355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asgekcdswcrewsbulgpshiqvaeddlaym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926623.0061715-678-54239139740446/AnsiballZ_file.py
Dec 05 09:23:43 np0005546420.localdomain sudo[139355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:43 np0005546420.localdomain python3.9[139357]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:43 np0005546420.localdomain sudo[139355]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:44 np0005546420.localdomain sudo[139447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myqswbpcmvujjegufixqnxylocphiztb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926623.7120707-702-104208728583929/AnsiballZ_stat.py
Dec 05 09:23:44 np0005546420.localdomain sudo[139447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:44 np0005546420.localdomain python3.9[139449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:44 np0005546420.localdomain sudo[139447]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:44 np0005546420.localdomain sudo[139520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bysryqiwpshphmehqbvpvhpefuwspnos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926623.7120707-702-104208728583929/AnsiballZ_copy.py
Dec 05 09:23:44 np0005546420.localdomain sudo[139520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:44 np0005546420.localdomain python3.9[139522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926623.7120707-702-104208728583929/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:44 np0005546420.localdomain sudo[139520]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:45 np0005546420.localdomain sudo[139612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oxvlracovtmmvooskrcnycjggxvueboq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926625.2074826-756-114911303393165/AnsiballZ_timezone.py
Dec 05 09:23:45 np0005546420.localdomain sudo[139612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:45 np0005546420.localdomain python3.9[139614]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 05 09:23:45 np0005546420.localdomain systemd[1]: Starting Time & Date Service...
Dec 05 09:23:45 np0005546420.localdomain systemd[1]: Started Time & Date Service.
Dec 05 09:23:45 np0005546420.localdomain sudo[139612]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:46 np0005546420.localdomain sudo[139708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mhnruzjwhunrrvdpbigyrjlxpkxwewgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926626.548029-783-59566604957360/AnsiballZ_file.py
Dec 05 09:23:46 np0005546420.localdomain sudo[139708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58667 DF PROTO=TCP SPT=37806 DPT=9100 SEQ=3922794081 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8631A0000000001030307) 
Dec 05 09:23:46 np0005546420.localdomain python3.9[139710]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:47 np0005546420.localdomain sudo[139708]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:47 np0005546420.localdomain sudo[139800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqmiwmrqrdaekpswhwmdjwzhjmrztbri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926627.1767209-807-143192490520013/AnsiballZ_stat.py
Dec 05 09:23:47 np0005546420.localdomain sudo[139800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:47 np0005546420.localdomain python3.9[139802]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:47 np0005546420.localdomain sudo[139800]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:48 np0005546420.localdomain sudo[139873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqtlscrkamkninjfonoxsqzpfhgaonnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926627.1767209-807-143192490520013/AnsiballZ_copy.py
Dec 05 09:23:48 np0005546420.localdomain sudo[139873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:48 np0005546420.localdomain python3.9[139875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926627.1767209-807-143192490520013/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:48 np0005546420.localdomain sudo[139873]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35349 DF PROTO=TCP SPT=41334 DPT=9105 SEQ=2098303893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB86B210000000001030307) 
Dec 05 09:23:49 np0005546420.localdomain sudo[139965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uncmwokrhbeddzrnxvzfkyiktuzmvlrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926628.869852-852-199956450390190/AnsiballZ_stat.py
Dec 05 09:23:49 np0005546420.localdomain sudo[139965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:49 np0005546420.localdomain python3.9[139967]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:49 np0005546420.localdomain sudo[139965]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:49 np0005546420.localdomain sudo[140038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zawuoloukecwhbawqkhqffrtrwvbmyvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926628.869852-852-199956450390190/AnsiballZ_copy.py
Dec 05 09:23:49 np0005546420.localdomain sudo[140038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:49 np0005546420.localdomain python3.9[140040]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926628.869852-852-199956450390190/.source.yaml _original_basename=.d4u5_vwn follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:49 np0005546420.localdomain sudo[140038]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:50 np0005546420.localdomain sudo[140130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mzswraxrurqhfilvpaaprcljpgnmgvtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926630.0875435-898-121508620498173/AnsiballZ_stat.py
Dec 05 09:23:50 np0005546420.localdomain sudo[140130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:50 np0005546420.localdomain python3.9[140132]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:50 np0005546420.localdomain sudo[140130]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:50 np0005546420.localdomain sudo[140205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hsquyqsvfiutvfewlkyyhcbslzriiwil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926630.0875435-898-121508620498173/AnsiballZ_copy.py
Dec 05 09:23:50 np0005546420.localdomain sudo[140205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:51 np0005546420.localdomain python3.9[140207]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926630.0875435-898-121508620498173/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:51 np0005546420.localdomain sudo[140205]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:51 np0005546420.localdomain sudo[140297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptvbekcsugojssbyjgsnfhuqexplgwwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926631.3174496-942-262894912460516/AnsiballZ_command.py
Dec 05 09:23:51 np0005546420.localdomain sudo[140297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35351 DF PROTO=TCP SPT=41334 DPT=9105 SEQ=2098303893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB877190000000001030307) 
Dec 05 09:23:51 np0005546420.localdomain python3.9[140299]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:23:51 np0005546420.localdomain sudo[140297]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:52 np0005546420.localdomain sudo[140390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfwqmwgfyaffqemxipufscaxiyakgdzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926632.1443088-966-223639181681004/AnsiballZ_command.py
Dec 05 09:23:52 np0005546420.localdomain sudo[140390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:52 np0005546420.localdomain python3.9[140392]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:23:52 np0005546420.localdomain sudo[140390]: pam_unix(sudo:session): session closed for user root
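
The two commands above form a load-then-inspect pair: the first applies the iptables-compat base file copied a moment earlier, the second dumps the resulting ruleset as JSON so the play can parse it. Run by hand they would be:

    nft -f /etc/nftables/iptables.nft   # apply the base file just written
    nft -j list ruleset                 # emit the live ruleset as JSON
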
Dec 05 09:23:53 np0005546420.localdomain sudo[140483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lipxzqcmakjwpnonlejpuuczlanqpmoh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764926632.812803-990-5384061276378/AnsiballZ_edpm_nftables_from_files.py
Dec 05 09:23:53 np0005546420.localdomain sudo[140483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:53 np0005546420.localdomain python3[140485]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 09:23:53 np0005546420.localdomain sudo[140483]: pam_unix(sudo:session): session closed for user root
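
edpm_nftables_from_files reads the YAML rule snippets under /var/lib/edpm-config/firewall and turns them into the *.nft fragments rendered in the following tasks. At this point the directory would be expected to hold the two files copied above:

    ls /var/lib/edpm-config/firewall
    # edpm-nftables-base.yaml        rendered from base-rules.yaml.j2
    # edpm-nftables-user-rules.yaml  user-supplied additions
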
Dec 05 09:23:53 np0005546420.localdomain sudo[140575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jplynnekemfpbnwcogrpqpqbojwwaajz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926633.60556-1014-82021052733955/AnsiballZ_stat.py
Dec 05 09:23:53 np0005546420.localdomain sudo[140575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:54 np0005546420.localdomain python3.9[140577]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:54 np0005546420.localdomain sudo[140575]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:54 np0005546420.localdomain sudo[140648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uiettgupgacmldmpidpywrglrkbzqzbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926633.60556-1014-82021052733955/AnsiballZ_copy.py
Dec 05 09:23:54 np0005546420.localdomain sudo[140648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:54 np0005546420.localdomain python3.9[140650]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926633.60556-1014-82021052733955/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:54 np0005546420.localdomain sudo[140648]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58668 DF PROTO=TCP SPT=37806 DPT=9100 SEQ=3922794081 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB883D90000000001030307) 
Dec 05 09:23:55 np0005546420.localdomain sudo[140740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbtqttkmjibhgjrjljqvqdvozhheynlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926634.984416-1059-234245983985142/AnsiballZ_stat.py
Dec 05 09:23:55 np0005546420.localdomain sudo[140740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:55 np0005546420.localdomain python3.9[140742]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:55 np0005546420.localdomain sudo[140740]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:55 np0005546420.localdomain sudo[140813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjspvtndkaqpcufzeubnnuhbykfwwsbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926634.984416-1059-234245983985142/AnsiballZ_copy.py
Dec 05 09:23:55 np0005546420.localdomain sudo[140813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:56 np0005546420.localdomain python3.9[140815]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926634.984416-1059-234245983985142/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:56 np0005546420.localdomain sudo[140813]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:57 np0005546420.localdomain sudo[140905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-onrgmshlujtltymiesohtxjanidiblts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926636.9360938-1104-23369062354512/AnsiballZ_stat.py
Dec 05 09:23:57 np0005546420.localdomain sudo[140905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:57 np0005546420.localdomain python3.9[140907]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:57 np0005546420.localdomain sudo[140905]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:57 np0005546420.localdomain sudo[140978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gigfemorujemkhpdjllfvjdgjojwzdcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926636.9360938-1104-23369062354512/AnsiballZ_copy.py
Dec 05 09:23:57 np0005546420.localdomain sudo[140978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:58 np0005546420.localdomain python3.9[140980]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926636.9360938-1104-23369062354512/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:58 np0005546420.localdomain sudo[140978]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:59 np0005546420.localdomain sudo[141070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xizjhobtitwmbwuutubxojblkbgwriuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926638.7418852-1149-95701369028847/AnsiballZ_stat.py
Dec 05 09:23:59 np0005546420.localdomain sudo[141070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:59 np0005546420.localdomain python3.9[141072]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:23:59 np0005546420.localdomain sudo[141070]: pam_unix(sudo:session): session closed for user root
Dec 05 09:23:59 np0005546420.localdomain sudo[141143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hlxgsueotyrxnukheokrprherlrizyjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926638.7418852-1149-95701369028847/AnsiballZ_copy.py
Dec 05 09:23:59 np0005546420.localdomain sudo[141143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:23:59 np0005546420.localdomain python3.9[141145]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926638.7418852-1149-95701369028847/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:23:59 np0005546420.localdomain sudo[141143]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:00 np0005546420.localdomain sudo[141235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbqrwgqssmehnxlxbaznqumekelimjmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926640.0808678-1194-133681212659699/AnsiballZ_stat.py
Dec 05 09:24:00 np0005546420.localdomain sudo[141235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:00 np0005546420.localdomain python3.9[141237]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:24:00 np0005546420.localdomain sudo[141235]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:00 np0005546420.localdomain sudo[141308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjbmguctxzrtedmwpklntknbehtvkyhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926640.0808678-1194-133681212659699/AnsiballZ_copy.py
Dec 05 09:24:00 np0005546420.localdomain sudo[141308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:01 np0005546420.localdomain python3.9[141310]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926640.0808678-1194-133681212659699/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:01 np0005546420.localdomain sudo[141308]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:01 np0005546420.localdomain sudo[141400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qiyqzdjksjlrqksmepdzhslfhekguwlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926641.4009304-1239-159317585491336/AnsiballZ_file.py
Dec 05 09:24:01 np0005546420.localdomain sudo[141400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:01 np0005546420.localdomain python3.9[141402]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:01 np0005546420.localdomain sudo[141400]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:02 np0005546420.localdomain sudo[141492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abdhvykxjgqzrtbjqznrwtsdtxjzypwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926642.0564244-1263-230896888200386/AnsiballZ_command.py
Dec 05 09:24:02 np0005546420.localdomain sudo[141492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:02 np0005546420.localdomain python3.9[141494]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:24:02 np0005546420.localdomain sudo[141492]: pam_unix(sudo:session): session closed for user root
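
The pipeline above is a dry run: the five EDPM fragments are concatenated in load order and fed to nft on stdin, where -c checks the combined ruleset for errors without applying anything. Equivalent shell:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: parse and validate only
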
Dec 05 09:24:03 np0005546420.localdomain sudo[141588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rgpjobycciotonbkgttannlzrbzqqldw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926642.7181923-1287-149713326550074/AnsiballZ_blockinfile.py
Dec 05 09:24:03 np0005546420.localdomain sudo[141588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:03 np0005546420.localdomain python3.9[141590]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:03 np0005546420.localdomain sudo[141588]: pam_unix(sudo:session): session closed for user root
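
The blockinfile task anchors the EDPM files into the system-wide nftables config so they are reloaded on boot; with the marker settings shown (marker=# {mark} ANSIBLE MANAGED BLOCK, BEGIN/END), the resulting stanza in /etc/sysconfig/nftables.conf is:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The validate=nft -c -f %s parameter runs the same kind of syntax check against a temporary copy before the edited file is moved into place.
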
Dec 05 09:24:03 np0005546420.localdomain sudo[141681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ossscyuhefxgocoonmxwgltxwzqrthgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926643.6143003-1314-171509902465915/AnsiballZ_file.py
Dec 05 09:24:03 np0005546420.localdomain sudo[141681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:04 np0005546420.localdomain python3.9[141683]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:04 np0005546420.localdomain sudo[141681]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:04 np0005546420.localdomain sudo[141773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htybfsfolprznuwxsoqaufksjnjwlzvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926644.1348493-1314-161391577681206/AnsiballZ_file.py
Dec 05 09:24:04 np0005546420.localdomain sudo[141773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:04 np0005546420.localdomain python3.9[141775]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:04 np0005546420.localdomain sudo[141773]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:05 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48731 DF PROTO=TCP SPT=52648 DPT=9101 SEQ=4111041936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8AB110000000001030307) 
Dec 05 09:24:05 np0005546420.localdomain sudo[141865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bzfwfzetxfoukflbtbenyemwokdlejge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926645.0039597-1359-201791520588490/AnsiballZ_mount.py
Dec 05 09:24:05 np0005546420.localdomain sudo[141865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:05 np0005546420.localdomain python3.9[141867]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 05 09:24:05 np0005546420.localdomain sudo[141865]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:06 np0005546420.localdomain sudo[141958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tocvicxyqcnxtvezjsszxbggmkncjbuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926645.7891653-1359-100875609240194/AnsiballZ_mount.py
Dec 05 09:24:06 np0005546420.localdomain sudo[141958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:06 np0005546420.localdomain python3.9[141960]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 05 09:24:06 np0005546420.localdomain sudo[141958]: pam_unix(sudo:session): session closed for user root
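
The two ansible.posix.mount tasks mount hugetlbfs instances for 1 GiB and 2 MiB pages on the directories created just before; state=mounted mounts them now and persists the entries in /etc/fstab. By hand this amounts to:

    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # plus fstab lines of the form:
    # none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    # none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
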
Dec 05 09:24:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18364 DF PROTO=TCP SPT=36834 DPT=9101 SEQ=4070387668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8B1D90000000001030307) 
Dec 05 09:24:07 np0005546420.localdomain sshd[135219]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:24:07 np0005546420.localdomain systemd[1]: session-43.scope: Deactivated successfully.
Dec 05 09:24:07 np0005546420.localdomain systemd[1]: session-43.scope: Consumed 29.076s CPU time.
Dec 05 09:24:07 np0005546420.localdomain systemd-logind[762]: Session 43 logged out. Waiting for processes to exit.
Dec 05 09:24:07 np0005546420.localdomain systemd-logind[762]: Removed session 43.
Dec 05 09:24:09 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14277 DF PROTO=TCP SPT=43642 DPT=9101 SEQ=2474780175 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8B9D90000000001030307) 
Dec 05 09:24:09 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47813 DF PROTO=TCP SPT=43540 DPT=9100 SEQ=2119353659 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8BC8A0000000001030307) 
Dec 05 09:24:12 np0005546420.localdomain sshd[141976]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:24:12 np0005546420.localdomain sshd[141976]: Accepted publickey for zuul from 192.168.122.31 port 35278 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:24:12 np0005546420.localdomain systemd-logind[762]: New session 44 of user zuul.
Dec 05 09:24:12 np0005546420.localdomain systemd[1]: Started Session 44 of User zuul.
Dec 05 09:24:12 np0005546420.localdomain sshd[141976]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:24:12 np0005546420.localdomain sudo[142069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-khjnhrknmjwksqgwynomoofeszaqahct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926652.5220964-22-71183822846962/AnsiballZ_tempfile.py
Dec 05 09:24:12 np0005546420.localdomain sudo[142069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:13 np0005546420.localdomain python3.9[142071]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 05 09:24:13 np0005546420.localdomain sudo[142069]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:13 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64963 DF PROTO=TCP SPT=41084 DPT=9100 SEQ=1943152256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8CBD90000000001030307) 
Dec 05 09:24:14 np0005546420.localdomain sudo[142161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hniddoflkzlrfqfqcjzqarjbauxmcdac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926654.0956738-94-261513003324932/AnsiballZ_stat.py
Dec 05 09:24:14 np0005546420.localdomain sudo[142161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:14 np0005546420.localdomain python3.9[142163]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:24:14 np0005546420.localdomain sudo[142161]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:15 np0005546420.localdomain sudo[142255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ksyzhdiplwuwxcaqnazvigkbebgdnnlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926655.5584948-142-120318512425209/AnsiballZ_slurp.py
Dec 05 09:24:15 np0005546420.localdomain sudo[142255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:15 np0005546420.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 09:24:16 np0005546420.localdomain python3.9[142257]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 05 09:24:16 np0005546420.localdomain sudo[142255]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:17 np0005546420.localdomain sudo[142349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qzqeceqkfzyqzthspttzpiwbwvxcdjza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926656.809422-190-273134836544238/AnsiballZ_stat.py
Dec 05 09:24:17 np0005546420.localdomain sudo[142349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:17 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63801 DF PROTO=TCP SPT=55748 DPT=9882 SEQ=3752716113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8D9D30000000001030307) 
Dec 05 09:24:17 np0005546420.localdomain python3.9[142351]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.hd0gnl1c follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:24:17 np0005546420.localdomain sudo[142349]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:17 np0005546420.localdomain sudo[142381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:24:17 np0005546420.localdomain sudo[142381]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:24:17 np0005546420.localdomain sudo[142381]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:17 np0005546420.localdomain sudo[142396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:24:17 np0005546420.localdomain sudo[142396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:24:18 np0005546420.localdomain sudo[142396]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:18 np0005546420.localdomain sudo[142485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ygnaokmwvwfpsdsgxxnfbzmpzbmqcalw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926656.809422-190-273134836544238/AnsiballZ_copy.py
Dec 05 09:24:18 np0005546420.localdomain sudo[142485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:18 np0005546420.localdomain python3.9[142487]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.hd0gnl1c mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926656.809422-190-273134836544238/.source.hd0gnl1c _original_basename=.brit_03k follow=False checksum=d59cece82d3bbdcbe4933f17a77de42369988983 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:18 np0005546420.localdomain sudo[142485]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55829 DF PROTO=TCP SPT=42232 DPT=9105 SEQ=1483517251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8E0510000000001030307) 
Dec 05 09:24:19 np0005546420.localdomain sudo[142502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:24:19 np0005546420.localdomain sudo[142502]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:24:19 np0005546420.localdomain sudo[142502]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:20 np0005546420.localdomain sudo[142592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqlmutytdxmeqagstflvnymkqwdxrohu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926660.3529131-280-245191269019345/AnsiballZ_setup.py
Dec 05 09:24:20 np0005546420.localdomain sudo[142592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:21 np0005546420.localdomain python3.9[142594]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:24:21 np0005546420.localdomain sudo[142592]: pam_unix(sudo:session): session closed for user root
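
This setup call gathers only the SSH host-key facts (the RSA, Ed25519, and ECDSA public keys) from the node; collected across all hosts, these are what get assembled into the known_hosts block below. A roughly equivalent ad-hoc invocation would be:

    ansible localhost -m ansible.builtin.setup \
      -a 'gather_subset=!all,!min,ssh_host_key_rsa_public,ssh_host_key_ed25519_public,ssh_host_key_ecdsa_public'
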
Dec 05 09:24:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9977 DF PROTO=TCP SPT=39980 DPT=9882 SEQ=2013655122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8E9DA0000000001030307) 
Dec 05 09:24:22 np0005546420.localdomain sudo[142684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mtaxkezyypaybeumlynxyccvppphhmfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926662.2220788-329-239921375982254/AnsiballZ_blockinfile.py
Dec 05 09:24:22 np0005546420.localdomain sudo[142684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:22 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57697 DF PROTO=TCP SPT=47430 DPT=9105 SEQ=3900433487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB8EFD90000000001030307) 
Dec 05 09:24:22 np0005546420.localdomain python3.9[142686]: ansible-ansible.builtin.blockinfile Invoked with block=np0005546418.localdomain,192.168.122.105,np0005546418* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD9S5Z7rzST5j/fEC81CBzjbVnN/b1iPQZ35oKFbDVSZ3xrScwTjVDnymCRMpkG7ZjaGvyyMSy6sRwzcBVzWZGF94EKpFeYMdUdfpsK2dbevK8wHAAm7cfqUZ5sgTKGF4TOZZ08RJZ9Xc1fGGKeE0bg2QCqoKA7YzWR++lzm/LXf8DTXUhBN+xvwQ3rVN4Y8AIlXB2YS/FAkc2s3u95spaTjW0hbNonz/q6QiuuElDTfezQ9IkzHyYOFqIxYRnttkUuXTp5FodFYAlU3VOLHCoI6tZQk2f1Kt1ZZX4Umqd2RA4zu0IBbblyns+2Jy/Jg5MuKEZSC5X2xQ/tUeClu2+ZHxwKRMxnwAgAiYuC5ryGQuyc0vphUN3uE6JIxKd+8YgAscYSYvc7VoWqodvvt8eIxoXCDh1XbIsKKbWqosjwoNWAoNZUh+LcHIDskM+7FNALGudbtKgKRazoMRvGbZPWQr8FB2eTWiqo2TOBwHArzAXZmnKcg+ad9eMQtW6PX0M=
                                                            np0005546418.localdomain,192.168.122.105,np0005546418* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID6QeDYhauphPCMjY2u0ByifchL6qXHZv2fvWO3OCIzj
                                                            np0005546418.localdomain,192.168.122.105,np0005546418* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAOZfmYNbsgnO+sF3RpydJnHhRBeoml8dGRqN2Azszo4+xVxpgaJM8FuFbX8uQ8UvFhzziQ2hitNeW+ljaSZAAI=
                                                            np0005546415.localdomain,192.168.122.103,np0005546415* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkD6dMrlstq/08/i19MSGJhEADExfxigVjJJQ88FcvZHbzGOgQVpolfx1koKTyWN+Arobw6wFmJvZLTo8Bb6WoVTK7S5Ea1OnfJHT61JMRl/WjdLjR5dZtwV62H6dAQuwXtLXjjbx/PIaHGhjGeQ3mAmwEgTU06ey152S+ChTCN3ft7vCFw4DHXAly+guOSgi5JGOb3gMATYrMGVu90ONPr0mfPn6T6oBZQPEWvdKFCulrlj9zVZu7HsSSRQFMxH7KgZJzpkLllA4WVfnGbj38AXD2k/HkyLfYzY27ZsoOL1HyT4ardSL2aUb55JnBNuOxkTcFwxKYlyCL/gWk20rx9nJe7mp5Rl6iK4a8UA5SEKO0sudwL9uZ4JEMNAAViZ+5xpl7M0+YowEMffNSUrVJ8/SSa0beqOu9JTnZ+cEwNCNkJJM/h8ajcjEaAHeRXDkTkujntrvNR6KskZa+g94xtpw1nrG6xl0yzppj6k4nsmcRGGlicsbZEc9SZOW+qaM=
                                                            np0005546415.localdomain,192.168.122.103,np0005546415* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE9fcXJw9ebV6QMxfXz7L10O7RAB37asensC545IUkHw
                                                            np0005546415.localdomain,192.168.122.103,np0005546415* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDI+cyJXebg9ZNE0jCet/hwGIodMSGaIYE1ZtXGNSChdyfsTHtwJwqShV/elymR6mzhS+fxVDnoM9id5H9F/Mdg=
                                                            np0005546420.localdomain,192.168.122.107,np0005546420* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI9Y4hTQymJJTi7lwfGVKCetJ5Q4auPNryuYcUqXhqNAkgJUht3nxbV0LL2zw4tBsorx+hqOtHy6QfyMWc4r5hOjGRUOhC2uarhQho1134qkdAt7Wd1XMZFeslg1Vk7F8G5TciLUUJBsqvfKAsGc9/SQS5rWRQ90ssw6RtnrhuCDasOzJIdPA2tYjLQ2emSbjgfd1OuXSpKpSkko9b1cwE6trMzU8G7508xssCoDz66P8kF4Kf+OGT8iWzM8xKE0cB8b50ltkwnrxsK5Hwc8zz9LoLSU01AS9CNm299lqjPgZZhTOu6zSXvN6p4+CylbKvJO19AnMSzMEJZEPoHNCQ2SM+/LxQ9rIH8MAVrpw9SUndYbtXTvUkEsZRYAkH64dyfn+9kcYTPaf/oqkrvxuc6Nlk/uZ79dbjW0Vc+/XJXX9F7hLsdu3PK0kt4oBXIG9B9jdKXVobNiH7lsArspEnZ13zzspPyojH0UV6v0AfaZgCMP8b7Erg9y9+HPradoE=
                                                            np0005546420.localdomain,192.168.122.107,np0005546420* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTDR+XZw+XJDs6turN9eWht+z1Rm7llRsIRVQhAgcwk
                                                            np0005546420.localdomain,192.168.122.107,np0005546420* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGypM6u5I9iLb0poBkdrsQDwt3k6eMH320BTVkOVz+AbyBbdiE/TH/XKczZXTI77QkP2gP3kP+SvqRlk9KvFc5I=
                                                            np0005546419.localdomain,192.168.122.106,np0005546419* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEuRXji6XnIqsABVq0Qqof5KS4SAvlk4RgdtizNBr7m3ROTYSahE5AJNLCTMugmJtGXewQjNvC8Gcrwjha423XMFi1NpQBCu/U72HR15GJ4x0DRTlvDzeuyqmAuTQEBnQcjNlSIQ4FOJnMjeI6JzpCzCvQ8kOvkGMj6A3Hg/syH7t97g6vL8Cua473lHIav6GTZkm3SmFKQ3Xwj9z3cxUxUnrSgES1zowNRjtoEtPZjSgoF5b8nFIjaQf2ZwMcV0lopTVTvmRVyYDvsR8wFpqMebvWZkW7NQNAaUhRwiYfvQM5/uX1R294FSkW4UiMA5xWT6BMUvtJzexoxZwmrJN3E8I5NLL2KsN33G/6CHA5roanPqECSsRgwyhgQ8bARZgymqoTR9u/p8RRwj7J+x+qJCKMrG+inICVI/o3oOAD2Kdc2rFHXCzdC7sNhjF9/0HPZy8Dt2phAaMcAs4ueBT1Qv/WP22vx3lBguSxEC09rfl+zsp5KAd3jOr9hJBn34E=
                                                            np0005546419.localdomain,192.168.122.106,np0005546419* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAFZUl1WIZPoNWbG6u+CKdVGmLGMNavWtmXvlinpmxXI
                                                            np0005546419.localdomain,192.168.122.106,np0005546419* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIedaj86c0CB1SnGeK8riOelIj4G/rDY87EdW81k0S2GBtRgoE0JgjOLF/vaGYuAwyijqzjSZkpIcPFGAAyqLhc=
                                                            np0005546416.localdomain,192.168.122.104,np0005546416* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBknSonWn2K7oVrigtLeGeXWlaMY1uJqi1743zO2mguB8ceS1WtlyZavpdSnzpqiGiIwguuYuBxNKWaZMI/9XZyZKspYWl5eArdwgxtnKFyHWmHop7/MeX+Y+J7CrfiQ8MajXX1sy1YpxunvdWo7DK3K9DJfTaJ6onr8amsw55w0Pf5HOW0UBGE+AqFmTy/5btxUh4cKFDwRjGeJJps2YFr/p9mdITdZy6sxC+0QCi9XHI7FrpRbYfK0zSSrOBpixOr0sahUWL/3ZUVF5uiJbGTaihxnFrAN3SqoJsWJNJADqmp+E0K3oSw2xsGEvRz02E5n3+GqaYejfpUMdLjvSmTfEKVqlMiL8M0AtBvfeP7KlZCpABiuvopbKIXNsjFfG1HXkFrFHbCgRsfmg7e+8ThU6J66lb2cJhHrtKuP+uePggolCX4bqdv8abdxV9keT+DJCOZ6iMJnDTI8ggTwMTBVwykvMZXIhwiJruh8oACUYaubPkkGSz4VhPIqfSch0=
                                                            np0005546416.localdomain,192.168.122.104,np0005546416* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKpX3BRJ2hZsEWilRzk5yd6bYl9erWlONU767dX4uGRx
                                                            np0005546416.localdomain,192.168.122.104,np0005546416* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJJ19D7bhWV35hM8ynnuv7nvdNTRKxzJm7cFU0HyHTG1AIBFi4DwBNR3b2ZvZnp6UJEwHut3lye8XaYCEAf+o+k=
                                                            np0005546421.localdomain,192.168.122.108,np0005546421* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOnO4FOEzvhvnfZvvg9C7oar+ml2He45IxleHN54kwSVAvs2ltf36WvXeS2XAi7WgRxM+SZhG+GxbHWO/u3KqZQXbOWufPkzZF3oMisaK3ZDVZLqKvlrQZf2+29fCEYI9L5zPC/HNP6jqIyDlBSXGYPLQgUjpxxieUICaQ0fIp4WhlqviONuO0ZTwWQdPf5CYPALkVZ74wN1aGPulFSaGYretHzLaUvZvZQVL4q4PRI+7YpxvT1NyDOyTvw5u8TpzZXKp67nFfFtlbX8BvY9f1FVlgzcPwQvxzYWeJy5j9Cv0xoJ56dXmUueau39rhB/CBpKfhymLq91H1nh+F175gPPt5KZA5cfZg7fWlshSRjozK3Z53WpNGrpQtCIjhxblJ5Z3mxAPGcyYYOXoG/iv/IDwMvhkswL2Cqb6/ww6osSP2EJQIjWsS+CoYjynw+g7e++29qN8QiRLOqOuges85TiZ2vxP5lkvs8V3oAF+k4OsPOGPKzibXNDl5PyGwhVU=
                                                            np0005546421.localdomain,192.168.122.108,np0005546421* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFiUBcxudmUyqsqFoMU+JV8/mQ+WLu/s/QM5WUg+IPfZ
                                                            np0005546421.localdomain,192.168.122.108,np0005546421* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJh/II40mXscwERhviKJwwzVT152Sof000TH1wr5DjK1TZmtkBC4tSetb4Sv3lusV4r4bPFittXVnUkY+bspkL8=
                                                             create=True mode=0644 path=/tmp/ansible.hd0gnl1c state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:22 np0005546420.localdomain sudo[142684]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:23 np0005546420.localdomain sudo[142776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfweehlanvwqxfblxipngtffwhzropoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926663.5602577-377-119072232228010/AnsiballZ_command.py
Dec 05 09:24:23 np0005546420.localdomain sudo[142776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:24 np0005546420.localdomain python3.9[142778]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.hd0gnl1c' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:24:24 np0005546420.localdomain sudo[142776]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:25 np0005546420.localdomain sudo[142870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-owolgsoytngxcxdbdxxqlwbbhajtycti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926664.9226735-425-200908552061211/AnsiballZ_file.py
Dec 05 09:24:25 np0005546420.localdomain sudo[142870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:25 np0005546420.localdomain python3.9[142872]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.hd0gnl1c state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:25 np0005546420.localdomain sudo[142870]: pam_unix(sudo:session): session closed for user root
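
Taken together, the last few tasks are a build-then-swap pattern for the system-wide known_hosts file: create a scratch file, assemble every node's host keys into it with blockinfile, copy it over /etc/ssh/ssh_known_hosts in a single write, then delete the scratch file. As plain shell, assuming the blockinfile step is what populates the temp file:

    tmp=$(mktemp /tmp/ansible.XXXXXXXX)       # ansible.builtin.tempfile
    # ... blockinfile writes the host keys between ANSIBLE MANAGED BLOCK markers ...
    cat "$tmp" > /etc/ssh/ssh_known_hosts     # replace the file in one shot
    rm -f "$tmp"                              # ansible.builtin.file state=absent
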
Dec 05 09:24:26 np0005546420.localdomain sshd[141976]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:24:26 np0005546420.localdomain systemd[1]: session-44.scope: Deactivated successfully.
Dec 05 09:24:26 np0005546420.localdomain systemd[1]: session-44.scope: Consumed 4.278s CPU time.
Dec 05 09:24:26 np0005546420.localdomain systemd-logind[762]: Session 44 logged out. Waiting for processes to exit.
Dec 05 09:24:26 np0005546420.localdomain systemd-logind[762]: Removed session 44.
Dec 05 09:24:33 np0005546420.localdomain sshd[142887]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:24:33 np0005546420.localdomain sshd[142887]: Accepted publickey for zuul from 192.168.122.31 port 41432 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:24:33 np0005546420.localdomain systemd-logind[762]: New session 45 of user zuul.
Dec 05 09:24:33 np0005546420.localdomain systemd[1]: Started Session 45 of User zuul.
Dec 05 09:24:33 np0005546420.localdomain sshd[142887]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:24:34 np0005546420.localdomain python3.9[142980]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:24:35 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43080 DF PROTO=TCP SPT=50180 DPT=9101 SEQ=1222476142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB920420000000001030307) 
Dec 05 09:24:35 np0005546420.localdomain sudo[143074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ppxqsnmxwnuualcnzlntggkefvnzmqma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926674.694808-57-59012188988526/AnsiballZ_systemd.py
Dec 05 09:24:35 np0005546420.localdomain sudo[143074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:35 np0005546420.localdomain python3.9[143076]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 09:24:35 np0005546420.localdomain sudo[143074]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:38 np0005546420.localdomain sudo[143168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tydusvecqtkrofjlcypgawcjqsrybuze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926677.838139-81-72405954189277/AnsiballZ_systemd.py
Dec 05 09:24:38 np0005546420.localdomain sudo[143168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:38 np0005546420.localdomain python3.9[143170]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:24:38 np0005546420.localdomain sudo[143168]: pam_unix(sudo:session): session closed for user root
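
The two systemd tasks make sure sshd survives the firewall rework: the first enables the unit, the second starts it if it is not already running. Equivalent commands:

    systemctl enable sshd   # ansible.builtin.systemd enabled=True
    systemctl start sshd    # ansible.builtin.systemd state=started
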
Dec 05 09:24:39 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12313 DF PROTO=TCP SPT=35680 DPT=9100 SEQ=1527965539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB931BB0000000001030307) 
Dec 05 09:24:39 np0005546420.localdomain sudo[143261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dnkxeupewtvzdstytbyouttzibdtyrpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926678.6979303-108-57651722338669/AnsiballZ_command.py
Dec 05 09:24:39 np0005546420.localdomain sudo[143261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:40 np0005546420.localdomain python3.9[143263]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:24:40 np0005546420.localdomain sudo[143261]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:40 np0005546420.localdomain sudo[143354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kfxjnyulcxiybzpdhkodhpiazrwqhgiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926680.2454877-132-268963013091274/AnsiballZ_stat.py
Dec 05 09:24:40 np0005546420.localdomain sudo[143354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:40 np0005546420.localdomain python3.9[143356]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:24:40 np0005546420.localdomain sudo[143354]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:41 np0005546420.localdomain sudo[143448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xniavhxejdpjajizucjgvbruvbfphlni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926681.5652554-156-167421251121674/AnsiballZ_command.py
Dec 05 09:24:41 np0005546420.localdomain sudo[143448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:41 np0005546420.localdomain python3.9[143450]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:24:42 np0005546420.localdomain sudo[143448]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:42 np0005546420.localdomain sudo[143543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rvkbriqtnzstsdlhdcuyumercqunwfte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926682.314133-180-50131434059505/AnsiballZ_file.py
Dec 05 09:24:42 np0005546420.localdomain sudo[143543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:42 np0005546420.localdomain python3.9[143545]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:42 np0005546420.localdomain sudo[143543]: pam_unix(sudo:session): session closed for user root
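
This sequence is the apply side of a change-marker pattern: edpm-rules.nft.changed was touched earlier when the rule files were rewritten; here the play reloads the chains file, checks for the marker, reapplies only the flush/rules/update-jumps fragments when it is present, and finally removes the marker. Roughly:

    nft -f /etc/nftables/edpm-chains.nft                  # ensure chains exist
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi
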
Dec 05 09:24:43 np0005546420.localdomain sshd[142887]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:24:43 np0005546420.localdomain systemd[1]: session-45.scope: Deactivated successfully.
Dec 05 09:24:43 np0005546420.localdomain systemd[1]: session-45.scope: Consumed 3.898s CPU time.
Dec 05 09:24:43 np0005546420.localdomain systemd-logind[762]: Session 45 logged out. Waiting for processes to exit.
Dec 05 09:24:43 np0005546420.localdomain systemd-logind[762]: Removed session 45.
Dec 05 09:24:47 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=493 DF PROTO=TCP SPT=52578 DPT=9882 SEQ=1810433406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB94F040000000001030307) 
Dec 05 09:24:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=494 DF PROTO=TCP SPT=52578 DPT=9882 SEQ=1810433406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB953190000000001030307) 
Dec 05 09:24:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21193 DF PROTO=TCP SPT=33106 DPT=9105 SEQ=573357200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB955800000000001030307) 
Dec 05 09:24:49 np0005546420.localdomain sshd[143560]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:24:49 np0005546420.localdomain sshd[143560]: Accepted publickey for zuul from 192.168.122.31 port 33518 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:24:49 np0005546420.localdomain systemd-logind[762]: New session 46 of user zuul.
Dec 05 09:24:49 np0005546420.localdomain systemd[1]: Started Session 46 of User zuul.
Dec 05 09:24:49 np0005546420.localdomain sshd[143560]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:24:49 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21194 DF PROTO=TCP SPT=33106 DPT=9105 SEQ=573357200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB9599A0000000001030307) 
Dec 05 09:24:50 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=495 DF PROTO=TCP SPT=52578 DPT=9882 SEQ=1810433406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB95B190000000001030307) 
Dec 05 09:24:50 np0005546420.localdomain python3.9[143653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:24:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21195 DF PROTO=TCP SPT=33106 DPT=9105 SEQ=573357200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB961990000000001030307) 
Dec 05 09:24:52 np0005546420.localdomain sudo[143747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kddijwnimvtymsegmwqabqahrzwpmcqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926691.756088-63-62912753329501/AnsiballZ_setup.py
Dec 05 09:24:52 np0005546420.localdomain sudo[143747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:52 np0005546420.localdomain python3.9[143749]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:24:52 np0005546420.localdomain sudo[143747]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:53 np0005546420.localdomain sudo[143801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfphmmuqubkoilfxqzxakjjjohexapxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926691.756088-63-62912753329501/AnsiballZ_dnf.py
Dec 05 09:24:53 np0005546420.localdomain sudo[143801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:53 np0005546420.localdomain python3.9[143803]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 09:24:53 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24623 DF PROTO=TCP SPT=35974 DPT=9102 SEQ=1991164398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB967580000000001030307) 
Dec 05 09:24:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=496 DF PROTO=TCP SPT=52578 DPT=9882 SEQ=1810433406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB96ADA0000000001030307) 
Dec 05 09:24:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21196 DF PROTO=TCP SPT=33106 DPT=9105 SEQ=573357200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB971590000000001030307) 
Dec 05 09:24:56 np0005546420.localdomain sudo[143801]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:57 np0005546420.localdomain python3.9[143895]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:24:58 np0005546420.localdomain sudo[143986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zerrwnyelaulfqctopxdcixgsjjulerg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926698.059729-126-154550674955303/AnsiballZ_file.py
Dec 05 09:24:58 np0005546420.localdomain sudo[143986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:58 np0005546420.localdomain python3.9[143988]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:58 np0005546420.localdomain sudo[143986]: pam_unix(sudo:session): session closed for user root
Dec 05 09:24:59 np0005546420.localdomain sudo[144078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nwyvryqmrodcjnhrxlbkbumccfisvgyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926699.016581-150-276997904592569/AnsiballZ_file.py
Dec 05 09:24:59 np0005546420.localdomain sudo[144078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:24:59 np0005546420.localdomain python3.9[144080]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:24:59 np0005546420.localdomain sudo[144078]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24626 DF PROTO=TCP SPT=35974 DPT=9102 SEQ=1991164398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB983190000000001030307) 
Dec 05 09:25:00 np0005546420.localdomain sudo[144170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfmezbgrdhbnlfgkippcbshcepjjvuzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926700.3575895-174-67951427575413/AnsiballZ_lineinfile.py
Dec 05 09:25:00 np0005546420.localdomain sudo[144170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:01 np0005546420.localdomain python3.9[144172]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated
                                                            Core libraries or services have been updated since boot-up:
                                                              * systemd
                                                            
                                                            Reboot is required to fully utilize these updates.
                                                            More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:01 np0005546420.localdomain sudo[144170]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:01 np0005546420.localdomain python3.9[144262]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:25:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=497 DF PROTO=TCP SPT=52578 DPT=9882 SEQ=1810433406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB98BDA0000000001030307) 
Dec 05 09:25:02 np0005546420.localdomain python3.9[144352]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:25:03 np0005546420.localdomain python3.9[144444]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:25:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21197 DF PROTO=TCP SPT=33106 DPT=9105 SEQ=573357200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB991DA0000000001030307) 
Dec 05 09:25:04 np0005546420.localdomain sshd[143560]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:25:04 np0005546420.localdomain systemd[1]: session-46.scope: Deactivated successfully.
Dec 05 09:25:04 np0005546420.localdomain systemd[1]: session-46.scope: Consumed 8.880s CPU time.
Dec 05 09:25:04 np0005546420.localdomain systemd-logind[762]: Session 46 logged out. Waiting for processes to exit.
Dec 05 09:25:04 np0005546420.localdomain systemd-logind[762]: Removed session 46.
Dec 05 09:25:08 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45031 DF PROTO=TCP SPT=56190 DPT=9101 SEQ=4267002249 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB9A1990000000001030307) 
Dec 05 09:25:09 np0005546420.localdomain sshd[144459]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:25:10 np0005546420.localdomain sshd[144459]: Accepted publickey for zuul from 192.168.122.31 port 57724 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:25:10 np0005546420.localdomain systemd-logind[762]: New session 47 of user zuul.
Dec 05 09:25:10 np0005546420.localdomain systemd[1]: Started Session 47 of User zuul.
Dec 05 09:25:10 np0005546420.localdomain sshd[144459]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:25:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64185 DF PROTO=TCP SPT=33216 DPT=9100 SEQ=1854425296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB9AAD90000000001030307) 
Dec 05 09:25:11 np0005546420.localdomain python3.9[144552]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:25:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64186 DF PROTO=TCP SPT=33216 DPT=9100 SEQ=1854425296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB9B2DB0000000001030307) 
Dec 05 09:25:14 np0005546420.localdomain sudo[144646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgvtnrlodmssrkkrenytmqsyctudvesb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926713.7960598-158-256189337048730/AnsiballZ_file.py
Dec 05 09:25:14 np0005546420.localdomain sudo[144646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:14 np0005546420.localdomain python3.9[144648]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:25:14 np0005546420.localdomain sudo[144646]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:15 np0005546420.localdomain sudo[144738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-klqcyqdftjbokgrsfwjrqvcdoprdkfgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926714.6037886-183-55723392399626/AnsiballZ_stat.py
Dec 05 09:25:15 np0005546420.localdomain sudo[144738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:15 np0005546420.localdomain python3.9[144740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:15 np0005546420.localdomain sudo[144738]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:15 np0005546420.localdomain chronyd[135203]: Selected source 23.133.168.246 (pool.ntp.org)
Dec 05 09:25:15 np0005546420.localdomain sudo[144811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pedqgbxsgoacxizcgkkbehtbbqwhpekb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926714.6037886-183-55723392399626/AnsiballZ_copy.py
Dec 05 09:25:15 np0005546420.localdomain sudo[144811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:15 np0005546420.localdomain python3.9[144813]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926714.6037886-183-55723392399626/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:15 np0005546420.localdomain sudo[144811]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:16 np0005546420.localdomain sudo[144903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewqmencjfpykjskmbwbupvqowjanvhtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926716.0903971-233-64845823694147/AnsiballZ_file.py
Dec 05 09:25:16 np0005546420.localdomain sudo[144903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:16 np0005546420.localdomain python3.9[144905]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:25:16 np0005546420.localdomain sudo[144903]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64187 DF PROTO=TCP SPT=33216 DPT=9100 SEQ=1854425296 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB9C2990000000001030307) 
Dec 05 09:25:16 np0005546420.localdomain sudo[144995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nnrgfwmrowxmavctxllyxwirltipcvki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926716.7144866-257-74842118834799/AnsiballZ_stat.py
Dec 05 09:25:16 np0005546420.localdomain sudo[144995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:17 np0005546420.localdomain python3.9[144997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:17 np0005546420.localdomain sudo[144995]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:17 np0005546420.localdomain sudo[145068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yepijmjqlhokfbfrixoqnyjnhjaepwok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926716.7144866-257-74842118834799/AnsiballZ_copy.py
Dec 05 09:25:17 np0005546420.localdomain sudo[145068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:17 np0005546420.localdomain python3.9[145070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926716.7144866-257-74842118834799/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:17 np0005546420.localdomain sudo[145068]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:18 np0005546420.localdomain sudo[145160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbnuctunjzdmdyavwgyltbotddbwqwyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926717.9050386-306-216301076417447/AnsiballZ_file.py
Dec 05 09:25:18 np0005546420.localdomain sudo[145160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:18 np0005546420.localdomain python3.9[145162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:25:18 np0005546420.localdomain sudo[145160]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:18 np0005546420.localdomain sudo[145252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vcwofxlvzucsvzrtbwursyvxvzyynext ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926718.5428836-332-135872061785634/AnsiballZ_stat.py
Dec 05 09:25:18 np0005546420.localdomain sudo[145252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15125 DF PROTO=TCP SPT=41994 DPT=9105 SEQ=2991273458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB9CAB10000000001030307) 
Dec 05 09:25:18 np0005546420.localdomain python3.9[145254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:18 np0005546420.localdomain sudo[145252]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:19 np0005546420.localdomain sudo[145325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btpdsovsduppnqdoowrphnaxxiftmmyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926718.5428836-332-135872061785634/AnsiballZ_copy.py
Dec 05 09:25:19 np0005546420.localdomain sudo[145325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:19 np0005546420.localdomain python3.9[145327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926718.5428836-332-135872061785634/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:19 np0005546420.localdomain sudo[145325]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:19 np0005546420.localdomain sudo[145328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:25:19 np0005546420.localdomain sudo[145328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:25:19 np0005546420.localdomain sudo[145328]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:19 np0005546420.localdomain sudo[145357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 09:25:19 np0005546420.localdomain sudo[145357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:25:19 np0005546420.localdomain sudo[145454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ocsfqatvhlemxcxsovrdkylinufournb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926719.7011957-377-172471139101323/AnsiballZ_file.py
Dec 05 09:25:19 np0005546420.localdomain sudo[145454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:20 np0005546420.localdomain sudo[145357]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:20 np0005546420.localdomain python3.9[145461]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:25:20 np0005546420.localdomain sudo[145454]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:20 np0005546420.localdomain sudo[145470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:25:20 np0005546420.localdomain sudo[145470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:25:20 np0005546420.localdomain sudo[145470]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:20 np0005546420.localdomain sudo[145498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:25:20 np0005546420.localdomain sudo[145498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:25:20 np0005546420.localdomain sudo[145589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fbxeuftcohpnqeoumkbvblsambjducml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926720.2923634-398-65769245797784/AnsiballZ_stat.py
Dec 05 09:25:20 np0005546420.localdomain sudo[145589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:20 np0005546420.localdomain python3.9[145591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:20 np0005546420.localdomain sudo[145589]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:20 np0005546420.localdomain sudo[145498]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:21 np0005546420.localdomain sudo[145650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:25:21 np0005546420.localdomain sudo[145650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:25:21 np0005546420.localdomain sudo[145650]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:21 np0005546420.localdomain sudo[145708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cuybqczyqbpdimddkdeyibtsvdbrvytm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926720.2923634-398-65769245797784/AnsiballZ_copy.py
Dec 05 09:25:21 np0005546420.localdomain sudo[145708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:21 np0005546420.localdomain python3.9[145710]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926720.2923634-398-65769245797784/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:21 np0005546420.localdomain sudo[145708]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15127 DF PROTO=TCP SPT=41994 DPT=9105 SEQ=2991273458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB9D69A0000000001030307) 
Dec 05 09:25:22 np0005546420.localdomain sudo[145800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdicuzugfpmjkvywdfuopodxmeehphhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926721.9805174-446-1239865938515/AnsiballZ_file.py
Dec 05 09:25:22 np0005546420.localdomain sudo[145800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:22 np0005546420.localdomain python3.9[145802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:25:22 np0005546420.localdomain sudo[145800]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:23 np0005546420.localdomain sudo[145892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmaogrxzkgpgzjoazpytpwlqtpgwwldy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926723.1437588-469-123035082154691/AnsiballZ_stat.py
Dec 05 09:25:23 np0005546420.localdomain sudo[145892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:23 np0005546420.localdomain python3.9[145894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:23 np0005546420.localdomain sudo[145892]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:23 np0005546420.localdomain sudo[145965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dtmcbwiscnbrvoqlzezdzhptguiknicp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926723.1437588-469-123035082154691/AnsiballZ_copy.py
Dec 05 09:25:23 np0005546420.localdomain sudo[145965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:24 np0005546420.localdomain python3.9[145967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926723.1437588-469-123035082154691/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:24 np0005546420.localdomain sudo[145965]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:24 np0005546420.localdomain sudo[146057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-astaatntaabsplrmlnzrvwxqxibfedvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926724.3831222-518-189577947913373/AnsiballZ_file.py
Dec 05 09:25:24 np0005546420.localdomain sudo[146057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:24 np0005546420.localdomain python3.9[146059]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:25:24 np0005546420.localdomain sudo[146057]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:25 np0005546420.localdomain sudo[146149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bergydzwoikqxebbqunkudwmzywklite ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926725.0129879-543-21729726374921/AnsiballZ_stat.py
Dec 05 09:25:25 np0005546420.localdomain sudo[146149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24628 DF PROTO=TCP SPT=35974 DPT=9102 SEQ=1991164398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB9E3DA0000000001030307) 
Dec 05 09:25:25 np0005546420.localdomain python3.9[146151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:25 np0005546420.localdomain sudo[146149]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:25 np0005546420.localdomain sudo[146222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-guidmhzkxhiumlcocvujnuzkbmroualy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926725.0129879-543-21729726374921/AnsiballZ_copy.py
Dec 05 09:25:25 np0005546420.localdomain sudo[146222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:25 np0005546420.localdomain python3.9[146224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926725.0129879-543-21729726374921/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:25 np0005546420.localdomain sudo[146222]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:26 np0005546420.localdomain sudo[146314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhamhyhyvvdisrfmsctrhtchoymvxrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926726.1593006-588-148356926776457/AnsiballZ_file.py
Dec 05 09:25:26 np0005546420.localdomain sudo[146314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:26 np0005546420.localdomain python3.9[146316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:25:26 np0005546420.localdomain sudo[146314]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:27 np0005546420.localdomain sudo[146406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-taaxtsqfxvurhdzubqhulljppbjdwrtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926726.7457538-612-73129601584910/AnsiballZ_stat.py
Dec 05 09:25:27 np0005546420.localdomain sudo[146406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:27 np0005546420.localdomain python3.9[146408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:27 np0005546420.localdomain sudo[146406]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:27 np0005546420.localdomain sudo[146479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqxbhbdlfzuwtkxkdtnwcimntkbymfof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926726.7457538-612-73129601584910/AnsiballZ_copy.py
Dec 05 09:25:27 np0005546420.localdomain sudo[146479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:27 np0005546420.localdomain python3.9[146481]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926726.7457538-612-73129601584910/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:27 np0005546420.localdomain sudo[146479]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:28 np0005546420.localdomain sudo[146571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmumylxfpxxwfcadxrwjlhpnaqjuwonk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926727.9474761-658-268102574890519/AnsiballZ_file.py
Dec 05 09:25:28 np0005546420.localdomain sudo[146571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:28 np0005546420.localdomain python3.9[146573]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:25:28 np0005546420.localdomain sudo[146571]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:28 np0005546420.localdomain sudo[146663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-esjgxdbczjvlybawkfzhqnsfsrpfqaqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926728.5885284-684-234052916383481/AnsiballZ_stat.py
Dec 05 09:25:28 np0005546420.localdomain sudo[146663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:29 np0005546420.localdomain python3.9[146665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:29 np0005546420.localdomain sudo[146663]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:29 np0005546420.localdomain sudo[146736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chgoioespfrrnigtobkfzlkaqqqmmidq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926728.5885284-684-234052916383481/AnsiballZ_copy.py
Dec 05 09:25:29 np0005546420.localdomain sudo[146736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:29 np0005546420.localdomain python3.9[146738]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926728.5885284-684-234052916383481/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:29 np0005546420.localdomain sudo[146736]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:30 np0005546420.localdomain sshd[144459]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:25:30 np0005546420.localdomain systemd-logind[762]: Session 47 logged out. Waiting for processes to exit.
Dec 05 09:25:30 np0005546420.localdomain systemd[1]: session-47.scope: Deactivated successfully.
Dec 05 09:25:30 np0005546420.localdomain systemd[1]: session-47.scope: Consumed 11.832s CPU time.
Dec 05 09:25:30 np0005546420.localdomain systemd-logind[762]: Removed session 47.
Dec 05 09:25:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47715 DF PROTO=TCP SPT=43170 DPT=9102 SEQ=2139411743 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB9F8590000000001030307) 
Dec 05 09:25:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22250 DF PROTO=TCP SPT=43140 DPT=9882 SEQ=2244964579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AB9FFDA0000000001030307) 
Dec 05 09:25:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15129 DF PROTO=TCP SPT=41994 DPT=9105 SEQ=2991273458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA05D90000000001030307) 
Dec 05 09:25:35 np0005546420.localdomain sshd[146753]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:25:35 np0005546420.localdomain sshd[146753]: Accepted publickey for zuul from 192.168.122.31 port 42696 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:25:35 np0005546420.localdomain systemd-logind[762]: New session 48 of user zuul.
Dec 05 09:25:35 np0005546420.localdomain systemd[1]: Started Session 48 of User zuul.
Dec 05 09:25:35 np0005546420.localdomain sshd[146753]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:25:36 np0005546420.localdomain sudo[146846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzxvbqtebtlbojrqxeybzwmlimzskjvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926736.1291103-27-151468546971281/AnsiballZ_file.py
Dec 05 09:25:36 np0005546420.localdomain sudo[146846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45034 DF PROTO=TCP SPT=56190 DPT=9101 SEQ=4267002249 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA11D90000000001030307) 
Dec 05 09:25:37 np0005546420.localdomain python3.9[146848]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:37 np0005546420.localdomain sudo[146846]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:37 np0005546420.localdomain sudo[146938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzmxzjgsccblzffsbetfigsmcokotvse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926737.3584037-63-3302756684345/AnsiballZ_stat.py
Dec 05 09:25:37 np0005546420.localdomain sudo[146938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:37 np0005546420.localdomain python3.9[146940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:37 np0005546420.localdomain sudo[146938]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:38 np0005546420.localdomain sudo[147011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ohoceztfhijfjbhxcmqjblqkbpadtlov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926737.3584037-63-3302756684345/AnsiballZ_copy.py
Dec 05 09:25:38 np0005546420.localdomain sudo[147011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:38 np0005546420.localdomain python3.9[147013]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926737.3584037-63-3302756684345/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=56b574bbcbb2378bafed25b3f279b3c007056bbe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:38 np0005546420.localdomain sudo[147011]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:38 np0005546420.localdomain sudo[147103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qikvqkyzflroklgxlizpsfctdnptvgba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926738.755682-63-117113246115748/AnsiballZ_stat.py
Dec 05 09:25:38 np0005546420.localdomain sudo[147103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:39 np0005546420.localdomain python3.9[147105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:39 np0005546420.localdomain sudo[147103]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:39 np0005546420.localdomain sudo[147176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezkszcuprgznncqcbzzfkhunohekjkko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926738.755682-63-117113246115748/AnsiballZ_copy.py
Dec 05 09:25:39 np0005546420.localdomain sudo[147176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:39 np0005546420.localdomain python3.9[147178]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926738.755682-63-117113246115748/.source.conf _original_basename=ceph.conf follow=False checksum=9ddad61351f9bf53ca5a99a509b37f8f58fbf3e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:39 np0005546420.localdomain sudo[147176]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:40 np0005546420.localdomain sshd[146753]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:25:40 np0005546420.localdomain systemd[1]: session-48.scope: Deactivated successfully.
Dec 05 09:25:40 np0005546420.localdomain systemd[1]: session-48.scope: Consumed 2.310s CPU time.
Dec 05 09:25:40 np0005546420.localdomain systemd-logind[762]: Session 48 logged out. Waiting for processes to exit.
Dec 05 09:25:40 np0005546420.localdomain systemd-logind[762]: Removed session 48.
Dec 05 09:25:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14205 DF PROTO=TCP SPT=40828 DPT=9100 SEQ=368522736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA20190000000001030307) 
Dec 05 09:25:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14206 DF PROTO=TCP SPT=40828 DPT=9100 SEQ=368522736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA28190000000001030307) 
Dec 05 09:25:45 np0005546420.localdomain sshd[147193]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:25:45 np0005546420.localdomain sshd[147193]: Accepted publickey for zuul from 192.168.122.31 port 60718 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:25:45 np0005546420.localdomain systemd-logind[762]: New session 49 of user zuul.
Dec 05 09:25:45 np0005546420.localdomain systemd[1]: Started Session 49 of User zuul.
Dec 05 09:25:45 np0005546420.localdomain sshd[147193]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:25:46 np0005546420.localdomain python3.9[147286]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:25:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14207 DF PROTO=TCP SPT=40828 DPT=9100 SEQ=368522736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA37DA0000000001030307) 
Dec 05 09:25:47 np0005546420.localdomain sudo[147380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xbrqnbgcuktfsugsnwbxmkajhpxovplj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926747.3154685-63-155843053675749/AnsiballZ_file.py
Dec 05 09:25:47 np0005546420.localdomain sudo[147380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:47 np0005546420.localdomain python3.9[147382]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:25:47 np0005546420.localdomain sudo[147380]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:48 np0005546420.localdomain sudo[147472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hxiwijpotdqmssbtptpflgwscbjkeukv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926748.0498784-63-112707247771290/AnsiballZ_file.py
Dec 05 09:25:48 np0005546420.localdomain sudo[147472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:48 np0005546420.localdomain python3.9[147474]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:25:48 np0005546420.localdomain sudo[147472]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22251 DF PROTO=TCP SPT=43140 DPT=9882 SEQ=2244964579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA3FD90000000001030307) 
Dec 05 09:25:49 np0005546420.localdomain python3.9[147564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:25:49 np0005546420.localdomain sudo[147654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-doonzzqyawrehaoxjjxzdztfjujximkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926749.463254-132-168481499144844/AnsiballZ_seboolean.py
Dec 05 09:25:49 np0005546420.localdomain sudo[147654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:50 np0005546420.localdomain python3.9[147656]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 05 09:25:50 np0005546420.localdomain sudo[147654]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:50 np0005546420.localdomain sudo[147746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kenuncjonmvhthzmaacuxgmrwygvwzhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926750.5420449-162-156011685948698/AnsiballZ_setup.py
Dec 05 09:25:50 np0005546420.localdomain sudo[147746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:51 np0005546420.localdomain python3.9[147748]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:25:51 np0005546420.localdomain sudo[147746]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:51 np0005546420.localdomain sudo[147800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yeyhayyzhbdphucjpnxoajxfrzddsdfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926750.5420449-162-156011685948698/AnsiballZ_dnf.py
Dec 05 09:25:51 np0005546420.localdomain sudo[147800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41691 DF PROTO=TCP SPT=45812 DPT=9105 SEQ=1874585292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA4BD90000000001030307) 
Dec 05 09:25:52 np0005546420.localdomain python3.9[147802]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:25:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14208 DF PROTO=TCP SPT=40828 DPT=9100 SEQ=368522736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA57D90000000001030307) 
Dec 05 09:25:55 np0005546420.localdomain sudo[147800]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:56 np0005546420.localdomain sudo[147894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhpbvnualyuyznnfazsthmilixdwkrux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926755.973199-198-68167267323736/AnsiballZ_systemd.py
Dec 05 09:25:56 np0005546420.localdomain sudo[147894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:56 np0005546420.localdomain python3.9[147896]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:25:56 np0005546420.localdomain sudo[147894]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:57 np0005546420.localdomain sudo[147989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yanwxwafgtdmnujencrechwlsgnfzozv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764926757.0925505-222-267753811707451/AnsiballZ_edpm_nftables_snippet.py
Dec 05 09:25:57 np0005546420.localdomain sudo[147989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:57 np0005546420.localdomain python3[147991]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                                            rule:
                                                              proto: udp
                                                              dport: 4789
                                                          - rule_name: 119 neutron geneve networks
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              state: ["UNTRACKED"]
                                                          - rule_name: 120 neutron geneve networks no conntrack
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              table: raw
                                                              chain: OUTPUT
                                                              jump: NOTRACK
                                                              action: append
                                                              state: []
                                                          - rule_name: 121 neutron geneve networks no conntrack
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              table: raw
                                                              chain: PREROUTING
                                                              jump: NOTRACK
                                                              action: append
                                                              state: []
                                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 05 09:25:57 np0005546420.localdomain sudo[147989]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:58 np0005546420.localdomain sudo[148081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbflfqihzepkhlwcqsplmyrblclarzrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926758.028543-249-146285520090370/AnsiballZ_file.py
Dec 05 09:25:58 np0005546420.localdomain sudo[148081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:58 np0005546420.localdomain python3.9[148083]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:58 np0005546420.localdomain sudo[148081]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:59 np0005546420.localdomain sudo[148173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwurfzfmrkbtdqjrznoghcgwwufcdvsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926758.714035-273-113866829890330/AnsiballZ_stat.py
Dec 05 09:25:59 np0005546420.localdomain sudo[148173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:59 np0005546420.localdomain python3.9[148175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:25:59 np0005546420.localdomain sudo[148173]: pam_unix(sudo:session): session closed for user root
Dec 05 09:25:59 np0005546420.localdomain sudo[148221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvsqosljtynojociclmqcpmguacwsgpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926758.714035-273-113866829890330/AnsiballZ_file.py
Dec 05 09:25:59 np0005546420.localdomain sudo[148221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:25:59 np0005546420.localdomain python3.9[148223]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:25:59 np0005546420.localdomain sudo[148221]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:00 np0005546420.localdomain sudo[148313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bhfrwbeccpyryqbeoldfqfcxzzjclxqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926759.891147-309-56277427890352/AnsiballZ_stat.py
Dec 05 09:26:00 np0005546420.localdomain sudo[148313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:00 np0005546420.localdomain python3.9[148315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:00 np0005546420.localdomain sudo[148313]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52813 DF PROTO=TCP SPT=57306 DPT=9102 SEQ=1178284323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA6D9A0000000001030307) 
Dec 05 09:26:00 np0005546420.localdomain sudo[148361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozhbaufhbjecxaidksrxvsvdkljyyrqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926759.891147-309-56277427890352/AnsiballZ_file.py
Dec 05 09:26:00 np0005546420.localdomain sudo[148361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:00 np0005546420.localdomain python3.9[148363]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.m98wg_jg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:00 np0005546420.localdomain sudo[148361]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:01 np0005546420.localdomain sudo[148453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxscgqarmyjlzbwttlulenvlvsvfrdgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926760.9577477-345-140828568142517/AnsiballZ_stat.py
Dec 05 09:26:01 np0005546420.localdomain sudo[148453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:01 np0005546420.localdomain python3.9[148455]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:01 np0005546420.localdomain sudo[148453]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:01 np0005546420.localdomain sudo[148501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-khpnqtjpyosctdsmtvzkbnskfvjsgrnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926760.9577477-345-140828568142517/AnsiballZ_file.py
Dec 05 09:26:01 np0005546420.localdomain sudo[148501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:01 np0005546420.localdomain python3.9[148503]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:01 np0005546420.localdomain sudo[148501]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:02 np0005546420.localdomain sudo[148593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kiofynezpjbjtgjkmxgluluyqkfzkbmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926762.1558425-384-218613689810703/AnsiballZ_command.py
Dec 05 09:26:02 np0005546420.localdomain sudo[148593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19580 DF PROTO=TCP SPT=56288 DPT=9882 SEQ=3861867146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA75DA0000000001030307) 
Dec 05 09:26:02 np0005546420.localdomain python3.9[148595]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:26:02 np0005546420.localdomain sudo[148593]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:03 np0005546420.localdomain sudo[148686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqmwzyfxhrceqfxpfwwggbhueeshamwn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764926762.9196672-408-1947051989826/AnsiballZ_edpm_nftables_from_files.py
Dec 05 09:26:03 np0005546420.localdomain sudo[148686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:03 np0005546420.localdomain python3[148688]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 09:26:03 np0005546420.localdomain sudo[148686]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41693 DF PROTO=TCP SPT=45812 DPT=9105 SEQ=1874585292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA7BDA0000000001030307) 
Dec 05 09:26:04 np0005546420.localdomain sudo[148778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvrqmxihwujwxpnsgttaxyjsopxhufzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926764.5619414-432-272403910289124/AnsiballZ_stat.py
Dec 05 09:26:04 np0005546420.localdomain sudo[148778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:05 np0005546420.localdomain python3.9[148780]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:05 np0005546420.localdomain sudo[148778]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:06 np0005546420.localdomain sudo[148853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftydjuswoazhcmqeaeplxxmfvjfbmsjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926764.5619414-432-272403910289124/AnsiballZ_copy.py
Dec 05 09:26:06 np0005546420.localdomain sudo[148853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:06 np0005546420.localdomain python3.9[148855]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926764.5619414-432-272403910289124/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:06 np0005546420.localdomain sudo[148853]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28925 DF PROTO=TCP SPT=60494 DPT=9101 SEQ=3140822844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA85D90000000001030307) 
Dec 05 09:26:06 np0005546420.localdomain sudo[148945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cdrhhallmzxzsczajkobjtqinhcujdjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926766.4641871-477-124263758773818/AnsiballZ_stat.py
Dec 05 09:26:06 np0005546420.localdomain sudo[148945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:07 np0005546420.localdomain python3.9[148947]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:07 np0005546420.localdomain sudo[148945]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:07 np0005546420.localdomain sudo[149020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwfklepoohrudowrmhdiplgwkcltawac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926766.4641871-477-124263758773818/AnsiballZ_copy.py
Dec 05 09:26:07 np0005546420.localdomain sudo[149020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:07 np0005546420.localdomain python3.9[149022]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926766.4641871-477-124263758773818/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:07 np0005546420.localdomain sudo[149020]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:08 np0005546420.localdomain sudo[149112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nxkwmhbyrrzcsjjmhhgdjpcbkcnhmjwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926767.755084-522-265868551962976/AnsiballZ_stat.py
Dec 05 09:26:08 np0005546420.localdomain sudo[149112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:08 np0005546420.localdomain python3.9[149114]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:08 np0005546420.localdomain sudo[149112]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:08 np0005546420.localdomain sudo[149187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lizpnzxvsxgvwzliuytxqugniwcydtgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926767.755084-522-265868551962976/AnsiballZ_copy.py
Dec 05 09:26:08 np0005546420.localdomain sudo[149187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:08 np0005546420.localdomain python3.9[149189]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926767.755084-522-265868551962976/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:08 np0005546420.localdomain sudo[149187]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:09 np0005546420.localdomain sudo[149279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gakxvuixzljqdnmedkyliamkknubyptw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926769.0389729-567-11745389165604/AnsiballZ_stat.py
Dec 05 09:26:09 np0005546420.localdomain sudo[149279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:09 np0005546420.localdomain python3.9[149281]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:09 np0005546420.localdomain sudo[149279]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:09 np0005546420.localdomain sudo[149354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxijpsfdaqjzxopxjghparpyzzciuyek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926769.0389729-567-11745389165604/AnsiballZ_copy.py
Dec 05 09:26:09 np0005546420.localdomain sudo[149354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:10 np0005546420.localdomain python3.9[149356]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926769.0389729-567-11745389165604/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:10 np0005546420.localdomain sudo[149354]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:10 np0005546420.localdomain sudo[149446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-liwrtrmlpsgqcngyfwgabmutdcwtzhmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926770.3519924-612-164824452705869/AnsiballZ_stat.py
Dec 05 09:26:10 np0005546420.localdomain sudo[149446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9740 DF PROTO=TCP SPT=43826 DPT=9100 SEQ=1085690586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA955A0000000001030307) 
Dec 05 09:26:10 np0005546420.localdomain python3.9[149448]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:10 np0005546420.localdomain sudo[149446]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:11 np0005546420.localdomain sudo[149521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-npqwikrkzgvawwxcqvlewvldqpsyeihu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926770.3519924-612-164824452705869/AnsiballZ_copy.py
Dec 05 09:26:11 np0005546420.localdomain sudo[149521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:11 np0005546420.localdomain python3.9[149523]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926770.3519924-612-164824452705869/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:11 np0005546420.localdomain sudo[149521]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:11 np0005546420.localdomain sudo[149613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxlnxuxfbqisypnjhkramzuvotnexkfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926771.6179583-657-145838540978233/AnsiballZ_file.py
Dec 05 09:26:11 np0005546420.localdomain sudo[149613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:12 np0005546420.localdomain python3.9[149615]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:12 np0005546420.localdomain sudo[149613]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:12 np0005546420.localdomain sudo[149705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vgblkbbxrfzhxvucnhmmveqrszlaqhhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926772.276473-681-24361050272451/AnsiballZ_command.py
Dec 05 09:26:12 np0005546420.localdomain sudo[149705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:12 np0005546420.localdomain python3.9[149707]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:26:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9741 DF PROTO=TCP SPT=43826 DPT=9100 SEQ=1085690586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABA9D590000000001030307) 
Dec 05 09:26:12 np0005546420.localdomain sudo[149705]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:13 np0005546420.localdomain sudo[149800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsrtltvmvcvubxtatbscfevsbmrxjope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926772.9238398-705-105807333762664/AnsiballZ_blockinfile.py
Dec 05 09:26:13 np0005546420.localdomain sudo[149800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:13 np0005546420.localdomain python3.9[149802]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:13 np0005546420.localdomain sudo[149800]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:14 np0005546420.localdomain sudo[149892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdkxqutgketezbtiouhnnoaeagcunrzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926774.4450376-732-140010390894521/AnsiballZ_command.py
Dec 05 09:26:14 np0005546420.localdomain sudo[149892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:14 np0005546420.localdomain python3.9[149894]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:26:14 np0005546420.localdomain sudo[149892]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:15 np0005546420.localdomain sudo[149985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgnidixmpptcbzavgcemtvzkefoyhmnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926775.0792868-756-184960545897914/AnsiballZ_stat.py
Dec 05 09:26:15 np0005546420.localdomain sudo[149985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:15 np0005546420.localdomain python3.9[149987]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:26:15 np0005546420.localdomain sudo[149985]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:16 np0005546420.localdomain sudo[150079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nemrgyncjssiomzzldoaaurhyuquiomb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926775.7056968-780-43448281351485/AnsiballZ_command.py
Dec 05 09:26:16 np0005546420.localdomain sudo[150079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:16 np0005546420.localdomain python3.9[150081]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:26:16 np0005546420.localdomain sudo[150079]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9742 DF PROTO=TCP SPT=43826 DPT=9100 SEQ=1085690586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABAAD190000000001030307) 
Dec 05 09:26:17 np0005546420.localdomain sudo[150174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mbtbyaqnkrfakliyxowlvodwffcoxaml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926776.8467264-804-257286436616143/AnsiballZ_file.py
Dec 05 09:26:17 np0005546420.localdomain sudo[150174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:17 np0005546420.localdomain python3.9[150176]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:17 np0005546420.localdomain sudo[150174]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:18 np0005546420.localdomain python3.9[150266]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:26:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26055 DF PROTO=TCP SPT=43138 DPT=9105 SEQ=2654720317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABAB5110000000001030307) 
Dec 05 09:26:19 np0005546420.localdomain sudo[150357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myfjqgdrwezdenujduvkazbytvqmlhbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926779.1787746-924-16437064623296/AnsiballZ_command.py
Dec 05 09:26:19 np0005546420.localdomain sudo[150357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:19 np0005546420.localdomain python3.9[150359]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005546420.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:12:08:8a:a9" external_ids:ovn-encap-ip=172.19.0.107 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:26:19 np0005546420.localdomain ovs-vsctl[150360]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005546420.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:12:08:8a:a9 external_ids:ovn-encap-ip=172.19.0.107 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 05 09:26:19 np0005546420.localdomain sudo[150357]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:20 np0005546420.localdomain sudo[150450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vuendintyneujxtramacgpsejeehkewi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926779.9585721-951-94985376444679/AnsiballZ_command.py
Dec 05 09:26:20 np0005546420.localdomain sudo[150450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:20 np0005546420.localdomain python3.9[150452]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                                            ovs-vsctl show | grep -q "Manager"
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:26:20 np0005546420.localdomain sudo[150450]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:21 np0005546420.localdomain python3.9[150545]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:26:21 np0005546420.localdomain sudo[150621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:26:21 np0005546420.localdomain sudo[150621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:26:21 np0005546420.localdomain sudo[150621]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:21 np0005546420.localdomain sudo[150651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aysvhzqgzsngmhlmcgrqjocchovveggf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926781.3804157-1005-11826430089048/AnsiballZ_file.py
Dec 05 09:26:21 np0005546420.localdomain sudo[150651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:21 np0005546420.localdomain sudo[150655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:26:21 np0005546420.localdomain sudo[150655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:26:21 np0005546420.localdomain python3.9[150654]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:26:21 np0005546420.localdomain sudo[150651]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26057 DF PROTO=TCP SPT=43138 DPT=9105 SEQ=2654720317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABAC1190000000001030307) 
Dec 05 09:26:22 np0005546420.localdomain sudo[150778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqyxuizvhtxcheubqnhlufxlcwzwosjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926782.0568929-1029-228948428417275/AnsiballZ_stat.py
Dec 05 09:26:22 np0005546420.localdomain sudo[150778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:22 np0005546420.localdomain sudo[150655]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:22 np0005546420.localdomain python3.9[150784]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:22 np0005546420.localdomain sudo[150778]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:22 np0005546420.localdomain sudo[150838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wyfsermgembyacrkxrvzyydltjikpuvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926782.0568929-1029-228948428417275/AnsiballZ_file.py
Dec 05 09:26:22 np0005546420.localdomain sudo[150838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:22 np0005546420.localdomain sudo[150841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:26:22 np0005546420.localdomain sudo[150841]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:26:22 np0005546420.localdomain sudo[150841]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:22 np0005546420.localdomain python3.9[150840]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:26:22 np0005546420.localdomain sudo[150838]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:23 np0005546420.localdomain sudo[150945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbwtavrqcrmzpcdxfdsgfujieqychyxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926783.0818963-1029-88547768939842/AnsiballZ_stat.py
Dec 05 09:26:23 np0005546420.localdomain sudo[150945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:23 np0005546420.localdomain python3.9[150947]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:23 np0005546420.localdomain sudo[150945]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:23 np0005546420.localdomain sudo[150993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mgtmcqlqgjodibipvmgsxnptbfipggqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926783.0818963-1029-88547768939842/AnsiballZ_file.py
Dec 05 09:26:23 np0005546420.localdomain sudo[150993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:23 np0005546420.localdomain python3.9[150995]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:26:23 np0005546420.localdomain sudo[150993]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:24 np0005546420.localdomain sudo[151085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnyroljujtgwhkzsdkpzxtzkewolurzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926784.6537015-1098-243376502433977/AnsiballZ_file.py
Dec 05 09:26:24 np0005546420.localdomain sudo[151085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:25 np0005546420.localdomain python3.9[151087]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:25 np0005546420.localdomain sudo[151085]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9743 DF PROTO=TCP SPT=43826 DPT=9100 SEQ=1085690586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABACDD90000000001030307) 
Dec 05 09:26:25 np0005546420.localdomain sudo[151177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdyxfcwawsyhpkvxubetnosktxlazfwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926785.2986763-1122-111551516034785/AnsiballZ_stat.py
Dec 05 09:26:25 np0005546420.localdomain sudo[151177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:25 np0005546420.localdomain python3.9[151179]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:25 np0005546420.localdomain sudo[151177]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:25 np0005546420.localdomain sudo[151225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uhrscqkalvzangzbhjeqidlamxyrjytx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926785.2986763-1122-111551516034785/AnsiballZ_file.py
Dec 05 09:26:25 np0005546420.localdomain sudo[151225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:26 np0005546420.localdomain python3.9[151227]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:26 np0005546420.localdomain sudo[151225]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:27 np0005546420.localdomain sudo[151317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qglhqklgsriarmgrlttpcgttijlcgrvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926787.1104646-1158-16770774746158/AnsiballZ_stat.py
Dec 05 09:26:27 np0005546420.localdomain sudo[151317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:27 np0005546420.localdomain python3.9[151319]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:27 np0005546420.localdomain sudo[151317]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:27 np0005546420.localdomain sudo[151365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzaksgbheeggvectyjnnrnkuxojcqgye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926787.1104646-1158-16770774746158/AnsiballZ_file.py
Dec 05 09:26:27 np0005546420.localdomain sudo[151365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:28 np0005546420.localdomain python3.9[151367]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:28 np0005546420.localdomain sudo[151365]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:28 np0005546420.localdomain sudo[151457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cazfunowzlixrvbwzvqpxqpacyjwcxrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926788.256787-1194-32166263323174/AnsiballZ_systemd.py
Dec 05 09:26:28 np0005546420.localdomain sudo[151457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:28 np0005546420.localdomain python3.9[151459]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:26:28 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:26:28 np0005546420.localdomain systemd-rc-local-generator[151483]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:26:28 np0005546420.localdomain systemd-sysv-generator[151489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:26:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:26:30 np0005546420.localdomain sudo[151457]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44219 DF PROTO=TCP SPT=56932 DPT=9102 SEQ=2438337327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABAE2990000000001030307) 
Dec 05 09:26:30 np0005546420.localdomain sudo[151587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jupblzzsdtqnepizjazjqcgbtsftxbgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926790.359325-1218-51359806592678/AnsiballZ_stat.py
Dec 05 09:26:30 np0005546420.localdomain sudo[151587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:30 np0005546420.localdomain python3.9[151589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:30 np0005546420.localdomain sudo[151587]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:31 np0005546420.localdomain sudo[151635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-skisuhtxsipgzxyozefkuqiiejepebam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926790.359325-1218-51359806592678/AnsiballZ_file.py
Dec 05 09:26:31 np0005546420.localdomain sudo[151635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:31 np0005546420.localdomain python3.9[151637]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:31 np0005546420.localdomain sudo[151635]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:31 np0005546420.localdomain sudo[151727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pwqezvvlidqxqvyaqptlpcjujvemmocp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926791.4280615-1254-1934196967151/AnsiballZ_stat.py
Dec 05 09:26:31 np0005546420.localdomain sudo[151727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:31 np0005546420.localdomain python3.9[151729]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:31 np0005546420.localdomain sudo[151727]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:32 np0005546420.localdomain sudo[151775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcfszfkthdnnprirpkrcpqtxoygrjkzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926791.4280615-1254-1934196967151/AnsiballZ_file.py
Dec 05 09:26:32 np0005546420.localdomain sudo[151775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:32 np0005546420.localdomain python3.9[151777]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:32 np0005546420.localdomain sudo[151775]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2857 DF PROTO=TCP SPT=54290 DPT=9882 SEQ=3088669944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABAE9DA0000000001030307) 
Dec 05 09:26:32 np0005546420.localdomain sudo[151867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vkfihyuxfvqqokmjefjijzatrwwtdhsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926792.5151558-1290-182431237653504/AnsiballZ_systemd.py
Dec 05 09:26:32 np0005546420.localdomain sudo[151867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:33 np0005546420.localdomain python3.9[151869]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:26:33 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:26:33 np0005546420.localdomain systemd-rc-local-generator[151895]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:26:33 np0005546420.localdomain systemd-sysv-generator[151899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:26:33 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:26:33 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 09:26:33 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:26:33 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:26:33 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 09:26:33 np0005546420.localdomain sudo[151867]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:34 np0005546420.localdomain sudo[152000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxhochlzwvplvaqgfowlaxqzdewvysiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926793.8533561-1320-41177107645303/AnsiballZ_file.py
Dec 05 09:26:34 np0005546420.localdomain sudo[152000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:34 np0005546420.localdomain python3.9[152002]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:26:34 np0005546420.localdomain sudo[152000]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26059 DF PROTO=TCP SPT=43138 DPT=9105 SEQ=2654720317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABAF1DA0000000001030307) 
Dec 05 09:26:34 np0005546420.localdomain sudo[152092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frrdtgswftjpeodieicnpucsylarqmfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926794.491793-1344-76970814512444/AnsiballZ_stat.py
Dec 05 09:26:34 np0005546420.localdomain sudo[152092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:34 np0005546420.localdomain python3.9[152094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:34 np0005546420.localdomain sudo[152092]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:35 np0005546420.localdomain sudo[152165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uiqxmsvjkkhclkuzjnldzvievvvpyfqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926794.491793-1344-76970814512444/AnsiballZ_copy.py
Dec 05 09:26:35 np0005546420.localdomain sudo[152165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:36 np0005546420.localdomain python3.9[152167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926794.491793-1344-76970814512444/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:26:36 np0005546420.localdomain sudo[152165]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:36 np0005546420.localdomain sudo[152257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrofmmkwsyuelrhhwktnztcyinhhakia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926796.5807426-1395-142429068229004/AnsiballZ_file.py
Dec 05 09:26:36 np0005546420.localdomain sudo[152257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3804 DF PROTO=TCP SPT=51720 DPT=9101 SEQ=2914233395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABAFBD90000000001030307) 
Dec 05 09:26:37 np0005546420.localdomain python3.9[152259]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:26:37 np0005546420.localdomain sudo[152257]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:37 np0005546420.localdomain sudo[152349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-skyfzsolyjzauhvukvrwrocvwyhrggid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926797.3863363-1419-154965383198916/AnsiballZ_stat.py
Dec 05 09:26:37 np0005546420.localdomain sudo[152349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:37 np0005546420.localdomain python3.9[152351]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:26:37 np0005546420.localdomain sudo[152349]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:38 np0005546420.localdomain sudo[152424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvxfiebjcrbcfybisqqlrhlskeaxkhof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926797.3863363-1419-154965383198916/AnsiballZ_copy.py
Dec 05 09:26:38 np0005546420.localdomain sudo[152424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:38 np0005546420.localdomain python3.9[152426]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926797.3863363-1419-154965383198916/.source.json _original_basename=.nve51up4 follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:38 np0005546420.localdomain sudo[152424]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:38 np0005546420.localdomain sudo[152516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdupfrnhunfqryzfmizytpwizbfilqwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926798.5413992-1464-225126458203436/AnsiballZ_file.py
Dec 05 09:26:38 np0005546420.localdomain sudo[152516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:39 np0005546420.localdomain python3.9[152518]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:39 np0005546420.localdomain sudo[152516]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:39 np0005546420.localdomain sudo[152608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbseejtsewidbrgjlybzeeqhzrccohug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926799.2515266-1488-147230068094775/AnsiballZ_stat.py
Dec 05 09:26:39 np0005546420.localdomain sudo[152608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:39 np0005546420.localdomain sudo[152608]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:40 np0005546420.localdomain sudo[152681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ekdzxprjojviffhfodddbybqnibvrzgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926799.2515266-1488-147230068094775/AnsiballZ_copy.py
Dec 05 09:26:40 np0005546420.localdomain sudo[152681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:40 np0005546420.localdomain sudo[152681]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50904 DF PROTO=TCP SPT=49588 DPT=9100 SEQ=2961270254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB0A990000000001030307) 
Dec 05 09:26:41 np0005546420.localdomain sudo[152773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ujvhbwlirdihnbhxfjokphvyyphiifgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926800.6177533-1539-50127094853983/AnsiballZ_container_config_data.py
Dec 05 09:26:41 np0005546420.localdomain sudo[152773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:41 np0005546420.localdomain python3.9[152775]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 05 09:26:41 np0005546420.localdomain sudo[152773]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:41 np0005546420.localdomain sudo[152865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wdgkvqsrmqjwpgdkkqllockftynqyrax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926801.538003-1566-80117958244437/AnsiballZ_container_config_hash.py
Dec 05 09:26:41 np0005546420.localdomain sudo[152865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:42 np0005546420.localdomain python3.9[152867]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:26:42 np0005546420.localdomain sudo[152865]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50905 DF PROTO=TCP SPT=49588 DPT=9100 SEQ=2961270254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB12990000000001030307) 
Dec 05 09:26:42 np0005546420.localdomain sudo[152957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-faoaisumdukmlrsgmnmfeleqlubzdluj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926802.3952942-1593-258769375654819/AnsiballZ_podman_container_info.py
Dec 05 09:26:42 np0005546420.localdomain sudo[152957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:43 np0005546420.localdomain python3.9[152959]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 09:26:43 np0005546420.localdomain sudo[152957]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50906 DF PROTO=TCP SPT=49588 DPT=9100 SEQ=2961270254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB22590000000001030307) 
Dec 05 09:26:46 np0005546420.localdomain sudo[153075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frnytnrnmonqqpmmkdnoijaqgwhszurt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764926806.3760273-1632-18981646209860/AnsiballZ_edpm_container_manage.py
Dec 05 09:26:46 np0005546420.localdomain sudo[153075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:47 np0005546420.localdomain python3[153077]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:26:47 np0005546420.localdomain python3[153077]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",
                                                                    "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:38:47.246477714Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 345722821,
                                                                    "VirtualSize": 345722821,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",
                                                                              "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:22.759131427Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:13:25.258260855Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openvswitch openvswitch-ovn-common python3-netifaces python3-openvswitch tcpdump && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:13:28.025145079Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:38:13.535675197Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:38:47.244104142Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openvswitch-ovn-host && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:38:48.759416475Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 05 09:26:47 np0005546420.localdomain podman[153127]: 2025-12-05 09:26:47.559316488 +0000 UTC m=+0.091739836 container remove 1e838ec5cb4266d535f2c11cda1958ff41b26b32ba3a8c65e082a6e01507eadb (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 05 09:26:47 np0005546420.localdomain python3[153077]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller
Dec 05 09:26:47 np0005546420.localdomain podman[153141]: 
Dec 05 09:26:47 np0005546420.localdomain podman[153141]: 2025-12-05 09:26:47.664791567 +0000 UTC m=+0.087126408 container create d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 05 09:26:47 np0005546420.localdomain podman[153141]: 2025-12-05 09:26:47.622365487 +0000 UTC m=+0.044700378 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 05 09:26:47 np0005546420.localdomain python3[153077]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 05 09:26:47 np0005546420.localdomain sudo[153075]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2858 DF PROTO=TCP SPT=54290 DPT=9882 SEQ=3088669944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB29DA0000000001030307) 
Dec 05 09:26:48 np0005546420.localdomain sudo[153266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fbsbcfbnlcbvkmirderjjpjqnhleyljl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926808.6469307-1656-83557854946715/AnsiballZ_stat.py
Dec 05 09:26:48 np0005546420.localdomain sudo[153266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:49 np0005546420.localdomain python3.9[153268]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:26:49 np0005546420.localdomain sudo[153266]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:49 np0005546420.localdomain sudo[153360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gdxinauzuegpmjjgadgqejexgkezttwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926809.4020052-1683-144403476484694/AnsiballZ_file.py
Dec 05 09:26:49 np0005546420.localdomain sudo[153360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:49 np0005546420.localdomain python3.9[153362]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:49 np0005546420.localdomain sudo[153360]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:50 np0005546420.localdomain sudo[153406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-muueqsvsmpsofumtojewofunuicuuqte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926809.4020052-1683-144403476484694/AnsiballZ_stat.py
Dec 05 09:26:50 np0005546420.localdomain sudo[153406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:50 np0005546420.localdomain python3.9[153408]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:26:50 np0005546420.localdomain sudo[153406]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:50 np0005546420.localdomain sudo[153497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lngsbaiobntromhfaewngrivvymtosqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926810.3411791-1683-209524570296941/AnsiballZ_copy.py
Dec 05 09:26:50 np0005546420.localdomain sudo[153497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:50 np0005546420.localdomain python3.9[153499]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764926810.3411791-1683-209524570296941/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:26:50 np0005546420.localdomain sudo[153497]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:51 np0005546420.localdomain sudo[153543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-erzfwrjaydubfffdvgptopjzrqdrfftp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926810.3411791-1683-209524570296941/AnsiballZ_systemd.py
Dec 05 09:26:51 np0005546420.localdomain sudo[153543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:51 np0005546420.localdomain python3.9[153545]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:26:51 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:26:51 np0005546420.localdomain systemd-rc-local-generator[153568]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:26:51 np0005546420.localdomain systemd-sysv-generator[153575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:26:51 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:26:51 np0005546420.localdomain sudo[153543]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46339 DF PROTO=TCP SPT=44730 DPT=9105 SEQ=1948378998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB36590000000001030307) 
Dec 05 09:26:52 np0005546420.localdomain sudo[153625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmelftxhimveecfeesgowecolquspend ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926810.3411791-1683-209524570296941/AnsiballZ_systemd.py
Dec 05 09:26:52 np0005546420.localdomain sudo[153625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:52 np0005546420.localdomain python3.9[153627]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:26:53 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:26:53 np0005546420.localdomain systemd-sysv-generator[153656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:26:53 np0005546420.localdomain systemd-rc-local-generator[153652]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:26:53 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:26:53 np0005546420.localdomain systemd[1]: Starting ovn_controller container...
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:26:54 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6996a39e80c4b40f84c755e1576eaea209f23ceb95551327feab906478118243/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:26:54 np0005546420.localdomain podman[153669]: 2025-12-05 09:26:54.097814994 +0000 UTC m=+0.183705567 container init d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: + sudo -E kolla_set_configs
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:26:54 np0005546420.localdomain podman[153669]: 2025-12-05 09:26:54.144719439 +0000 UTC m=+0.230609992 container start d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:26:54 np0005546420.localdomain edpm-start-podman-container[153669]: ovn_controller
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 05 09:26:54 np0005546420.localdomain edpm-start-podman-container[153668]: Creating additional drop-in dependency for "ovn_controller" (d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0)
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:26:54 np0005546420.localdomain podman[153691]: 2025-12-05 09:26:54.307744104 +0000 UTC m=+0.154713273 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:26:54 np0005546420.localdomain podman[153691]: 2025-12-05 09:26:54.319036354 +0000 UTC m=+0.166005513 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:26:54 np0005546420.localdomain podman[153691]: unhealthy
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Queued start job for default target Main User Target.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Created slice User Application Slice.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Reached target Paths.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Reached target Timers.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Starting D-Bus User Message Bus Socket...
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Starting Create User's Volatile Files and Directories...
Dec 05 09:26:54 np0005546420.localdomain systemd-rc-local-generator[153771]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:26:54 np0005546420.localdomain systemd-sysv-generator[153774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Listening on D-Bus User Message Bus Socket.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Reached target Sockets.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Finished Create User's Volatile Files and Directories.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Reached target Basic System.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Reached target Main User Target.
Dec 05 09:26:54 np0005546420.localdomain systemd[153714]: Startup finished in 137ms.
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
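(systemd flags the deprecated MemoryLimit= directive in insights-client-boot.service above. A minimal sketch of a drop-in that migrates it to MemoryMax=; the 1G cap is a placeholder, since the unit's real value is not shown in this log:

    mkdir -p /etc/systemd/system/insights-client-boot.service.d
    cat >/etc/systemd/system/insights-client-boot.service.d/10-memorymax.conf <<'EOF'
    [Service]
    # clear the deprecated setting, then set the cgroup v2 equivalent
    MemoryLimit=
    MemoryMax=1G
    EOF
    systemctl daemon-reload
)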
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Started User Manager for UID 0.
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Started ovn_controller container.
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Failed with result 'exit-code'.
Dec 05 09:26:54 np0005546420.localdomain systemd-journald[48245]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation.
Dec 05 09:26:54 np0005546420.localdomain systemd-journald[48245]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
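(journald rotates here because the field hash table of the runtime journal is about 76% full. Two stock journalctl commands for checking journal pressure, using the file path from the lines above:

    journalctl --disk-usage
    # dump header counters (including hash-table fill) for the active runtime journal
    journalctl --header --file /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal
)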
Dec 05 09:26:54 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:26:54 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Started Session c11 of User root.
Dec 05 09:26:54 np0005546420.localdomain sudo[153625]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: INFO:__main__:Validating config file
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: INFO:__main__:Writing out command to execute
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: session-c11.scope: Deactivated successfully.
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: ++ cat /run_command
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: + ARGS=
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: + sudo kolla_copy_cacerts
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: Started Session c12 of User root.
Dec 05 09:26:54 np0005546420.localdomain systemd[1]: session-c12.scope: Deactivated successfully.
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: + [[ ! -n '' ]]
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: + . kolla_extend_start
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\'''
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: + umask 0022
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00013|main|INFO|OVS feature set changed, force recompute.
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00018|main|INFO|OVS feature set changed, force recompute.
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 09:26:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:26:54Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 09:26:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50907 DF PROTO=TCP SPT=49588 DPT=9100 SEQ=2961270254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB41D90000000001030307) 
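(The DROPPING: lines throughout this section are kernel LOG-target output for SYN packets to scrape ports (9100-9105, 9882) arriving on br-ex. The rule set itself is not in this log; a hedged reconstruction of the kind of iptables pair that would produce this prefix and then drop the packet:

    # log with the observed prefix, then drop; interface and ports taken from the log lines
    iptables -A INPUT -i br-ex -p tcp --syn -m multiport --dports 9100:9105,9882 -j LOG --log-prefix "DROPPING: "
    iptables -A INPUT -i br-ex -p tcp --syn -m multiport --dports 9100:9105,9882 -j DROP
)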
Dec 05 09:26:55 np0005546420.localdomain sudo[153885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yltxxhlbolyxulgkmuxpcwkqhdlwimym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926814.7894363-1767-80870264934310/AnsiballZ_command.py
Dec 05 09:26:55 np0005546420.localdomain sudo[153885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:55 np0005546420.localdomain python3.9[153887]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:26:55 np0005546420.localdomain ovs-vsctl[153888]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 05 09:26:55 np0005546420.localdomain sudo[153885]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:55 np0005546420.localdomain sudo[153978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzcsifefvmeffuuvgbsdumcanxspllxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926815.5314996-1791-220525491784042/AnsiballZ_command.py
Dec 05 09:26:55 np0005546420.localdomain sudo[153978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:55 np0005546420.localdomain python3.9[153980]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:26:55 np0005546420.localdomain ovs-vsctl[153982]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 05 09:26:55 np0005546420.localdomain sudo[153978]: pam_unix(sudo:session): session closed for user root
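(The db_ctl_base error above is expected when the key was never set: a bare get on a missing external_ids key aborts. A minimal sketch that probes and clears the key without tripping the error, using ovs-vsctl's --if-exists flag:

    # returns an empty string instead of erroring when the key is absent
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options
    ovs-vsctl --if-exists remove Open_vSwitch . external_ids ovn-cms-options
)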
Dec 05 09:26:57 np0005546420.localdomain sudo[154073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-elvetefdtvngvpmrnoewwiemfqpfxsaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926817.0435903-1833-9211845076669/AnsiballZ_command.py
Dec 05 09:26:57 np0005546420.localdomain sudo[154073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:26:57 np0005546420.localdomain python3.9[154075]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:26:57 np0005546420.localdomain ovs-vsctl[154076]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 05 09:26:57 np0005546420.localdomain sudo[154073]: pam_unix(sudo:session): session closed for user root
Dec 05 09:26:58 np0005546420.localdomain sshd[147193]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:26:58 np0005546420.localdomain systemd[1]: session-49.scope: Deactivated successfully.
Dec 05 09:26:58 np0005546420.localdomain systemd[1]: session-49.scope: Consumed 42.259s CPU time.
Dec 05 09:26:58 np0005546420.localdomain systemd-logind[762]: Session 49 logged out. Waiting for processes to exit.
Dec 05 09:26:58 np0005546420.localdomain systemd-logind[762]: Removed session 49.
Dec 05 09:27:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11940 DF PROTO=TCP SPT=60368 DPT=9102 SEQ=3732110048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB57D90000000001030307) 
Dec 05 09:27:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6488 DF PROTO=TCP SPT=60862 DPT=9882 SEQ=3834190432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB5FD90000000001030307) 
Dec 05 09:27:03 np0005546420.localdomain sshd[154091]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:27:03 np0005546420.localdomain sshd[154091]: Accepted publickey for zuul from 192.168.122.31 port 52070 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:27:03 np0005546420.localdomain systemd-logind[762]: New session 51 of user zuul.
Dec 05 09:27:03 np0005546420.localdomain systemd[1]: Started Session 51 of User zuul.
Dec 05 09:27:03 np0005546420.localdomain sshd[154091]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:27:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46341 DF PROTO=TCP SPT=44730 DPT=9105 SEQ=1948378998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB65D90000000001030307) 
Dec 05 09:27:04 np0005546420.localdomain python3.9[154184]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:27:04 np0005546420.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Activating special unit Exit the Session...
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Stopped target Main User Target.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Stopped target Basic System.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Stopped target Paths.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Stopped target Sockets.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Stopped target Timers.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Closed D-Bus User Message Bus Socket.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Stopped Create User's Volatile Files and Directories.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Removed slice User Application Slice.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Reached target Shutdown.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Finished Exit the Session.
Dec 05 09:27:04 np0005546420.localdomain systemd[153714]: Reached target Exit the Session.
Dec 05 09:27:04 np0005546420.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 05 09:27:04 np0005546420.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 05 09:27:04 np0005546420.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 05 09:27:04 np0005546420.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 05 09:27:04 np0005546420.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 05 09:27:04 np0005546420.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 05 09:27:04 np0005546420.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 05 09:27:05 np0005546420.localdomain sudo[154282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmcpwgitpnoitsiaprgafzxvungbmrgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926825.179204-63-200660005981224/AnsiballZ_file.py
Dec 05 09:27:05 np0005546420.localdomain sudo[154282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:05 np0005546420.localdomain python3.9[154284]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:05 np0005546420.localdomain sudo[154282]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:06 np0005546420.localdomain sudo[154374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uerrvdxohkfcvttqvmkdodwexzmraths ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926825.9387941-63-222040212013945/AnsiballZ_file.py
Dec 05 09:27:06 np0005546420.localdomain sudo[154374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:06 np0005546420.localdomain python3.9[154376]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:06 np0005546420.localdomain sudo[154374]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:06 np0005546420.localdomain sudo[154466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pzwtnwtgesmsbcdsriallirubhydnrdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926826.5234385-63-5422827887890/AnsiballZ_file.py
Dec 05 09:27:06 np0005546420.localdomain sudo[154466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:06 np0005546420.localdomain python3.9[154468]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:07 np0005546420.localdomain sudo[154466]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59454 DF PROTO=TCP SPT=46498 DPT=9101 SEQ=1357317472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB71D90000000001030307) 
Dec 05 09:27:07 np0005546420.localdomain sudo[154558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-donpldsdmcewiixathokscygodvbzoqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926827.141441-63-231538579241969/AnsiballZ_file.py
Dec 05 09:27:07 np0005546420.localdomain sudo[154558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:08 np0005546420.localdomain python3.9[154560]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:08 np0005546420.localdomain sudo[154558]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:08 np0005546420.localdomain sudo[154650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltgyuzstobiiovofmmjjnmiqzqhzwdjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926828.21873-63-12072466979399/AnsiballZ_file.py
Dec 05 09:27:08 np0005546420.localdomain sudo[154650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:08 np0005546420.localdomain python3.9[154652]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:08 np0005546420.localdomain sudo[154650]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:09 np0005546420.localdomain python3.9[154742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:27:10 np0005546420.localdomain sudo[154832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yaapalwcgbpulwvsgdktetikwykivwnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926829.7722967-195-50038586621223/AnsiballZ_seboolean.py
Dec 05 09:27:10 np0005546420.localdomain sudo[154832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:10 np0005546420.localdomain python3.9[154834]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 05 09:27:10 np0005546420.localdomain sudo[154832]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16265 DF PROTO=TCP SPT=41988 DPT=9100 SEQ=2865149807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB7F9A0000000001030307) 
Dec 05 09:27:11 np0005546420.localdomain python3.9[154924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:11 np0005546420.localdomain python3.9[154997]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926830.786702-219-215338068305680/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16266 DF PROTO=TCP SPT=41988 DPT=9100 SEQ=2865149807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB87990000000001030307) 
Dec 05 09:27:13 np0005546420.localdomain python3.9[155088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:14 np0005546420.localdomain python3.9[155161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926833.115377-264-221917679226671/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
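(Both generated helper scripts record SHA-1 checksums in the copy tasks above; a one-liner to verify the deployed files against them:

    # expected: 95c62e64... for the wrapper, 2dfb5489... for the kill script (values from the log)
    sha1sum /var/lib/neutron/ovn_metadata_haproxy_wrapper /var/lib/neutron/kill_scripts/haproxy-kill
)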
Dec 05 09:27:14 np0005546420.localdomain sudo[155251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdpfcdhudourhylctqfgrblakvisajgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926834.4824727-315-144904755541346/AnsiballZ_setup.py
Dec 05 09:27:14 np0005546420.localdomain sudo[155251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:15 np0005546420.localdomain python3.9[155253]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:27:15 np0005546420.localdomain sudo[155251]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:15 np0005546420.localdomain sudo[155305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nuxwwyyvhypnjguhruxlpsaadpvfnprw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926834.4824727-315-144904755541346/AnsiballZ_dnf.py
Dec 05 09:27:15 np0005546420.localdomain sudo[155305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:16 np0005546420.localdomain python3.9[155307]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:27:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16267 DF PROTO=TCP SPT=41988 DPT=9100 SEQ=2865149807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB975B0000000001030307) 
Dec 05 09:27:17 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:27:17Z|00023|memory|INFO|13108 kB peak resident set size after 22.9 seconds
Dec 05 09:27:17 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T09:27:17Z|00024|memory|INFO|idl-cells-OVN_Southbound:4028 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:10 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3
Dec 05 09:27:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57273 DF PROTO=TCP SPT=45768 DPT=9105 SEQ=4935835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABB9F700000000001030307) 
Dec 05 09:27:19 np0005546420.localdomain sudo[155305]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:20 np0005546420.localdomain sudo[155399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-unmjzqxbvpdswhsptxqzuzsrtzmcpeta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926839.6692357-351-245377793773959/AnsiballZ_systemd.py
Dec 05 09:27:20 np0005546420.localdomain sudo[155399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:20 np0005546420.localdomain python3.9[155401]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:27:20 np0005546420.localdomain sudo[155399]: pam_unix(sudo:session): session closed for user root
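(The ansible-ansible.builtin.systemd task above (enabled=True, state=started) is equivalent to a single systemctl call on the host:

    # enable at boot and start now, as the module parameters request
    systemctl enable --now openvswitch.service
)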
Dec 05 09:27:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57275 DF PROTO=TCP SPT=45768 DPT=9105 SEQ=4935835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABBAB590000000001030307) 
Dec 05 09:27:22 np0005546420.localdomain python3.9[155494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:22 np0005546420.localdomain python3.9[155565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926841.8245907-375-270149327679817/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:23 np0005546420.localdomain sudo[155656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:27:23 np0005546420.localdomain sudo[155656]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:27:23 np0005546420.localdomain sudo[155656]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:23 np0005546420.localdomain sudo[155671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:27:23 np0005546420.localdomain python3.9[155655]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:23 np0005546420.localdomain sudo[155671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:27:23 np0005546420.localdomain python3.9[155777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926842.7985563-375-130375942169220/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:23 np0005546420.localdomain podman[155843]: 2025-12-05 09:27:23.937798203 +0000 UTC m=+0.074383676 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vcs-type=git, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:27:24 np0005546420.localdomain podman[155843]: 2025-12-05 09:27:24.017938007 +0000 UTC m=+0.154523440 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, ceph=True, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., RELEASE=main, build-date=2025-11-26T19:44:28Z)
Dec 05 09:27:24 np0005546420.localdomain sudo[155671]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:24 np0005546420.localdomain sudo[155911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:27:24 np0005546420.localdomain sudo[155911]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:27:24 np0005546420.localdomain sudo[155911]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:24 np0005546420.localdomain sudo[155939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:27:24 np0005546420.localdomain sudo[155939]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:27:24 np0005546420.localdomain python3.9[156016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11942 DF PROTO=TCP SPT=60368 DPT=9102 SEQ=3732110048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABBB7DA0000000001030307) 
Dec 05 09:27:25 np0005546420.localdomain sudo[155939]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:25 np0005546420.localdomain python3.9[156116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926844.5078764-507-27844756329628/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:27:25 np0005546420.localdomain podman[156145]: 2025-12-05 09:27:25.519505111 +0000 UTC m=+0.085667131 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:27:25 np0005546420.localdomain podman[156145]: 2025-12-05 09:27:25.602771203 +0000 UTC m=+0.168933173 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 09:27:25 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:27:25 np0005546420.localdomain sudo[156234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:27:25 np0005546420.localdomain sudo[156234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:27:25 np0005546420.localdomain sudo[156234]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:25 np0005546420.localdomain python3.9[156233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:26 np0005546420.localdomain python3.9[156319]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926845.434313-507-216889327017106/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:27 np0005546420.localdomain python3.9[156409]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:27:27 np0005546420.localdomain sudo[156501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfyurkoytmnfprhxpkmsvlmmsfrllmnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926847.4938095-621-239156652926663/AnsiballZ_file.py
Dec 05 09:27:27 np0005546420.localdomain sudo[156501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:27 np0005546420.localdomain python3.9[156503]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:27 np0005546420.localdomain sudo[156501]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:28 np0005546420.localdomain sudo[156593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csphbbgazepcteancvuiwqcqgrtyiovr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926848.1583104-645-16513160449534/AnsiballZ_stat.py
Dec 05 09:27:28 np0005546420.localdomain sudo[156593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:28 np0005546420.localdomain python3.9[156595]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:28 np0005546420.localdomain sudo[156593]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:28 np0005546420.localdomain sudo[156641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kafdtznoflxtarocqefoygcfrygpzaxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926848.1583104-645-16513160449534/AnsiballZ_file.py
Dec 05 09:27:28 np0005546420.localdomain sudo[156641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:29 np0005546420.localdomain python3.9[156643]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:29 np0005546420.localdomain sudo[156641]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:29 np0005546420.localdomain sudo[156733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rddqqswzhmbegnslwtfcejldlhnpygmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926849.2148025-645-26233770910166/AnsiballZ_stat.py
Dec 05 09:27:29 np0005546420.localdomain sudo[156733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:29 np0005546420.localdomain python3.9[156735]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:29 np0005546420.localdomain sudo[156733]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:29 np0005546420.localdomain sudo[156781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqhavfbnlyldkbasqcjwfpvenmmtugyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926849.2148025-645-26233770910166/AnsiballZ_file.py
Dec 05 09:27:29 np0005546420.localdomain sudo[156781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:30 np0005546420.localdomain python3.9[156783]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:30 np0005546420.localdomain sudo[156781]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61307 DF PROTO=TCP SPT=50920 DPT=9102 SEQ=2604076778 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABBCD1A0000000001030307) 
Dec 05 09:27:32 np0005546420.localdomain sudo[156873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vaztiojcenmmjhijzzlkqbgbcbudclxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926851.9974337-714-90098880178653/AnsiballZ_file.py
Dec 05 09:27:32 np0005546420.localdomain sudo[156873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:32 np0005546420.localdomain python3.9[156875]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:27:32 np0005546420.localdomain sudo[156873]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28193 DF PROTO=TCP SPT=45076 DPT=9882 SEQ=2723191646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABBD5D90000000001030307) 
Dec 05 09:27:33 np0005546420.localdomain sudo[156965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzalrfteikhhstdewvriwnclgbkjqufr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926853.1328874-738-73332642807344/AnsiballZ_stat.py
Dec 05 09:27:33 np0005546420.localdomain sudo[156965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:33 np0005546420.localdomain python3.9[156967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:33 np0005546420.localdomain sudo[156965]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:33 np0005546420.localdomain sudo[157013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oecvrbqvstofziekconhxrkajbtxcoqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926853.1328874-738-73332642807344/AnsiballZ_file.py
Dec 05 09:27:33 np0005546420.localdomain sudo[157013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:34 np0005546420.localdomain python3.9[157015]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:27:34 np0005546420.localdomain sudo[157013]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57277 DF PROTO=TCP SPT=45768 DPT=9105 SEQ=4935835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABBDBD90000000001030307) 
Dec 05 09:27:34 np0005546420.localdomain sudo[157105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dpymekckehvhpdvscluxoyinwqonhlqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926854.2590332-774-140115723844233/AnsiballZ_stat.py
Dec 05 09:27:34 np0005546420.localdomain sudo[157105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:34 np0005546420.localdomain python3.9[157107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:34 np0005546420.localdomain sudo[157105]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:34 np0005546420.localdomain sudo[157153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkqspwphzcgkdajipwqukzcdvgvmtbtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926854.2590332-774-140115723844233/AnsiballZ_file.py
Dec 05 09:27:34 np0005546420.localdomain sudo[157153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:35 np0005546420.localdomain python3.9[157155]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:27:35 np0005546420.localdomain sudo[157153]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:35 np0005546420.localdomain sudo[157245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvbmbglbpitxsykatpxoxhjdovukolie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926855.4823337-810-102208402804993/AnsiballZ_systemd.py
Dec 05 09:27:35 np0005546420.localdomain sudo[157245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:36 np0005546420.localdomain python3.9[157247]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:27:36 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:27:36 np0005546420.localdomain systemd-rc-local-generator[157273]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:27:36 np0005546420.localdomain systemd-sysv-generator[157276]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:27:36 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:27:36 np0005546420.localdomain sudo[157245]: pam_unix(sudo:session): session closed for user root
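(edpm-container-shutdown is enabled through the preset file deployed above plus the daemon-reload and start that follow. A sketch of how the preset mechanism applies; the preset's content is an assumption, as typical presets of this kind carry a single enable line:

    # show the preset, then let systemd apply it to the unit
    cat /etc/systemd/system-preset/91-edpm-container-shutdown.preset
    systemctl preset edpm-container-shutdown.service
)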
Dec 05 09:27:36 np0005546420.localdomain sudo[157375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hthysrcvggswzvvmypbtnhwamahrhknz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926856.6495287-834-82317903304147/AnsiballZ_stat.py
Dec 05 09:27:36 np0005546420.localdomain sudo[157375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55054 DF PROTO=TCP SPT=48592 DPT=9101 SEQ=3109721823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABBE5DA0000000001030307) 
Dec 05 09:27:37 np0005546420.localdomain python3.9[157377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:37 np0005546420.localdomain sudo[157375]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:37 np0005546420.localdomain sudo[157423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-enasimxacnddcmxucoxbwhgozqvyghmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926856.6495287-834-82317903304147/AnsiballZ_file.py
Dec 05 09:27:37 np0005546420.localdomain sudo[157423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:37 np0005546420.localdomain python3.9[157425]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:27:37 np0005546420.localdomain sudo[157423]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:37 np0005546420.localdomain sudo[157515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dtsunfuxfbiogvoyvxpyfexqswljcicx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926857.7242956-870-230550193784139/AnsiballZ_stat.py
Dec 05 09:27:37 np0005546420.localdomain sudo[157515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:38 np0005546420.localdomain python3.9[157517]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:38 np0005546420.localdomain sudo[157515]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:38 np0005546420.localdomain sudo[157563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhsflzpkvblzbgnmirjzfsjiynexhqyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926857.7242956-870-230550193784139/AnsiballZ_file.py
Dec 05 09:27:38 np0005546420.localdomain sudo[157563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:38 np0005546420.localdomain python3.9[157565]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:27:38 np0005546420.localdomain sudo[157563]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:39 np0005546420.localdomain sudo[157655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scuratilnyffyfhiikrawpuvttqeabdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926858.8417642-906-148388632344032/AnsiballZ_systemd.py
Dec 05 09:27:39 np0005546420.localdomain sudo[157655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:39 np0005546420.localdomain python3.9[157657]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:27:39 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:27:39 np0005546420.localdomain systemd-rc-local-generator[157681]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:27:39 np0005546420.localdomain systemd-sysv-generator[157684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:27:39 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:27:39 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 09:27:39 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:27:39 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:27:39 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 09:27:39 np0005546420.localdomain sudo[157655]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10572 DF PROTO=TCP SPT=53730 DPT=9100 SEQ=1420216778 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABBF4D90000000001030307) 
Dec 05 09:27:42 np0005546420.localdomain sudo[157790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xjfnrknwfolbdtpmwyhwcytcblvqfncd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926862.2443087-936-75876067467469/AnsiballZ_file.py
Dec 05 09:27:42 np0005546420.localdomain sudo[157790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:42 np0005546420.localdomain python3.9[157792]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:42 np0005546420.localdomain sudo[157790]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10573 DF PROTO=TCP SPT=53730 DPT=9100 SEQ=1420216778 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABBFCD90000000001030307) 
Dec 05 09:27:43 np0005546420.localdomain sudo[157882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odoqryorxrlobotdrxewqcpybonyusdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926862.9361846-960-184042819702676/AnsiballZ_stat.py
Dec 05 09:27:43 np0005546420.localdomain sudo[157882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:43 np0005546420.localdomain python3.9[157884]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:43 np0005546420.localdomain sudo[157882]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:44 np0005546420.localdomain sudo[157955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yuvxydaejmcfheqfbybteaemiwobpkfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926862.9361846-960-184042819702676/AnsiballZ_copy.py
Dec 05 09:27:44 np0005546420.localdomain sudo[157955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:44 np0005546420.localdomain python3.9[157957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926862.9361846-960-184042819702676/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:44 np0005546420.localdomain sudo[157955]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:45 np0005546420.localdomain sudo[158047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lekyjqesizkmfcyphmmrcjgvnduhnrft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926865.1216938-1011-188694776717798/AnsiballZ_file.py
Dec 05 09:27:45 np0005546420.localdomain sudo[158047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:45 np0005546420.localdomain python3.9[158049]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:27:45 np0005546420.localdomain sudo[158047]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:46 np0005546420.localdomain sudo[158139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdqftykldsjdmdqmrzvktpmhjlyjoizk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926866.2466757-1035-116302150882480/AnsiballZ_stat.py
Dec 05 09:27:46 np0005546420.localdomain sudo[158139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:46 np0005546420.localdomain python3.9[158141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:27:46 np0005546420.localdomain sudo[158139]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10574 DF PROTO=TCP SPT=53730 DPT=9100 SEQ=1420216778 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC0C990000000001030307) 
Dec 05 09:27:47 np0005546420.localdomain sudo[158214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpkhpexfmixfynvayzhgispkpwpxjasn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926866.2466757-1035-116302150882480/AnsiballZ_copy.py
Dec 05 09:27:47 np0005546420.localdomain sudo[158214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:47 np0005546420.localdomain python3.9[158216]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926866.2466757-1035-116302150882480/.source.json _original_basename=.aj_4casw follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:27:47 np0005546420.localdomain sudo[158214]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:47 np0005546420.localdomain sudo[158306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jculanxtgzbjsnhzfgcmpzlhucsrdkek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926867.4534318-1080-161822562638361/AnsiballZ_file.py
Dec 05 09:27:47 np0005546420.localdomain sudo[158306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:48 np0005546420.localdomain python3.9[158308]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:27:48 np0005546420.localdomain sudo[158306]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:48 np0005546420.localdomain sudo[158398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cdgqxyqwwguhmuucgqdugrbmyihzixxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926868.2470644-1104-239853428886262/AnsiballZ_stat.py
Dec 05 09:27:48 np0005546420.localdomain sudo[158398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:48 np0005546420.localdomain sudo[158398]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7742 DF PROTO=TCP SPT=36840 DPT=9105 SEQ=3750590727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC14A10000000001030307) 
Dec 05 09:27:48 np0005546420.localdomain sudo[158471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xspzygoqcnfbvbeseggnlnazdfrfsqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926868.2470644-1104-239853428886262/AnsiballZ_copy.py
Dec 05 09:27:48 np0005546420.localdomain sudo[158471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:49 np0005546420.localdomain sudo[158471]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:50 np0005546420.localdomain sudo[158563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rtuctblzvbkevwhmzvdzmvdhdqnscaze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926869.6517022-1155-37384647695759/AnsiballZ_container_config_data.py
Dec 05 09:27:50 np0005546420.localdomain sudo[158563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:50 np0005546420.localdomain python3.9[158565]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 05 09:27:50 np0005546420.localdomain sudo[158563]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:50 np0005546420.localdomain sudo[158655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vxvqzujzttojsomugrtewznzvqjzvjfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926870.538702-1182-107107391896664/AnsiballZ_container_config_hash.py
Dec 05 09:27:50 np0005546420.localdomain sudo[158655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:51 np0005546420.localdomain python3.9[158657]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:27:51 np0005546420.localdomain sudo[158655]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7744 DF PROTO=TCP SPT=36840 DPT=9105 SEQ=3750590727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC20990000000001030307) 
Dec 05 09:27:52 np0005546420.localdomain sudo[158747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-duqivkjivdkqjverfnxpnteyvshzsmto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926871.4553647-1209-142135450831900/AnsiballZ_podman_container_info.py
Dec 05 09:27:52 np0005546420.localdomain sudo[158747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:52 np0005546420.localdomain python3.9[158749]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 09:27:53 np0005546420.localdomain sudo[158747]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61309 DF PROTO=TCP SPT=50920 DPT=9102 SEQ=2604076778 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC2DD90000000001030307) 
Dec 05 09:27:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:27:56 np0005546420.localdomain sudo[158877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wuqmmswcsvlznrgtfmidcsvkqxldslgd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764926876.0147307-1248-129344092607748/AnsiballZ_edpm_container_manage.py
Dec 05 09:27:56 np0005546420.localdomain sudo[158877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:56 np0005546420.localdomain podman[158848]: 2025-12-05 09:27:56.527380394 +0000 UTC m=+0.096640474 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 09:27:56 np0005546420.localdomain podman[158848]: 2025-12-05 09:27:56.59181809 +0000 UTC m=+0.161078170 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 09:27:56 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:27:56 np0005546420.localdomain python3[158879]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:27:57 np0005546420.localdomain python3[158879]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9",
                                                                    "Digest": "sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:29:20.327314945Z",
                                                                    "Config": {
                                                                         "User": "neutron",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 784141054,
                                                                    "VirtualSize": 784141054,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53/diff:/var/lib/containers/storage/overlay/2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:75abaaa40a93c0e2bba524b6f8d4eb5f1c4c9a33db70c892c7582ec5b0827e5e",
                                                                              "sha256:01f43f620d1ea2a9e584abe0cc14c336bedcf55765127c000d743f536dd36f25",
                                                                              "sha256:0bf5bd378602f28be423f5e84abddff3b103396fae3c167031b6e3fcfcf6f120"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "neutron",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:50.18897737Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:50.762138914Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage neutron",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:13.720608935Z",
                                                                              "created_by": "/bin/sh -c dnf -y install iputils net-tools openstack-neutron openstack-neutron-rpc-server openstack-neutron-ml2 openvswitch python3-networking-baremetal python3-openvswitch python3-unbound && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:27.636630318Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/neutron-base/neutron_sudoers /etc/sudoers.d/neutron_sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:40.546186661Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers.d/neutron_sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:52.875291445Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:27:22.608862134Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:28:35.764559413Z",
                                                                              "created_by": "/bin/sh -c dnf -y install libseccomp podman && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:28:40.983506098Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:28:44.803537768Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-agent-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:20.324920691Z",
                                                                              "created_by": "/bin/sh -c dnf -y install python3-networking-ovn-metadata-agent && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:20.324983383Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER neutron",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:24.215761584Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 09:27:57 np0005546420.localdomain podman[158942]: 2025-12-05 09:27:57.200165431 +0000 UTC m=+0.099220329 container remove dec08eea02d16bde9f4a18983f1f3d0c2dcbb8489bdea584103673079e176fce (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd6812e1160bfb2e956bcab4e760845cf'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 05 09:27:57 np0005546420.localdomain python3[158879]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent
Dec 05 09:27:57 np0005546420.localdomain podman[158955]: 
Dec 05 09:27:57 np0005546420.localdomain podman[158955]: 2025-12-05 09:27:57.320165356 +0000 UTC m=+0.096463458 container create e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 09:27:57 np0005546420.localdomain podman[158955]: 2025-12-05 09:27:57.274867687 +0000 UTC m=+0.051165839 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 09:27:57 np0005546420.localdomain python3[158879]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 09:27:57 np0005546420.localdomain sudo[158877]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:58 np0005546420.localdomain sudo[159083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rlsscavdekguosecgtpktwwdatzmdsxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926877.8829248-1273-177385977260332/AnsiballZ_stat.py
Dec 05 09:27:58 np0005546420.localdomain sudo[159083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:58 np0005546420.localdomain python3.9[159085]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:27:58 np0005546420.localdomain sudo[159083]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:58 np0005546420.localdomain sudo[159177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uemurgihfaraldmszbtqckblyzzbjdcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926878.6793056-1299-161542122728966/AnsiballZ_file.py
Dec 05 09:27:58 np0005546420.localdomain sudo[159177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:59 np0005546420.localdomain python3.9[159179]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:27:59 np0005546420.localdomain sudo[159177]: pam_unix(sudo:session): session closed for user root
Dec 05 09:27:59 np0005546420.localdomain sudo[159223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pckvtmcepgtdxosqmeucdwmdzcpachvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926878.6793056-1299-161542122728966/AnsiballZ_stat.py
Dec 05 09:27:59 np0005546420.localdomain sudo[159223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:27:59 np0005546420.localdomain python3.9[159225]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:27:59 np0005546420.localdomain sudo[159223]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:00 np0005546420.localdomain sudo[159314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hsimkpdaarrnnrxvpyvepzcdoshlssgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926879.6517103-1299-10479953166050/AnsiballZ_copy.py
Dec 05 09:28:00 np0005546420.localdomain sudo[159314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:00 np0005546420.localdomain python3.9[159316]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764926879.6517103-1299-10479953166050/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:00 np0005546420.localdomain sudo[159314]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:00 np0005546420.localdomain sudo[159360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcoejjpartvalekswnnkzwzkdqpbzdby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926879.6517103-1299-10479953166050/AnsiballZ_systemd.py
Dec 05 09:28:00 np0005546420.localdomain sudo[159360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63397 DF PROTO=TCP SPT=34938 DPT=9102 SEQ=4042649705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC42590000000001030307) 
Dec 05 09:28:00 np0005546420.localdomain python3.9[159362]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:28:00 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:28:00 np0005546420.localdomain systemd-rc-local-generator[159388]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:28:00 np0005546420.localdomain systemd-sysv-generator[159392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:28:01 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:28:01 np0005546420.localdomain sudo[159360]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:01 np0005546420.localdomain sudo[159441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pvrijliavcbexadmvvgzoagudggfrnxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926879.6517103-1299-10479953166050/AnsiballZ_systemd.py
Dec 05 09:28:01 np0005546420.localdomain sudo[159441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:01 np0005546420.localdomain python3.9[159443]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:28:01 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:28:02 np0005546420.localdomain systemd-sysv-generator[159475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:28:02 np0005546420.localdomain systemd-rc-local-generator[159469]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:28:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:28:02 np0005546420.localdomain systemd[1]: Starting ovn_metadata_agent container...
Dec 05 09:28:02 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:28:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7936c6dedfb2e3155d0ac7e68586c92126bdd01011491b14293c815e65fd1a6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 05 09:28:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7936c6dedfb2e3155d0ac7e68586c92126bdd01011491b14293c815e65fd1a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 09:28:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:28:02 np0005546420.localdomain podman[159484]: 2025-12-05 09:28:02.409240755 +0000 UTC m=+0.136081625 container init e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: + sudo -E kolla_set_configs
Dec 05 09:28:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:28:02 np0005546420.localdomain podman[159484]: 2025-12-05 09:28:02.447445456 +0000 UTC m=+0.174286326 container start e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:28:02 np0005546420.localdomain edpm-start-podman-container[159484]: ovn_metadata_agent
Dec 05 09:28:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42909 DF PROTO=TCP SPT=33484 DPT=9882 SEQ=1445759321 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC49DA0000000001030307) 
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Validating config file
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Copying service configuration files
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Writing out command to execute
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 05 09:28:02 np0005546420.localdomain podman[159506]: 2025-12-05 09:28:02.522106631 +0000 UTC m=+0.072555549 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: ++ cat /run_command
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: + CMD=neutron-ovn-metadata-agent
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: + ARGS=
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: + sudo kolla_copy_cacerts
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: + [[ ! -n '' ]]
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: + . kolla_extend_start
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: Running command: 'neutron-ovn-metadata-agent'
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: + umask 0022
Dec 05 09:28:02 np0005546420.localdomain ovn_metadata_agent[159498]: + exec neutron-ovn-metadata-agent
Dec 05 09:28:02 np0005546420.localdomain podman[159506]: 2025-12-05 09:28:02.605722175 +0000 UTC m=+0.156171083 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 09:28:02 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:28:02 np0005546420.localdomain edpm-start-podman-container[159483]: Creating additional drop-in dependency for "ovn_metadata_agent" (e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0)
Dec 05 09:28:02 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:28:02 np0005546420.localdomain systemd-rc-local-generator[159572]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:28:02 np0005546420.localdomain systemd-sysv-generator[159578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:28:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:28:02 np0005546420.localdomain systemd[1]: Started ovn_metadata_agent container.
Dec 05 09:28:02 np0005546420.localdomain sudo[159441]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:03 np0005546420.localdomain sshd[154091]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:28:03 np0005546420.localdomain systemd[1]: session-51.scope: Deactivated successfully.
Dec 05 09:28:03 np0005546420.localdomain systemd[1]: session-51.scope: Consumed 32.116s CPU time.
Dec 05 09:28:03 np0005546420.localdomain systemd-logind[762]: Session 51 logged out. Waiting for processes to exit.
Dec 05 09:28:03 np0005546420.localdomain systemd-logind[762]: Removed session 51.
Dec 05 09:28:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7746 DF PROTO=TCP SPT=36840 DPT=9105 SEQ=3750590727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC4FD90000000001030307) 
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.040 159503 INFO neutron.common.config [-] Logging enabled!
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.041 159503 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.041 159503 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.041 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.041 159503 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.041 159503 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.042 159503 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.042 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.042 159503 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.042 159503 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.042 159503 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.042 159503 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.042 159503 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.042 159503 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.042 159503 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.042 159503 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.043 159503 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.043 159503 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.043 159503 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.043 159503 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.043 159503 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.043 159503 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.043 159503 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.043 159503 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.043 159503 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.043 159503 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.044 159503 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.044 159503 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.044 159503 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.044 159503 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.044 159503 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.044 159503 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.044 159503 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.044 159503 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.044 159503 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.044 159503 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.045 159503 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.045 159503 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.045 159503 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.045 159503 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.045 159503 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.045 159503 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.045 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.045 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.045 159503 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.045 159503 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.046 159503 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.047 159503 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.047 159503 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.047 159503 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.047 159503 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.047 159503 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.047 159503 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.047 159503 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.047 159503 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.047 159503 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.048 159503 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.048 159503 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.048 159503 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.048 159503 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.048 159503 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.048 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.048 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.048 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.048 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.049 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.049 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.049 159503 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.049 159503 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.049 159503 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.049 159503 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.049 159503 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.049 159503 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.049 159503 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.049 159503 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.050 159503 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.050 159503 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.050 159503 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.050 159503 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.050 159503 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.050 159503 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.050 159503 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.050 159503 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.050 159503 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.050 159503 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.051 159503 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.052 159503 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.052 159503 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.052 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.052 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.052 159503 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.052 159503 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.052 159503 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.052 159503 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.052 159503 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.052 159503 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.053 159503 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.053 159503 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.053 159503 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.053 159503 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.053 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.053 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.053 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.053 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.053 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.053 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.054 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.054 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.054 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.054 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.054 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.054 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.054 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.054 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.054 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.054 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.055 159503 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.055 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.055 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.055 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.055 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.055 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.055 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.055 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.056 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.056 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.056 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.056 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.056 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.056 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.056 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.056 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.056 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.056 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.057 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.058 159503 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.059 159503 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.059 159503 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.059 159503 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.059 159503 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.059 159503 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.059 159503 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.059 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.059 159503 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.059 159503 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.059 159503 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.060 159503 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.060 159503 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.060 159503 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.060 159503 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.060 159503 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.060 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.060 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.060 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.060 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.060 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.061 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.062 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.062 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.062 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.062 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.062 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.062 159503 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.062 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.062 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.062 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.062 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.063 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.064 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.064 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.064 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.064 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.064 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.064 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.064 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.064 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.064 159503 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.064 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.065 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.065 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.065 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.065 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.065 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.065 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.065 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.065 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.065 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.065 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.066 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.066 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.066 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.066 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.066 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.066 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.066 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.066 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.066 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.066 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.067 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.067 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.067 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.067 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.067 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.067 159503 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.067 159503 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.067 159503 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.067 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.067 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.068 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.068 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.068 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.068 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.068 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.068 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.068 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.068 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.068 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.069 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.069 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.069 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.069 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.069 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.069 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.069 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.069 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.069 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.070 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.070 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.070 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.070 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.070 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.070 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.070 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.070 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.070 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.071 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.071 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.071 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.071 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.071 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.071 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.071 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.071 159503 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.071 159503 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.079 159503 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.080 159503 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.080 159503 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.080 159503 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.080 159503 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
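[editor's note] The "Created schema index Bridge.name/Port.name/Interface.name" lines and the connect handshake come from ovsdbapp's IDL backend indexing the local Open vSwitch database before the agent uses it. A rough sketch of opening the same tcp:127.0.0.1:6640 endpoint with ovsdbapp (a sketch, not the agent's exact code; assumes the ovsdbapp package and a reachable ovsdb-server; the 10 s timeout mirrors OVS.ovsdb_timeout above):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Build an IDL against the local switch database, then wrap it in the
    # high-level Open_vSwitch API.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    # Commands are lazy; execute() runs the transaction.
    print(ovs.br_exists('br-int').execute(check_error=True))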
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.094 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c2157608-8f70-44ef-883c-3db22f367c76 (UUID: c2157608-8f70-44ef-883c-3db22f367c76) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.109 159503 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.109 159503 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.109 159503 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.109 159503 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.112 159503 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.116 159503 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.123 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c2157608-8f70-44ef-883c-3db22f367c76'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], external_ids={'neutron:ovn-metadata-id': '7460456d-613e-5f20-a98a-72e9648ca1b5', 'neutron:ovn-metadata-sb-cfg': '1'}, name=c2157608-8f70-44ef-883c-3db22f367c76, nb_cfg_timestamp=1764926822768, nb_cfg=3) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.124 159503 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fed93e1fb50>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.124 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.125 159503 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.125 159503 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.125 159503 INFO oslo_service.service [-] Starting 1 workers
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.127 159503 DEBUG oslo_service.service [-] Started child 159604 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.129 159503 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpp9p394xm/privsep.sock']
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.131 159604 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-8296605'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.160 159604 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.161 159604 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.161 159604 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.164 159604 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.165 159604 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.179 159604 INFO eventlet.wsgi.server [-] (159604) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.717 159503 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.718 159503 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpp9p394xm/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.615 159609 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.621 159609 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.625 159609 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.625 159609 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159609
Dec 05 09:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:04.722 159609 DEBUG oslo.privsep.daemon [-] privsep: reply[98086f19-e72e-4e4c-b459-9279b4448e68]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.151 159609 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.151 159609 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.151 159609 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.598 159609 DEBUG oslo.privsep.daemon [-] privsep: reply[777512d1-efd0-4260-8d2a-59f7234ccbd6]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.601 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, column=external_ids, values=({'neutron:ovn-metadata-id': '7460456d-613e-5f20-a98a-72e9648ca1b5'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.602 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.603 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.612 159503 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.613 159503 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.613 159503 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.613 159503 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.613 159503 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.613 159503 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.614 159503 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.614 159503 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.614 159503 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.614 159503 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.615 159503 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.615 159503 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.615 159503 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.615 159503 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.615 159503 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.616 159503 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.616 159503 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.616 159503 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.616 159503 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.616 159503 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.617 159503 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.617 159503 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.617 159503 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.617 159503 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.618 159503 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.618 159503 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.618 159503 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.618 159503 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.619 159503 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.619 159503 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.619 159503 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.619 159503 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.619 159503 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.620 159503 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.620 159503 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.620 159503 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.620 159503 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.621 159503 DEBUG oslo_service.service [-] host                           = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.621 159503 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.621 159503 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.621 159503 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.622 159503 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.622 159503 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.622 159503 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.622 159503 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.622 159503 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.623 159503 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.623 159503 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.623 159503 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.623 159503 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.623 159503 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.624 159503 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.624 159503 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.624 159503 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.624 159503 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.624 159503 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.625 159503 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.625 159503 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.625 159503 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.625 159503 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.625 159503 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.626 159503 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.626 159503 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.626 159503 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.626 159503 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.627 159503 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.627 159503 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.627 159503 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.627 159503 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.627 159503 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.628 159503 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.628 159503 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.628 159503 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.628 159503 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.628 159503 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.629 159503 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.629 159503 DEBUG oslo_service.service [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.629 159503 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.629 159503 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.629 159503 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.630 159503 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.630 159503 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.630 159503 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.630 159503 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.631 159503 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.631 159503 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.631 159503 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.631 159503 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.631 159503 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.632 159503 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.632 159503 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.632 159503 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.632 159503 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.632 159503 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.633 159503 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.633 159503 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.633 159503 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.633 159503 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.633 159503 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.634 159503 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.634 159503 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.634 159503 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.634 159503 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.634 159503 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.635 159503 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.635 159503 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.635 159503 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.635 159503 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.636 159503 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.636 159503 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.636 159503 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.636 159503 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.637 159503 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.637 159503 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.637 159503 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.637 159503 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.638 159503 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.638 159503 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.638 159503 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.638 159503 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.638 159503 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.639 159503 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.639 159503 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.639 159503 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.639 159503 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.640 159503 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.640 159503 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.640 159503 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.640 159503 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.640 159503 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.641 159503 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.641 159503 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.641 159503 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.642 159503 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.642 159503 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.642 159503 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.642 159503 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.643 159503 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.643 159503 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.643 159503 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.643 159503 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.643 159503 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.644 159503 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.644 159503 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.644 159503 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.644 159503 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.644 159503 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.645 159503 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.645 159503 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.645 159503 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.645 159503 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.645 159503 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.646 159503 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.646 159503 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.646 159503 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.646 159503 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.646 159503 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.647 159503 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.647 159503 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.647 159503 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.647 159503 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.647 159503 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.648 159503 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.648 159503 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.648 159503 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.648 159503 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.648 159503 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.649 159503 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.649 159503 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.649 159503 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.649 159503 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.649 159503 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.650 159503 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.650 159503 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.650 159503 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.650 159503 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.651 159503 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.651 159503 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.651 159503 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.651 159503 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.651 159503 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.652 159503 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.652 159503 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.652 159503 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.652 159503 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.653 159503 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.653 159503 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.653 159503 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.653 159503 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.653 159503 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.654 159503 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.654 159503 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.654 159503 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.654 159503 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.654 159503 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.655 159503 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.655 159503 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.655 159503 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.655 159503 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.655 159503 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.656 159503 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.656 159503 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.656 159503 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.656 159503 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.656 159503 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.657 159503 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.657 159503 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.657 159503 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.657 159503 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.657 159503 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.657 159503 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.657 159503 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.658 159503 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.658 159503 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.658 159503 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.658 159503 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.658 159503 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.658 159503 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.658 159503 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.659 159503 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.659 159503 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.659 159503 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.659 159503 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.659 159503 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.659 159503 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.659 159503 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.659 159503 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.660 159503 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.660 159503 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.660 159503 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.660 159503 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.660 159503 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.660 159503 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.660 159503 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.661 159503 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.661 159503 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.661 159503 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.661 159503 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.661 159503 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.661 159503 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.661 159503 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.662 159503 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.662 159503 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.662 159503 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.662 159503 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.662 159503 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.662 159503 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.662 159503 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.663 159503 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.663 159503 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.663 159503 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.663 159503 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.663 159503 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.663 159503 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.663 159503 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.664 159503 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.664 159503 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.664 159503 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.664 159503 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.664 159503 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.664 159503 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.664 159503 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.665 159503 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.665 159503 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.665 159503 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.665 159503 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.665 159503 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.665 159503 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.665 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.666 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.666 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.666 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.666 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.666 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.666 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.666 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.667 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.667 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.667 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.667 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.667 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.667 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.667 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.668 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.668 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.668 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.668 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.668 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.668 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.668 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.669 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.669 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.669 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.669 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.669 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.669 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.669 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.670 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.670 159503 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.670 159503 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.670 159503 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.670 159503 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.670 159503 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:28:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:28:05.671 159503 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
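
The block ending at the asterisk row above is oslo.config's standard startup dump: with debug logging enabled, oslo.service walks every registered option and logs it through ConfigOpts.log_opt_values() (the cfg.py:2609 column on each line), masking options registered as secret, which is why oslo_messaging_notifications.transport_url prints as ****. A minimal shell sketch for reproducing the dump on this node, assuming crudini is installed and using a hypothetical file name under the ansible-generated config directory that the container mounts as /etc/neutron.conf.d (per the config_data logged further below):

    # Hypothetical file name; the directory comes from the container's
    # volume list later in this log.
    sudo crudini --set \
        /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron.conf \
        DEFAULT debug True
    sudo podman restart ovn_metadata_agent
    # The full option dump is re-emitted at agent startup.
    sudo podman logs ovn_metadata_agent 2>&1 | grep log_opt_values | head
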
Dec 05 09:28:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45193 DF PROTO=TCP SPT=47618 DPT=9101 SEQ=2670646874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC5BD90000000001030307) 
Dec 05 09:28:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51997 DF PROTO=TCP SPT=37618 DPT=9100 SEQ=704802090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC6A190000000001030307) 
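
The DROPPING entries here and throughout the rest of this log are kernel packet logs from a log-and-drop firewall rule on br-ex; the separate MACSRC=/MACDST=/MACPROTO= fields are the decoded-MAC form the kernel emits for nftables log rules with MAC decoding enabled (classic iptables LOG prints a single combined MAC= field). The dropped packets are inbound SYNs from 192.168.122.10 to ports 9100-9105 and 9882. A sketch of the kind of rule involved; the table and chain names are placeholders, not read from this host:

    # Placeholder table/chain names; 'flags all' asks the kernel to decode
    # the Ethernet header into the MACSRC=/MACDST=/MACPROTO= fields seen here.
    sudo nft add table inet filter
    sudo nft 'add chain inet filter input { type filter hook input priority 0; policy accept; }'
    sudo nft add rule inet filter input \
        'iifname "br-ex" tcp dport { 9100-9105, 9882 } log prefix "DROPPING: " flags all counter drop'
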
Dec 05 09:28:12 np0005546420.localdomain sshd[159614]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:28:12 np0005546420.localdomain sshd[159614]: Accepted publickey for zuul from 192.168.122.30 port 41058 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:28:12 np0005546420.localdomain systemd-logind[762]: New session 52 of user zuul.
Dec 05 09:28:12 np0005546420.localdomain systemd[1]: Started Session 52 of User zuul.
Dec 05 09:28:12 np0005546420.localdomain sshd[159614]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
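
The "ssh-rsa algorithm is disabled" notice refers to the legacy SHA-1 ssh-rsa signature scheme, which RHEL 9's default system crypto policy turns off; the login on the next line still succeeds because the same RSA key is verified with the SHA-2 signature variants. Two quick checks, with the client key path being an assumption for illustration:

    # Compare the key's SHA256 fingerprint with the one in the Accepted line.
    ssh-keygen -lf ~/.ssh/id_rsa.pub
    # Server-side list of accepted public-key signature algorithms.
    sudo sshd -T | grep -i '^pubkeyacceptedalgorithms'
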
Dec 05 09:28:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51998 DF PROTO=TCP SPT=37618 DPT=9100 SEQ=704802090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC72190000000001030307) 
Dec 05 09:28:13 np0005546420.localdomain python3.9[159707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:28:14 np0005546420.localdomain sudo[159801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzdwxizfsiicmgmbzlzvumgzzcwzngvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926894.184946-63-101449903542737/AnsiballZ_command.py
Dec 05 09:28:14 np0005546420.localdomain sudo[159801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:14 np0005546420.localdomain python3.9[159803]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:28:14 np0005546420.localdomain sudo[159801]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:15 np0005546420.localdomain sudo[159906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dsfqcketbpfwzqjwiuhbiyakzegqtfkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926895.141315-87-181080075503288/AnsiballZ_command.py
Dec 05 09:28:15 np0005546420.localdomain sudo[159906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:15 np0005546420.localdomain python3.9[159908]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:28:15 np0005546420.localdomain systemd[1]: libpod-280fc05a076c2b76634d8f2eb6427fde96de83699a63efeba89f5ad45b6d7211.scope: Deactivated successfully.
Dec 05 09:28:15 np0005546420.localdomain podman[159909]: 2025-12-05 09:28:15.660732087 +0000 UTC m=+0.075613077 container died 280fc05a076c2b76634d8f2eb6427fde96de83699a63efeba89f5ad45b6d7211 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public)
Dec 05 09:28:15 np0005546420.localdomain systemd[1]: tmp-crun.oBof1P.mount: Deactivated successfully.
Dec 05 09:28:15 np0005546420.localdomain podman[159909]: 2025-12-05 09:28:15.697801962 +0000 UTC m=+0.112682922 container cleanup 280fc05a076c2b76634d8f2eb6427fde96de83699a63efeba89f5ad45b6d7211 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12)
Dec 05 09:28:15 np0005546420.localdomain sudo[159906]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:15 np0005546420.localdomain podman[159923]: 2025-12-05 09:28:15.744881538 +0000 UTC m=+0.075958638 container remove 280fc05a076c2b76634d8f2eb6427fde96de83699a63efeba89f5ad45b6d7211 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 05 09:28:15 np0005546420.localdomain systemd[1]: libpod-conmon-280fc05a076c2b76634d8f2eb6427fde96de83699a63efeba89f5ad45b6d7211.scope: Deactivated successfully.
Dec 05 09:28:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4ea8e4909d423a2f774d29952169dc80a392dd27e19e796f4f6462b620f27970-merged.mount: Deactivated successfully.
Dec 05 09:28:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-280fc05a076c2b76634d8f2eb6427fde96de83699a63efeba89f5ad45b6d7211-userdata-shm.mount: Deactivated successfully.
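
The sequence above — the podman ps name check, podman stop, then "container died", "container cleanup", "container remove", and finally the libpod-conmon scope and overlay mounts deactivating — is a normal podman container teardown for nova_virtlogd. The manual equivalent of the two logged Ansible command tasks, plus the removal that the "container remove" event implies (the exact rm invocation is inferred; the journal records only the resulting libpod event):

    sudo podman ps -a --filter name='^nova_virtlogd$' --format '{{.Names}}'
    sudo podman stop nova_virtlogd
    sudo podman rm nova_virtlogd   # inferred from the "container remove" event
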
Dec 05 09:28:16 np0005546420.localdomain sudo[160029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-holvuytsqmlreskecfkrbpcvtwmywlcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926896.0898108-117-43199485749467/AnsiballZ_systemd_service.py
Dec 05 09:28:16 np0005546420.localdomain sudo[160029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51999 DF PROTO=TCP SPT=37618 DPT=9100 SEQ=704802090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC81D90000000001030307) 
Dec 05 09:28:16 np0005546420.localdomain python3.9[160031]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:28:16 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:28:17 np0005546420.localdomain systemd-sysv-generator[160062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:28:17 np0005546420.localdomain systemd-rc-local-generator[160055]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:28:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:28:17 np0005546420.localdomain sudo[160029]: pam_unix(sudo:session): session closed for user root
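
Every systemd "Reloading." in this log re-runs the unit generators, so the same three warnings recur each time: the SysV 'network' script gets an auto-generated compatibility unit, rc.local is skipped as non-executable, and the insights-client units still use the deprecated MemoryLimit= directive. A drop-in override is the usual fix for the last one; the 1G value below is a placeholder, not taken from this host:

    sudo mkdir -p /etc/systemd/system/insights-client-boot.service.d
    # An empty MemoryLimit= clears the packaged setting before MemoryMax= applies.
    printf '[Service]\nMemoryLimit=\nMemoryMax=1G\n' | \
        sudo tee /etc/systemd/system/insights-client-boot.service.d/memory.conf
    sudo systemctl daemon-reload
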
Dec 05 09:28:18 np0005546420.localdomain python3.9[160158]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:28:18 np0005546420.localdomain network[160175]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:28:18 np0005546420.localdomain network[160176]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:28:18 np0005546420.localdomain network[160177]: It is advised to switch to 'NetworkManager' instead for network management.
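
The three network[] lines are the deprecation banner that the legacy network-scripts service prints whenever it is invoked (here, while the service_facts task above enumerates services). Before any migration to NetworkManager, it is worth checking what the compatibility unit currently does and what NetworkManager already manages:

    # Generated compatibility unit for the SysV 'network' script.
    systemctl status network.service --no-pager
    # Devices and their managing backend as NetworkManager sees them.
    nmcli device status
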
Dec 05 09:28:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23534 DF PROTO=TCP SPT=50974 DPT=9105 SEQ=723746591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC89D10000000001030307) 
Dec 05 09:28:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:28:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23536 DF PROTO=TCP SPT=50974 DPT=9105 SEQ=723746591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABC95DA0000000001030307) 
Dec 05 09:28:23 np0005546420.localdomain sudo[160377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jidasqdtdsbrsfykqolbbadorlflnkxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926903.4550965-174-111741427313731/AnsiballZ_systemd_service.py
Dec 05 09:28:23 np0005546420.localdomain sudo[160377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:24 np0005546420.localdomain python3.9[160379]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:28:24 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:28:24 np0005546420.localdomain systemd-sysv-generator[160408]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:28:24 np0005546420.localdomain systemd-rc-local-generator[160403]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:28:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:28:24 np0005546420.localdomain systemd[1]: Stopped target tripleo_nova_libvirt.target.
Dec 05 09:28:24 np0005546420.localdomain sudo[160377]: pam_unix(sudo:session): session closed for user root
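
From here the playbook walks through the tripleo_nova_* units one by one: stop and disable the target (reflected in "Stopped target tripleo_nova_libvirt.target" above), then each of the virt*d services below (virtlogd_wrapper, virtnodedevd, virtproxyd, virtqemud, virtsecretd, virtstoraged). Each ansible-ansible.builtin.systemd_service entry maps onto an ad-hoc equivalent like the following; the inventory and connection details are assumed:

    ansible localhost -b -m ansible.builtin.systemd_service \
        -a 'name=tripleo_nova_libvirt.target state=stopped enabled=false'
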
Dec 05 09:28:24 np0005546420.localdomain sudo[160508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvighwkssdynywpsbfgcbjbfxsyhwtph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926904.4618433-174-217022460273601/AnsiballZ_systemd_service.py
Dec 05 09:28:24 np0005546420.localdomain sudo[160508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52000 DF PROTO=TCP SPT=37618 DPT=9100 SEQ=704802090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABCA1DA0000000001030307) 
Dec 05 09:28:25 np0005546420.localdomain python3.9[160510]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:28:25 np0005546420.localdomain sudo[160508]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:25 np0005546420.localdomain sudo[160601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ituwgmmkunjsqyagqtfgxtzslnrxmnmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926905.1621559-174-214996012205300/AnsiballZ_systemd_service.py
Dec 05 09:28:25 np0005546420.localdomain sudo[160601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:25 np0005546420.localdomain python3.9[160603]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:28:25 np0005546420.localdomain sudo[160601]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:25 np0005546420.localdomain sudo[160634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:28:25 np0005546420.localdomain sudo[160634]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:28:25 np0005546420.localdomain sudo[160634]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:26 np0005546420.localdomain sudo[160679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:28:26 np0005546420.localdomain sudo[160679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:28:26 np0005546420.localdomain sudo[160724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omngrcfyozupgofigdymsyjrcgvpfnav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926905.8723574-174-151914813228182/AnsiballZ_systemd_service.py
Dec 05 09:28:26 np0005546420.localdomain sudo[160724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:26 np0005546420.localdomain python3.9[160726]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:28:26 np0005546420.localdomain sudo[160724]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:26 np0005546420.localdomain sudo[160679]: pam_unix(sudo:session): session closed for user root
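
Interleaved with the zuul tasks, the ceph-admin user runs cephadm's gather-facts check. The long cephadm.a31fbded... suffix is how the orchestrator ships the binary: a per-FSID, content-hashed copy under /var/lib/ceph/<fsid>/. The logged command can be repeated verbatim and prints the host facts as JSON:

    sudo /bin/python3 \
        /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 \
        --timeout 895 gather-facts
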
Dec 05 09:28:26 np0005546420.localdomain sudo[160850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qoazrtffelsoemggehdldrvbjbccgrhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926906.5394683-174-238987740953080/AnsiballZ_systemd_service.py
Dec 05 09:28:26 np0005546420.localdomain sudo[160850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:28:26 np0005546420.localdomain podman[160852]: 2025-12-05 09:28:26.852662908 +0000 UTC m=+0.065941626 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:28:26 np0005546420.localdomain podman[160852]: 2025-12-05 09:28:26.921473635 +0000 UTC m=+0.134752353 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Dec 05 09:28:26 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
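
The "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pairs are podman's healthcheck machinery: each check runs as a transient systemd service (driven by a matching timer) named after the container ID, executing the configured test command ('/openstack/healthcheck' per the config_data above). The same check can be run by hand:

    # Exit status 0 means healthy, non-zero unhealthy (name from the log).
    sudo podman healthcheck run ovn_controller; echo "exit=$?"
    # The transient timer driving the periodic checks for this container ID.
    systemctl list-timers --all 'd6bf748e*'
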
Dec 05 09:28:27 np0005546420.localdomain python3.9[160853]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:28:27 np0005546420.localdomain sudo[160850]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:27 np0005546420.localdomain sudo[160931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:28:27 np0005546420.localdomain sudo[160931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:28:27 np0005546420.localdomain sudo[160931]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:27 np0005546420.localdomain sudo[160984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmreteviipvdxvgqgvxepwscemjjbcop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926907.2334266-174-7588846593458/AnsiballZ_systemd_service.py
Dec 05 09:28:27 np0005546420.localdomain sudo[160984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:27 np0005546420.localdomain python3.9[160986]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:28:27 np0005546420.localdomain sudo[160984]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:28 np0005546420.localdomain sudo[161077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eoyyfogfovzmrdthuefiwyrozgjcruoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926907.962837-174-144234696731358/AnsiballZ_systemd_service.py
Dec 05 09:28:28 np0005546420.localdomain sudo[161077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:28 np0005546420.localdomain python3.9[161079]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:28:28 np0005546420.localdomain sudo[161077]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54451 DF PROTO=TCP SPT=53336 DPT=9102 SEQ=1782546477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABCB7590000000001030307) 
Dec 05 09:28:30 np0005546420.localdomain sudo[161170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vholtcfbtayzzytsuuzwdblxmvpgpfqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926910.3012753-330-229838297845033/AnsiballZ_file.py
Dec 05 09:28:30 np0005546420.localdomain sudo[161170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:30 np0005546420.localdomain python3.9[161172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:30 np0005546420.localdomain sudo[161170]: pam_unix(sudo:session): session closed for user root
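
Having stopped and disabled each unit, the playbook now deletes the unit files themselves with ansible.builtin.file state=absent: the target here, then each tripleo_nova_virt*.service file in the entries that follow. An ad-hoc equivalent for the first removal; once the last file is gone, a daemon-reload makes systemd forget the units entirely:

    ansible localhost -b -m ansible.builtin.file \
        -a 'path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent'
    sudo systemctl daemon-reload
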
Dec 05 09:28:31 np0005546420.localdomain sudo[161262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oekycutezsdtrnekvosmrbxozcihritv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926911.097689-330-227951225339444/AnsiballZ_file.py
Dec 05 09:28:31 np0005546420.localdomain sudo[161262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:32 np0005546420.localdomain python3.9[161264]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:32 np0005546420.localdomain sudo[161262]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:32 np0005546420.localdomain sudo[161354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmbtwlvcxvkwvhpfsrhzxlzzdcftlmsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926912.1475604-330-128895203965096/AnsiballZ_file.py
Dec 05 09:28:32 np0005546420.localdomain sudo[161354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:32 np0005546420.localdomain python3.9[161356]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:32 np0005546420.localdomain sudo[161354]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29270 DF PROTO=TCP SPT=41402 DPT=9882 SEQ=3517614446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABCBFD90000000001030307) 
Dec 05 09:28:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:28:33 np0005546420.localdomain systemd[1]: tmp-crun.Con5Vl.mount: Deactivated successfully.
Dec 05 09:28:33 np0005546420.localdomain podman[161416]: 2025-12-05 09:28:33.511935874 +0000 UTC m=+0.086226077 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:28:33 np0005546420.localdomain podman[161416]: 2025-12-05 09:28:33.547323021 +0000 UTC m=+0.121613234 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 09:28:33 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
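
The two podman entries above record one full healthcheck cycle for ovn_metadata_agent: systemd starts a transient unit running "podman healthcheck run <container-id>", podman logs a health_status event (here healthy) followed by exec_died when the check process exits, and the transient unit deactivates. A minimal Python sketch of driving the same check by hand; the container name is taken from the log, and the inspect format path is an assumption that may vary between podman versions (.State.Health vs the older .State.Healthcheck):

    import subprocess

    CONTAINER = "ovn_metadata_agent"  # container name from the log above

    # Trigger one healthcheck run, exactly what the transient systemd unit does.
    run = subprocess.run(["podman", "healthcheck", "run", CONTAINER])
    print("healthcheck exit code:", run.returncode)  # 0 == healthy

    # Read back the recorded status; the format path is an assumption and may
    # differ across podman versions (.State.Health vs .State.Healthcheck).
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", CONTAINER],
        capture_output=True, text=True,
    )
    print("recorded status:", out.stdout.strip())
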
Dec 05 09:28:33 np0005546420.localdomain sudo[161465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wtegcintrgnlmycwjxjrvxzcqlzuczjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926912.751339-330-57645219217947/AnsiballZ_file.py
Dec 05 09:28:33 np0005546420.localdomain sudo[161465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:33 np0005546420.localdomain python3.9[161467]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:33 np0005546420.localdomain sudo[161465]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:34 np0005546420.localdomain sudo[161557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yolmbcuzkskohehmbwlmwbwzisjlwlyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926913.964171-330-171099141642537/AnsiballZ_file.py
Dec 05 09:28:34 np0005546420.localdomain sudo[161557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23538 DF PROTO=TCP SPT=50974 DPT=9105 SEQ=723746591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABCC5DA0000000001030307) 
Dec 05 09:28:34 np0005546420.localdomain python3.9[161559]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:34 np0005546420.localdomain sudo[161557]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:34 np0005546420.localdomain sudo[161649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-syrepfotgdtfqywaczlixkjcurpiuchc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926914.5387654-330-92388265412343/AnsiballZ_file.py
Dec 05 09:28:34 np0005546420.localdomain sudo[161649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:35 np0005546420.localdomain python3.9[161651]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:35 np0005546420.localdomain sudo[161649]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:35 np0005546420.localdomain sudo[161741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jzbjikiwdqtvcozypmwxhzgyzvdnprpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926915.1298146-330-108728900530277/AnsiballZ_file.py
Dec 05 09:28:35 np0005546420.localdomain sudo[161741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:35 np0005546420.localdomain python3.9[161743]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:35 np0005546420.localdomain sudo[161741]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:36 np0005546420.localdomain sudo[161833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ockbxjlxjvuvztopnzmlqgxumekqcjys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926915.7650046-480-194985228596741/AnsiballZ_file.py
Dec 05 09:28:36 np0005546420.localdomain sudo[161833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:36 np0005546420.localdomain python3.9[161835]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:36 np0005546420.localdomain sudo[161833]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:36 np0005546420.localdomain sudo[161925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrpqmqabmvxcicwujpckwtgjlgfjvbxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926916.357778-480-78977275469993/AnsiballZ_file.py
Dec 05 09:28:36 np0005546420.localdomain sudo[161925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26347 DF PROTO=TCP SPT=39096 DPT=9101 SEQ=1311005087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABCCFD90000000001030307) 
Dec 05 09:28:36 np0005546420.localdomain python3.9[161927]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:36 np0005546420.localdomain sudo[161925]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:37 np0005546420.localdomain sudo[162017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iitcztwtoyhnzrlgvmrrvaztlceuadff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926916.9388704-480-144803593887710/AnsiballZ_file.py
Dec 05 09:28:37 np0005546420.localdomain sudo[162017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:37 np0005546420.localdomain python3.9[162019]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:37 np0005546420.localdomain sudo[162017]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:37 np0005546420.localdomain sudo[162109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arffvvhwxrnoafliumagxbvtbphswnkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926917.5095532-480-79978993545842/AnsiballZ_file.py
Dec 05 09:28:37 np0005546420.localdomain sudo[162109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:37 np0005546420.localdomain python3.9[162111]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:37 np0005546420.localdomain sudo[162109]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:38 np0005546420.localdomain sudo[162201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fcpdogsykeezrscthvfwzagrexdztwta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926918.0547676-480-262589832278571/AnsiballZ_file.py
Dec 05 09:28:38 np0005546420.localdomain sudo[162201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:38 np0005546420.localdomain python3.9[162203]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:38 np0005546420.localdomain sudo[162201]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:39 np0005546420.localdomain sudo[162293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vikwpskqdeirlslbdkgrqrzzktlrpxus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926918.71293-480-67113710927623/AnsiballZ_file.py
Dec 05 09:28:39 np0005546420.localdomain sudo[162293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:39 np0005546420.localdomain python3.9[162295]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:39 np0005546420.localdomain sudo[162293]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:39 np0005546420.localdomain sudo[162385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntnltgqkdthajidryuryzmxehlmomnwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926919.3643808-480-238714029540901/AnsiballZ_file.py
Dec 05 09:28:39 np0005546420.localdomain sudo[162385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:39 np0005546420.localdomain python3.9[162387]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:28:39 np0005546420.localdomain sudo[162385]: pam_unix(sudo:session): session closed for user root
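
The run of tasks above removes the legacy tripleo_nova_* unit files one ansible.builtin.file task per path, first under /usr/lib/systemd/system and then under /etc/systemd/system. With state=absent on a plain file this reduces to an idempotent unlink; a compact Python sketch of the same cleanup (the unit list is copied from the log, and unioning both directories is a simplification of the per-directory lists the play actually walks):

    from pathlib import Path

    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]

    for base in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        for unit in UNITS:
            # Mirrors ansible.builtin.file with state=absent: remove the file
            # if present, succeed quietly if it is already gone.
            Path(base, unit).unlink(missing_ok=True)
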
Dec 05 09:28:40 np0005546420.localdomain sudo[162477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhvtvmyjelukxemlxujjaltmcijciatc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926920.1831825-633-17947405815426/AnsiballZ_command.py
Dec 05 09:28:40 np0005546420.localdomain sudo[162477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:40 np0005546420.localdomain python3.9[162479]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:28:40 np0005546420.localdomain sudo[162477]: pam_unix(sudo:session): session closed for user root
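
The shell snippet logged above disables certmonger only when it is currently active, then masks it unless a local unit file already shadows the vendor one. The same logic as a Python sketch (unit name from the log):

    import os
    import subprocess

    UNIT = "certmonger.service"

    # systemctl is-active exits 0 only when the unit is active.
    if subprocess.run(["systemctl", "is-active", UNIT]).returncode == 0:
        subprocess.run(["systemctl", "disable", "--now", UNIT], check=True)
        # Mirrors 'test -f ... || systemctl mask ...': mask only when no
        # local unit file exists under /etc/systemd/system.
        if not os.path.isfile(f"/etc/systemd/system/{UNIT}"):
            subprocess.run(["systemctl", "mask", UNIT], check=True)
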
Dec 05 09:28:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9257 DF PROTO=TCP SPT=33912 DPT=9100 SEQ=3622678297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABCDF590000000001030307) 
Dec 05 09:28:41 np0005546420.localdomain python3.9[162571]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:28:42 np0005546420.localdomain sudo[162661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mwciyqgthvevrglyzipkivwbpmwtuojn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926922.26153-687-93990758378646/AnsiballZ_systemd_service.py
Dec 05 09:28:42 np0005546420.localdomain sudo[162661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9258 DF PROTO=TCP SPT=33912 DPT=9100 SEQ=3622678297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABCE75A0000000001030307) 
Dec 05 09:28:42 np0005546420.localdomain python3.9[162663]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:28:42 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:28:42 np0005546420.localdomain systemd-rc-local-generator[162684]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:28:42 np0005546420.localdomain systemd-sysv-generator[162689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:28:43 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:28:43 np0005546420.localdomain sudo[162661]: pam_unix(sudo:session): session closed for user root
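
The systemd_service task above carries only daemon_reload=True, so it re-runs all generators and re-reads unit files; the rc-local and sysv-generator warnings that follow are emitted on every reload of this host and are unrelated to the play. The equivalent one-liner, sketched in Python for consistency with the other examples:

    import subprocess

    # Equivalent of ansible.builtin.systemd_service with daemon_reload=True.
    subprocess.run(["systemctl", "daemon-reload"], check=True)
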
Dec 05 09:28:43 np0005546420.localdomain sudo[162788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-klwzzfergpfglvbnvrozqxyixevrhrsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926923.3643587-711-271227876894762/AnsiballZ_command.py
Dec 05 09:28:43 np0005546420.localdomain sudo[162788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:43 np0005546420.localdomain python3.9[162790]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:28:43 np0005546420.localdomain sudo[162788]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:44 np0005546420.localdomain sudo[162881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bsfiaphsccbqrbtpdhepbjjwgpmbknlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926923.924702-711-175505243704875/AnsiballZ_command.py
Dec 05 09:28:44 np0005546420.localdomain sudo[162881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:44 np0005546420.localdomain python3.9[162883]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:28:45 np0005546420.localdomain sudo[162881]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:45 np0005546420.localdomain sudo[162974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mudbqrjqarzwsamqkggbtxurwzhrtsmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926925.5068805-711-237830876854166/AnsiballZ_command.py
Dec 05 09:28:45 np0005546420.localdomain sudo[162974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:45 np0005546420.localdomain python3.9[162976]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:28:45 np0005546420.localdomain sudo[162974]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:46 np0005546420.localdomain sudo[163067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbatqyidfdynnkwigyjfmrkarkucmcdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926926.0931218-711-45351588669854/AnsiballZ_command.py
Dec 05 09:28:46 np0005546420.localdomain sudo[163067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:46 np0005546420.localdomain python3.9[163069]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:28:46 np0005546420.localdomain sudo[163067]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9259 DF PROTO=TCP SPT=33912 DPT=9100 SEQ=3622678297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABCF7190000000001030307) 
Dec 05 09:28:46 np0005546420.localdomain sudo[163160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hxngvvqhmjdeaanwhtbtoubblcnsobxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926926.6371956-711-138735676126388/AnsiballZ_command.py
Dec 05 09:28:46 np0005546420.localdomain sudo[163160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:47 np0005546420.localdomain python3.9[163162]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:28:47 np0005546420.localdomain sudo[163160]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:47 np0005546420.localdomain sudo[163253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eziyrgeneeqzeiuohiegmxsqsorcieyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926927.198012-711-32576175575418/AnsiballZ_command.py
Dec 05 09:28:47 np0005546420.localdomain sudo[163253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:47 np0005546420.localdomain python3.9[163255]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:28:47 np0005546420.localdomain sudo[163253]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:47 np0005546420.localdomain sudo[163346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zxvshefhvptdomuzwcmxcnuzllqxfvbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926927.7559495-711-279347377157439/AnsiballZ_command.py
Dec 05 09:28:47 np0005546420.localdomain sudo[163346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:48 np0005546420.localdomain python3.9[163348]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:28:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2687 DF PROTO=TCP SPT=32996 DPT=9105 SEQ=2529182534 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABCFF000000000001030307) 
Dec 05 09:28:49 np0005546420.localdomain sudo[163346]: pam_unix(sudo:session): session closed for user root
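
With the unit files deleted, the play clears systemd's in-memory bookkeeping for each removed unit via systemctl reset-failed, one command task per unit. The same loop as a short sketch (unit list copied from the log):

    import subprocess

    for unit in (
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ):
        # Drops the unit's remembered failed state so a later
        # redeploy starts clean.
        subprocess.run(["/usr/bin/systemctl", "reset-failed", unit])
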
Dec 05 09:28:50 np0005546420.localdomain sudo[163439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpdjmodmavfdswaqgnrrvetnjvzfekeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926929.7772121-873-98265045919553/AnsiballZ_getent.py
Dec 05 09:28:50 np0005546420.localdomain sudo[163439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:50 np0005546420.localdomain python3.9[163441]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 05 09:28:50 np0005546420.localdomain sudo[163439]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:51 np0005546420.localdomain sudo[163532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxmrfotpzzweemfzbhhayeywnttgacof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926930.5422482-897-256532096720219/AnsiballZ_group.py
Dec 05 09:28:51 np0005546420.localdomain sudo[163532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:51 np0005546420.localdomain python3.9[163534]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 09:28:51 np0005546420.localdomain groupadd[163535]: group added to /etc/group: name=libvirt, GID=42473
Dec 05 09:28:51 np0005546420.localdomain groupadd[163535]: group added to /etc/gshadow: name=libvirt
Dec 05 09:28:51 np0005546420.localdomain groupadd[163535]: new group: name=libvirt, GID=42473
Dec 05 09:28:51 np0005546420.localdomain sudo[163532]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2689 DF PROTO=TCP SPT=32996 DPT=9105 SEQ=2529182534 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD0B190000000001030307) 
Dec 05 09:28:53 np0005546420.localdomain sudo[163630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pldppxfkkoojshygvtcfsgaxcjpsvdeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926932.2051897-921-188370711893181/AnsiballZ_user.py
Dec 05 09:28:53 np0005546420.localdomain sudo[163630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:53 np0005546420.localdomain python3.9[163632]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546420.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 09:28:53 np0005546420.localdomain useradd[163634]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 05 09:28:53 np0005546420.localdomain sudo[163630]: pam_unix(sudo:session): session closed for user root
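
The getent/group/user trio above first looks up an existing libvirt account, then creates the group and user with UID/GID pinned to 42473, a /sbin/nologin shell, and the comment "libvirt user". A sketch using the same shadow-utils commands that appear in the groupadd/useradd audit lines (all values copied from the log):

    import subprocess

    UID = GID = "42473"

    # ansible.builtin.getent: does the account already exist?
    exists = subprocess.run(["getent", "passwd", "libvirt"]).returncode == 0

    if not exists:
        subprocess.run(["groupadd", "-g", GID, "libvirt"], check=True)
        # -m creates /home/libvirt, matching the useradd audit line above.
        subprocess.run(
            ["useradd", "-m", "-u", UID, "-g", "libvirt",
             "-c", "libvirt user", "-s", "/sbin/nologin", "libvirt"],
            check=True,
        )
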
Dec 05 09:28:54 np0005546420.localdomain sudo[163730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hodrexqwmooexnzocncmiccvajqbiljt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926933.9029686-954-125612309051706/AnsiballZ_setup.py
Dec 05 09:28:54 np0005546420.localdomain sudo[163730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:54 np0005546420.localdomain python3.9[163732]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:28:54 np0005546420.localdomain sudo[163730]: pam_unix(sudo:session): session closed for user root
Dec 05 09:28:55 np0005546420.localdomain sudo[163784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uomjasuqimzninxfirdscwwbwhsgqgjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764926933.9029686-954-125612309051706/AnsiballZ_dnf.py
Dec 05 09:28:55 np0005546420.localdomain sudo[163784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:28:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9260 DF PROTO=TCP SPT=33912 DPT=9100 SEQ=3622678297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD17DA0000000001030307) 
Dec 05 09:28:55 np0005546420.localdomain python3.9[163786]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
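
The dnf task installs the node's virtualization stack (libvirt and its daemons, qemu-kvm, swtpm, edk2-ovmf, ceph-common, and friends). Note that the first four names in the logged list carry trailing spaces ('libvirt ', 'libvirt-admin ', ...), reproduced verbatim from the play's variables. A minimal sketch of the same install with the names stripped; state=present with the logged defaults reduces to a plain install:

    import subprocess

    PACKAGES = [
        "libvirt", "libvirt-admin", "libvirt-client", "libvirt-daemon",
        "qemu-kvm", "qemu-img", "libguestfs", "libseccomp",
        "swtpm", "swtpm-tools", "edk2-ovmf", "ceph-common",
        "cyrus-sasl-scram",
    ]

    # -y answers the transaction prompt non-interactively.
    subprocess.run(["dnf", "install", "-y", *PACKAGES], check=True)
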
Dec 05 09:28:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:28:57 np0005546420.localdomain podman[163789]: 2025-12-05 09:28:57.51427691 +0000 UTC m=+0.081964025 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 09:28:57 np0005546420.localdomain podman[163789]: 2025-12-05 09:28:57.570442212 +0000 UTC m=+0.138129427 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 09:28:57 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:29:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34053 DF PROTO=TCP SPT=56690 DPT=9102 SEQ=890873848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD2C9A0000000001030307) 
Dec 05 09:29:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49720 DF PROTO=TCP SPT=49720 DPT=9882 SEQ=267739323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD33DA0000000001030307) 
Dec 05 09:29:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:29:04.073 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:29:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:29:04.074 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:29:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:29:04.074 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
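
The three ovn_metadata_agent lines above are oslo.concurrency's standard DEBUG trio (Acquiring / acquired / released) around neutron's periodic _check_child_processes; at this log level every ProcessMonitor pass produces them, and the sub-millisecond hold time indicates health rather than contention. For reference, a sketch of the usual pattern that generates such entries, not neutron's actual code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # The body runs with the named lock held; lockutils logs the
        # "Acquiring" / "acquired" / "released" trio at DEBUG level.
        pass
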
Dec 05 09:29:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:29:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2691 DF PROTO=TCP SPT=32996 DPT=9105 SEQ=2529182534 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD3BDB0000000001030307) 
Dec 05 09:29:04 np0005546420.localdomain podman[163883]: 2025-12-05 09:29:04.524317239 +0000 UTC m=+0.102144610 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true)
Dec 05 09:29:04 np0005546420.localdomain podman[163883]: 2025-12-05 09:29:04.559402018 +0000 UTC m=+0.137229359 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 05 09:29:04 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:29:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61841 DF PROTO=TCP SPT=52914 DPT=9101 SEQ=2995973099 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD45D90000000001030307) 
Dec 05 09:29:11 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9586 DF PROTO=TCP SPT=45968 DPT=9100 SEQ=3483926932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD54590000000001030307) 
Dec 05 09:29:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9587 DF PROTO=TCP SPT=45968 DPT=9100 SEQ=3483926932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD5C5A0000000001030307) 
Dec 05 09:29:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9588 DF PROTO=TCP SPT=45968 DPT=9100 SEQ=3483926932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD6C190000000001030307) 
Dec 05 09:29:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49721 DF PROTO=TCP SPT=49720 DPT=9882 SEQ=267739323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD73D90000000001030307) 
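
The recurring kernel DROPPING: entries read like a netfilter LOG rule (the prefix is whatever --log-prefix was configured with, so the rule itself is an assumption) recording dropped inbound SYNs on br-ex from 192.168.122.10 toward ports conventionally used by Prometheus exporters (9100-9105, 9882); identical ID/SEQ values across entries show the peer retransmitting the same SYN. A small parser for tallying which destination ports are being probed; journal.txt is a hypothetical export of this log:

    import re
    from collections import Counter

    FIELD = re.compile(r"(\w+)=(\S+)")

    def parse_drop(line: str) -> dict:
        # Extract KEY=VALUE pairs from everything after "DROPPING:".
        return dict(FIELD.findall(line.split("DROPPING:", 1)[1]))

    counts = Counter()
    with open("journal.txt") as fh:  # hypothetical export of this journal
        for line in fh:
            if "DROPPING:" in line:
                counts[parse_drop(line).get("DPT")] += 1
    print(counts.most_common())
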
Dec 05 09:29:21 np0005546420.localdomain kernel: SELinux:  Converting 2747 SID table entries...
Dec 05 09:29:21 np0005546420.localdomain kernel: SELinux:  Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped).
Dec 05 09:29:21 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:29:21 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 09:29:21 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:29:21 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:29:21 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:29:21 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:29:21 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
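
Each SELinux block like the one above is the kernel re-announcing its policy capabilities after a policy load (the nearby dbus-broker-launch "avc: op=load_policy" lines mark the same events); this first block additionally reports that the insights_client_cache_t context became invalid, meaning the newly loaded policy no longer defines it. The live capability values can be read from sysfs; the path below is believed standard on modern kernels but treat it as an assumption:

    from pathlib import Path

    # Assumed sysfs layout for SELinux policy capabilities.
    CAPS = Path("/sys/fs/selinux/policy_capabilities")

    # Mirrors the "SELinux:  policy capability X=N" kernel lines.
    for cap in sorted(CAPS.iterdir()):
        print(f"{cap.name}={cap.read_text().strip()}")
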
Dec 05 09:29:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21041 DF PROTO=TCP SPT=55156 DPT=9105 SEQ=4217955074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD80190000000001030307) 
Dec 05 09:29:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9589 DF PROTO=TCP SPT=45968 DPT=9100 SEQ=3483926932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABD8BD90000000001030307) 
Dec 05 09:29:27 np0005546420.localdomain sudo[164917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:29:27 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=19 res=1
Dec 05 09:29:27 np0005546420.localdomain sudo[164917]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:29:27 np0005546420.localdomain sudo[164917]: pam_unix(sudo:session): session closed for user root
Dec 05 09:29:27 np0005546420.localdomain sudo[164935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:29:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:29:27 np0005546420.localdomain sudo[164935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:29:27 np0005546420.localdomain podman[164952]: 2025-12-05 09:29:27.793026601 +0000 UTC m=+0.083463311 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:29:27 np0005546420.localdomain podman[164952]: 2025-12-05 09:29:27.831596028 +0000 UTC m=+0.122032798 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:29:27 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:29:28 np0005546420.localdomain sudo[164935]: pam_unix(sudo:session): session closed for user root
Dec 05 09:29:28 np0005546420.localdomain sudo[165008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:29:28 np0005546420.localdomain sudo[165008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:29:28 np0005546420.localdomain sudo[165008]: pam_unix(sudo:session): session closed for user root
Dec 05 09:29:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55769 DF PROTO=TCP SPT=38502 DPT=9102 SEQ=845480029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABDA1D90000000001030307) 
Dec 05 09:29:31 np0005546420.localdomain kernel: SELinux:  Converting 2750 SID table entries...
Dec 05 09:29:31 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:29:31 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 09:29:31 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:29:31 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:29:31 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:29:31 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:29:31 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:29:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1753 DF PROTO=TCP SPT=49476 DPT=9882 SEQ=843353175 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABDA9D90000000001030307) 
Dec 05 09:29:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21043 DF PROTO=TCP SPT=55156 DPT=9105 SEQ=4217955074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABDAFDA0000000001030307) 
Dec 05 09:29:35 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=20 res=1
Dec 05 09:29:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:29:35 np0005546420.localdomain systemd[1]: tmp-crun.vBrkyo.mount: Deactivated successfully.
Dec 05 09:29:35 np0005546420.localdomain podman[165034]: 2025-12-05 09:29:35.573476758 +0000 UTC m=+0.140689202 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Dec 05 09:29:35 np0005546420.localdomain podman[165034]: 2025-12-05 09:29:35.582228214 +0000 UTC m=+0.149440628 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 09:29:35 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:29:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21412 DF PROTO=TCP SPT=47558 DPT=9101 SEQ=4047068564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABDBBD90000000001030307) 
Dec 05 09:29:40 np0005546420.localdomain kernel: SELinux:  Converting 2750 SID table entries...
Dec 05 09:29:40 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:29:40 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 09:29:40 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:29:40 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:29:40 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:29:40 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:29:40 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:29:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34250 DF PROTO=TCP SPT=46720 DPT=9100 SEQ=2577080724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABDC9990000000001030307) 
Dec 05 09:29:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34251 DF PROTO=TCP SPT=46720 DPT=9100 SEQ=2577080724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABDD1990000000001030307) 
Dec 05 09:29:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34252 DF PROTO=TCP SPT=46720 DPT=9100 SEQ=2577080724 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABDE1590000000001030307) 
Dec 05 09:29:48 np0005546420.localdomain kernel: SELinux:  Converting 2750 SID table entries...
Dec 05 09:29:48 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:29:48 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 09:29:48 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:29:48 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:29:48 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:29:48 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:29:48 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:29:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55538 DF PROTO=TCP SPT=46272 DPT=9105 SEQ=2466656992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABDE9610000000001030307) 
Dec 05 09:29:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55540 DF PROTO=TCP SPT=46272 DPT=9105 SEQ=2466656992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABDF5590000000001030307) 
Dec 05 09:29:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55771 DF PROTO=TCP SPT=38502 DPT=9102 SEQ=845480029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE01D90000000001030307) 
Dec 05 09:29:58 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=22 res=1
Dec 05 09:29:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:29:58 np0005546420.localdomain systemd[1]: tmp-crun.3WGxM3.mount: Deactivated successfully.
Dec 05 09:29:58 np0005546420.localdomain podman[165071]: 2025-12-05 09:29:58.543260895 +0000 UTC m=+0.107952561 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec 05 09:29:58 np0005546420.localdomain podman[165071]: 2025-12-05 09:29:58.584353449 +0000 UTC m=+0.149045125 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 05 09:29:58 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
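
[annotation] This cycle (transient unit started, tmp-crun mount deactivated, health_status=healthy, exec_died, unit deactivated) repeats about every 30 s per container: systemd runs /usr/bin/podman healthcheck run <id>, which execs the test configured in the container's config_data label ('/openstack/healthcheck'). The state podman records can also be queried on demand; a hedged sketch, assuming the docker-compatible inspect layout:

    import json
    import subprocess

    def container_health(name: str) -> str:
        """Read the health state podman last recorded for a container."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        health = json.loads(out) or {}  # null if the container has no healthcheck
        return health.get("Status", "unknown")  # "healthy" in the lines above

    if __name__ == "__main__":
        for name in ("ovn_controller", "ovn_metadata_agent"):
            print(name, container_health(name))
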
Dec 05 09:29:59 np0005546420.localdomain kernel: SELinux:  Converting 2750 SID table entries...
Dec 05 09:29:59 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:29:59 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 09:29:59 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:29:59 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:29:59 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:29:59 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:29:59 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
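
[annotation] Each "Converting 2750 SID table entries" burst plus the seven capability lines is the kernel re-announcing its state after a policy load; the dbus-broker avc: op=load_policy seqno counter (22 through 25 within this minute) confirms four loads in quick succession, consistent with the package installs below pulling in policy modules. The same flags are readable from selinuxfs at runtime; a small sketch, assuming the standard mount point:

    from pathlib import Path

    # selinuxfs exposes one file per policy capability, holding "0" or "1";
    # the names match the kernel lines above (open_perms, cgroup_seclabel, ...).
    CAPS = Path("/sys/fs/selinux/policy_capabilities")

    def policy_capabilities() -> dict[str, bool]:
        return {p.name: p.read_text().strip() == "1" for p in sorted(CAPS.iterdir())}

    if __name__ == "__main__":
        for name, on in policy_capabilities().items():
            print(f"SELinux:  policy capability {name}={int(on)}")
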
Dec 05 09:30:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61645 DF PROTO=TCP SPT=50644 DPT=9102 SEQ=4181811588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE17190000000001030307) 
Dec 05 09:30:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14428 DF PROTO=TCP SPT=43654 DPT=9882 SEQ=1989384471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE1FD90000000001030307) 
Dec 05 09:30:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:30:04.074 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:30:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:30:04.076 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:30:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:30:04.076 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
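
[annotation] The acquire/acquired/released DEBUG trio is the standard trace emitted by oslo.concurrency's synchronized decorator around Neutron's ProcessMonitor._check_child_processes; note the 60 s cadence (09:30:04 here, 09:31:04 further down), which is the monitor's periodic child-process liveness check. A minimal reproduction of the primitive, hedged (the real call sits inside the agent, and the trace only appears with DEBUG logging configured):

    from oslo_concurrency import lockutils

    # lockutils.synchronized's inner wrapper logs the "Acquiring lock",
    # "acquired" and "released" lines seen above when DEBUG logging is on.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes() -> None:
        pass  # ProcessMonitor would respawn any dead child processes here

    if __name__ == "__main__":
        check_child_processes()
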
Dec 05 09:30:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55542 DF PROTO=TCP SPT=46272 DPT=9105 SEQ=2466656992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE25DA0000000001030307) 
Dec 05 09:30:06 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=23 res=1
Dec 05 09:30:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:30:06 np0005546420.localdomain podman[165106]: 2025-12-05 09:30:06.602619265 +0000 UTC m=+0.164093369 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 05 09:30:06 np0005546420.localdomain podman[165106]: 2025-12-05 09:30:06.634946702 +0000 UTC m=+0.196420806 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 09:30:06 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:30:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22868 DF PROTO=TCP SPT=42170 DPT=9101 SEQ=3688426324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE2FD90000000001030307) 
Dec 05 09:30:07 np0005546420.localdomain kernel: SELinux:  Converting 2750 SID table entries...
Dec 05 09:30:07 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:30:07 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 09:30:07 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:30:07 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:30:07 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:30:07 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:30:07 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:30:08 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:30:08 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=24 res=1
Dec 05 09:30:08 np0005546420.localdomain systemd-rc-local-generator[165161]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:30:08 np0005546420.localdomain systemd-sysv-generator[165164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:30:08 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:30:08 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:30:08 np0005546420.localdomain systemd-rc-local-generator[165198]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:30:08 np0005546420.localdomain systemd-sysv-generator[165201]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:30:08 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
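
[annotation] Two items recur in every one of these Reloading passes: the SysV compatibility shim generated for /etc/rc.d/init.d/network, and insights-client-boot.service's use of the deprecated cgroup-v1-era MemoryLimit= directive. The warning is cosmetic until support is removed; the forward-compatible fix is a drop-in that clears the old key and sets MemoryMax=. A hedged sketch that writes such an override (the 10-memorymax.conf name and the 1G value are placeholders, not taken from the unit):

    from pathlib import Path

    # Hypothetical drop-in; pick a MemoryMax= matching whatever limit the
    # packaged unit intended with its MemoryLimit= setting.
    dropin = Path("/etc/systemd/system/insights-client-boot.service.d/10-memorymax.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text(
        "[Service]\n"
        "MemoryLimit=\n"   # an empty assignment clears the deprecated directive
        "MemoryMax=1G\n"   # placeholder value
    )
    # Then: systemctl daemon-reload
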
Dec 05 09:30:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38312 DF PROTO=TCP SPT=60716 DPT=9100 SEQ=4095330527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE3ED90000000001030307) 
Dec 05 09:30:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38313 DF PROTO=TCP SPT=60716 DPT=9100 SEQ=4095330527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE46D90000000001030307) 
Dec 05 09:30:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38314 DF PROTO=TCP SPT=60716 DPT=9100 SEQ=4095330527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE56990000000001030307) 
Dec 05 09:30:17 np0005546420.localdomain kernel: SELinux:  Converting 2751 SID table entries...
Dec 05 09:30:17 np0005546420.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 09:30:17 np0005546420.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 05 09:30:17 np0005546420.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 09:30:17 np0005546420.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 05 09:30:17 np0005546420.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 09:30:17 np0005546420.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 09:30:17 np0005546420.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 09:30:18 np0005546420.localdomain groupadd[165224]: group added to /etc/group: name=clevis, GID=985
Dec 05 09:30:18 np0005546420.localdomain groupadd[165224]: group added to /etc/gshadow: name=clevis
Dec 05 09:30:18 np0005546420.localdomain groupadd[165224]: new group: name=clevis, GID=985
Dec 05 09:30:18 np0005546420.localdomain useradd[165231]: new user: name=clevis, UID=985, GID=985, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 05 09:30:18 np0005546420.localdomain usermod[165241]: add 'clevis' to group 'tss'
Dec 05 09:30:18 np0005546420.localdomain usermod[165241]: add 'clevis' to shadow group 'tss'
Dec 05 09:30:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40425 DF PROTO=TCP SPT=54030 DPT=9105 SEQ=781809347 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE5E900000000001030307) 
Dec 05 09:30:21 np0005546420.localdomain groupadd[165263]: group added to /etc/group: name=dnsmasq, GID=984
Dec 05 09:30:21 np0005546420.localdomain groupadd[165263]: group added to /etc/gshadow: name=dnsmasq
Dec 05 09:30:21 np0005546420.localdomain groupadd[165263]: new group: name=dnsmasq, GID=984
Dec 05 09:30:21 np0005546420.localdomain useradd[165270]: new user: name=dnsmasq, UID=984, GID=984, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 05 09:30:21 np0005546420.localdomain dbus-broker-launch[744]: Noticed file-system modification, trigger reload.
Dec 05 09:30:21 np0005546420.localdomain dbus-broker-launch[750]: avc:  op=load_policy lsm=selinux seqno=25 res=1
Dec 05 09:30:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40427 DF PROTO=TCP SPT=54030 DPT=9105 SEQ=781809347 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE6A9A0000000001030307) 
Dec 05 09:30:23 np0005546420.localdomain polkitd[1032]: Reloading rules
Dec 05 09:30:23 np0005546420.localdomain polkitd[1032]: Collecting garbage unconditionally...
Dec 05 09:30:23 np0005546420.localdomain polkitd[1032]: Loading rules from directory /etc/polkit-1/rules.d
Dec 05 09:30:23 np0005546420.localdomain polkitd[1032]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 05 09:30:23 np0005546420.localdomain polkitd[1032]: Finished loading, compiling and executing 5 rules
Dec 05 09:30:23 np0005546420.localdomain polkitd[1032]: Reloading rules
Dec 05 09:30:23 np0005546420.localdomain polkitd[1032]: Collecting garbage unconditionally...
Dec 05 09:30:23 np0005546420.localdomain polkitd[1032]: Loading rules from directory /etc/polkit-1/rules.d
Dec 05 09:30:23 np0005546420.localdomain polkitd[1032]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 05 09:30:23 np0005546420.localdomain polkitd[1032]: Finished loading, compiling and executing 5 rules
Dec 05 09:30:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38315 DF PROTO=TCP SPT=60716 DPT=9100 SEQ=4095330527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE77D90000000001030307) 
Dec 05 09:30:29 np0005546420.localdomain sudo[165452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:30:29 np0005546420.localdomain sudo[165452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:30:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:30:29 np0005546420.localdomain sudo[165452]: pam_unix(sudo:session): session closed for user root
Dec 05 09:30:29 np0005546420.localdomain sudo[165476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:30:29 np0005546420.localdomain sudo[165476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:30:29 np0005546420.localdomain systemd[1]: tmp-crun.xQbZWJ.mount: Deactivated successfully.
Dec 05 09:30:29 np0005546420.localdomain podman[165470]: 2025-12-05 09:30:29.455264262 +0000 UTC m=+0.319530584 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:30:29 np0005546420.localdomain podman[165470]: 2025-12-05 09:30:29.485000039 +0000 UTC m=+0.349266311 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:30:29 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:30:29 np0005546420.localdomain sudo[165476]: pam_unix(sudo:session): session closed for user root
Dec 05 09:30:30 np0005546420.localdomain sudo[165542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:30:30 np0005546420.localdomain sudo[165542]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:30:30 np0005546420.localdomain sudo[165542]: pam_unix(sudo:session): session closed for user root
Dec 05 09:30:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57219 DF PROTO=TCP SPT=44440 DPT=9102 SEQ=1955314604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE8C190000000001030307) 
Dec 05 09:30:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1965 DF PROTO=TCP SPT=56942 DPT=9882 SEQ=19790414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE93D90000000001030307) 
Dec 05 09:30:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40429 DF PROTO=TCP SPT=54030 DPT=9105 SEQ=781809347 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABE99D90000000001030307) 
Dec 05 09:30:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13053 DF PROTO=TCP SPT=46814 DPT=9101 SEQ=3291813677 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABEA5D90000000001030307) 
Dec 05 09:30:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:30:37 np0005546420.localdomain podman[166521]: 2025-12-05 09:30:37.504568596 +0000 UTC m=+0.080734714 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 09:30:37 np0005546420.localdomain podman[166521]: 2025-12-05 09:30:37.541328793 +0000 UTC m=+0.117494861 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 09:30:37 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:30:38 np0005546420.localdomain sshd[167528]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:30:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36477 DF PROTO=TCP SPT=45354 DPT=9100 SEQ=1410424965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABEB4190000000001030307) 
Dec 05 09:30:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36478 DF PROTO=TCP SPT=45354 DPT=9100 SEQ=1410424965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABEBC190000000001030307) 
Dec 05 09:30:43 np0005546420.localdomain sshd[167528]: Connection reset by authenticating user root 91.202.233.33 port 60666 [preauth]
Dec 05 09:30:44 np0005546420.localdomain sshd[172060]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:30:45 np0005546420.localdomain sshd[172060]: Invalid user ubuntu from 91.202.233.33 port 63306
Dec 05 09:30:45 np0005546420.localdomain sshd[172060]: Connection reset by invalid user ubuntu 91.202.233.33 port 63306 [preauth]
Dec 05 09:30:46 np0005546420.localdomain sshd[173887]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:30:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36479 DF PROTO=TCP SPT=45354 DPT=9100 SEQ=1410424965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABECBD90000000001030307) 
Dec 05 09:30:48 np0005546420.localdomain sshd[173887]: Connection reset by authenticating user root 91.202.233.33 port 63318 [preauth]
Dec 05 09:30:48 np0005546420.localdomain sshd[175710]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:30:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56260 DF PROTO=TCP SPT=43418 DPT=9105 SEQ=2297619939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABED3C10000000001030307) 
Dec 05 09:30:51 np0005546420.localdomain sshd[175710]: Connection reset by authenticating user root 91.202.233.33 port 63326 [preauth]
Dec 05 09:30:51 np0005546420.localdomain sshd[178157]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:30:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56262 DF PROTO=TCP SPT=43418 DPT=9105 SEQ=2297619939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABEDFDA0000000001030307) 
Dec 05 09:30:53 np0005546420.localdomain sshd[178157]: Connection reset by authenticating user root 91.202.233.33 port 57180 [preauth]
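
[annotation] Interleaved with the firewall noise, 91.202.233.33 works through a routine SSH brute-force pattern: root, an invalid "ubuntu" user, then root three more times, each connection reset at [preauth] before authentication completes (the "ssh-rsa algorithm is disabled" lines are just sshd noting the deprecated signature algorithm at connection setup). A quick triage sketch that tallies such sources from journal text; the regexes are mine and deliberately loose:

    import re
    import sys
    from collections import Counter

    # Matches the two sshd line shapes above: preauth connection resets and
    # invalid-user reports, capturing the client IPv4 address.
    PAT = re.compile(
        r"sshd\[\d+\]: (?:Connection reset by (?:invalid user |authenticating user )?\S+"
        r"|Invalid user \S+ from) (\d{1,3}(?:\.\d{1,3}){3}) port \d+"
    )

    def offenders(lines):
        return Counter(m.group(1) for line in lines if (m := PAT.search(line)))

    if __name__ == "__main__":
        # e.g.: journalctl -u sshd | python3 ssh_offenders.py
        for ip, count in offenders(sys.stdin).most_common(10):
            print(count, ip)
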
Dec 05 09:30:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36480 DF PROTO=TCP SPT=45354 DPT=9100 SEQ=1410424965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABEEBD90000000001030307) 
Dec 05 09:30:58 np0005546420.localdomain groupadd[182382]: group added to /etc/group: name=ceph, GID=167
Dec 05 09:30:58 np0005546420.localdomain groupadd[182382]: group added to /etc/gshadow: name=ceph
Dec 05 09:30:58 np0005546420.localdomain groupadd[182382]: new group: name=ceph, GID=167
Dec 05 09:30:58 np0005546420.localdomain useradd[182388]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 05 09:31:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:31:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59874 DF PROTO=TCP SPT=53094 DPT=9102 SEQ=3525159368 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF01590000000001030307) 
Dec 05 09:31:00 np0005546420.localdomain systemd[1]: tmp-crun.1Fy0Z1.mount: Deactivated successfully.
Dec 05 09:31:00 np0005546420.localdomain podman[182488]: 2025-12-05 09:31:00.917093806 +0000 UTC m=+0.486493307 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 09:31:00 np0005546420.localdomain podman[182488]: 2025-12-05 09:31:00.956421502 +0000 UTC m=+0.525821043 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:31:00 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:31:02 np0005546420.localdomain sshd[119494]: Received signal 15; terminating.
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: sshd.service: Consumed 1.620s CPU time, read 32.0K from disk, written 4.0K to disk.
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 05 09:31:02 np0005546420.localdomain sshd[183141]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:31:02 np0005546420.localdomain sshd[183141]: Server listening on 0.0.0.0 port 22.
Dec 05 09:31:02 np0005546420.localdomain sshd[183141]: Server listening on :: port 22.
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31106 DF PROTO=TCP SPT=60422 DPT=9882 SEQ=2253065704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF09D90000000001030307) 
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
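
[annotation] The flood of "Failed to parse service type, ignoring: notify-reload" lines is version skew, not a fault: the packaged libvirt modular-daemon units declare Type=notify-reload, which systemd only learned in v253, while RHEL 9.2 ships v252; the unknown type is ignored and the units fall back to plain notify behavior, so the only cost is this noise on every daemon-reload. A trivial check, hedged on the `systemctl --version` banner format:

    import re
    import subprocess

    def systemd_major() -> int:
        """Parse the major version from the first `systemctl --version` line,
        e.g. 'systemd 252 (252-13.el9_2)'."""
        banner = subprocess.run(
            ["systemctl", "--version"], capture_output=True, text=True, check=True
        ).stdout.splitlines()[0]
        return int(re.search(r"systemd (\d+)", banner).group(1))

    if __name__ == "__main__":
        v = systemd_major()
        # Type=notify-reload landed in systemd 253.
        print(f"systemd {v}: Type=notify-reload",
              "supported" if v >= 253 else "will be ignored")
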
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 05 09:31:03 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:31:04 np0005546420.localdomain systemd-rc-local-generator[183369]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:31:04 np0005546420.localdomain systemd-sysv-generator[183372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:31:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:31:04.075 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:31:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:31:04.076 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:31:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:31:04.076 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56264 DF PROTO=TCP SPT=43418 DPT=9105 SEQ=2297619939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF0FDA0000000001030307) 
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 09:31:04 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:31:04 np0005546420.localdomain auditd[708]: Error receiving audit netlink packet (No buffer space available)
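
[annotation] auditd losing a netlink packet to "No buffer space available" right here is no coincidence: the repeated policy loads and the mass reload/restart of marked units generate audit events faster than the kernel backlog can absorb. The usual mitigation is raising the backlog limit (the -b setting in audit rules, i.e. auditctl -b N). A hedged status reader, assuming the key/value-per-line output auditctl -s emits on EL9:

    import subprocess

    def audit_status() -> dict[str, str]:
        """Parse `auditctl -s` output (lines like 'backlog_limit 8192')."""
        out = subprocess.run(
            ["auditctl", "-s"], capture_output=True, text=True, check=True
        ).stdout
        return dict(line.split(None, 1) for line in out.splitlines() if " " in line)

    if __name__ == "__main__":
        s = audit_status()
        print("backlog_limit:", s.get("backlog_limit"), " lost:", s.get("lost"))
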
Dec 05 09:31:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33328 DF PROTO=TCP SPT=39970 DPT=9101 SEQ=2306940090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF19D90000000001030307) 
Dec 05 09:31:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:31:08 np0005546420.localdomain podman[189313]: 2025-12-05 09:31:08.070447583 +0000 UTC m=+0.148922827 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 05 09:31:08 np0005546420.localdomain podman[189313]: 2025-12-05 09:31:08.109274683 +0000 UTC m=+0.187749917 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:31:08 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:31:08 np0005546420.localdomain sudo[163784]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25020 DF PROTO=TCP SPT=41230 DPT=9100 SEQ=74649372 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF29190000000001030307) 
Dec 05 09:31:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25021 DF PROTO=TCP SPT=41230 DPT=9100 SEQ=74649372 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF311A0000000001030307) 
Dec 05 09:31:15 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 09:31:15 np0005546420.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 05 09:31:15 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Consumed 12.421s CPU time.
Dec 05 09:31:15 np0005546420.localdomain systemd[1]: run-rfb909c67606041a4877cc060c1dcea6a.service: Deactivated successfully.
Dec 05 09:31:15 np0005546420.localdomain systemd[1]: run-r14dad46203fd44caaf8653f8c5afdea0.service: Deactivated successfully.
Dec 05 09:31:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25022 DF PROTO=TCP SPT=41230 DPT=9100 SEQ=74649372 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF40D90000000001030307) 
Dec 05 09:31:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16275 DF PROTO=TCP SPT=55094 DPT=9105 SEQ=2396350752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF48F10000000001030307) 
Dec 05 09:31:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16277 DF PROTO=TCP SPT=55094 DPT=9105 SEQ=2396350752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF54D90000000001030307) 
Dec 05 09:31:23 np0005546420.localdomain sudo[192479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwxbkkvgvaonewekazdivxwnbsbbivmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927083.0444882-990-30562770890119/AnsiballZ_systemd.py
Dec 05 09:31:23 np0005546420.localdomain sudo[192479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:24 np0005546420.localdomain python3.9[192481]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:31:24 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:31:24 np0005546420.localdomain systemd-rc-local-generator[192509]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:31:24 np0005546420.localdomain systemd-sysv-generator[192513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:31:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:31:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:24 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:24 np0005546420.localdomain sudo[192479]: pam_unix(sudo:session): session closed for user root
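
[annotation] The zuul-driven Ansible run now retires the monolithic libvirt daemon: ansible.builtin.systemd with state=stopped, enabled=False, masked=True for libvirtd, then (below) the same for its -tcp and -tls sockets, leaving only the modular virt*d services whose units systemd has been parsing all along. Outside Ansible the same sequence is three systemctl calls; a minimal sketch:

    import subprocess

    def retire(unit: str) -> None:
        """stop + disable + mask, mirroring the Ansible module's parameters."""
        for verb in ("stop", "disable", "mask"):
            subprocess.run(["systemctl", verb, unit], check=True)

    if __name__ == "__main__":
        for unit in ("libvirtd.service", "libvirtd-tcp.socket", "libvirtd-tls.socket"):
            retire(unit)
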
Dec 05 09:31:24 np0005546420.localdomain sudo[192627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkflergnyqnpurlosgcknhrkfiwtihxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927084.504241-990-202089630655426/AnsiballZ_systemd.py
Dec 05 09:31:24 np0005546420.localdomain sudo[192627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59876 DF PROTO=TCP SPT=53094 DPT=9102 SEQ=3525159368 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF61D90000000001030307) 
Dec 05 09:31:25 np0005546420.localdomain python3.9[192629]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:31:25 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:31:25 np0005546420.localdomain systemd-rc-local-generator[192659]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:31:25 np0005546420.localdomain systemd-sysv-generator[192662]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:31:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:31:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:25 np0005546420.localdomain sudo[192627]: pam_unix(sudo:session): session closed for user root
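[annotation] The ansible.builtin.systemd invocation above (and the similar ones that follow for libvirtd-tls.socket, virtproxyd-tcp.socket, and virtproxyd-tls.socket) requests enabled=False, masked=True, state=stopped. Roughly the same effect from plain systemctl, with the caveat that the exact call order here is an assumption, not a transcript of what the module ran:

    import subprocess

    # Rough stand-in for ansible.builtin.systemd with state=stopped,
    # enabled=False, masked=True; the order of operations is assumed.
    def stop_disable_mask(unit: str) -> None:
        for verb in ("stop", "disable", "mask"):
            subprocess.run(["systemctl", verb, unit], check=True)

    stop_disable_mask("libvirtd-tcp.socket")  # unit named in the log

The later enabled=True, masked=False invocations are the inverse (unmask, then enable), and each change triggers the systemd[1]: Reloading. pass that re-prints the unit-file warnings.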
Dec 05 09:31:26 np0005546420.localdomain sudo[192776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhblumkxhgictmjlmqbqvqmbfozlkkgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927086.496345-990-272336491916491/AnsiballZ_systemd.py
Dec 05 09:31:26 np0005546420.localdomain sudo[192776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:27 np0005546420.localdomain python3.9[192778]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:31:27 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:31:27 np0005546420.localdomain systemd-rc-local-generator[192806]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:31:27 np0005546420.localdomain systemd-sysv-generator[192810]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:31:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:31:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:27 np0005546420.localdomain sudo[192776]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:27 np0005546420.localdomain sudo[192925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-antpxoxgtahoohyjdkzonszglueuthvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927087.7061691-990-247411295924419/AnsiballZ_systemd.py
Dec 05 09:31:27 np0005546420.localdomain sudo[192925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:28 np0005546420.localdomain python3.9[192927]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:31:28 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:31:28 np0005546420.localdomain systemd-rc-local-generator[192956]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:31:28 np0005546420.localdomain systemd-sysv-generator[192959]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:31:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:31:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:28 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:28 np0005546420.localdomain sudo[192925]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:29 np0005546420.localdomain sudo[193074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-raepbfxeipgjyqmlgcfsmhhxebonhhan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927088.9114463-1077-21565682704970/AnsiballZ_systemd.py
Dec 05 09:31:29 np0005546420.localdomain sudo[193074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:29 np0005546420.localdomain python3.9[193076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:29 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:31:29 np0005546420.localdomain systemd-sysv-generator[193108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:31:29 np0005546420.localdomain systemd-rc-local-generator[193103]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:31:29 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:29 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:31:29 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:29 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:29 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:29 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:29 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:29 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:29 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:29 np0005546420.localdomain sudo[193074]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:30 np0005546420.localdomain sudo[193223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vaetzxjwlnqrweeithkcsgipfkcuqytg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927089.9248765-1077-134173377581394/AnsiballZ_systemd.py
Dec 05 09:31:30 np0005546420.localdomain sudo[193223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:30 np0005546420.localdomain python3.9[193225]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:30 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:31:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63434 DF PROTO=TCP SPT=36616 DPT=9102 SEQ=3889258043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF76990000000001030307) 
Dec 05 09:31:30 np0005546420.localdomain systemd-rc-local-generator[193271]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:31:30 np0005546420.localdomain systemd-sysv-generator[193274]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:31:30 np0005546420.localdomain sudo[193230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:31:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:31:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:30 np0005546420.localdomain sudo[193230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:31:30 np0005546420.localdomain sudo[193230]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:30 np0005546420.localdomain sudo[193223]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:30 np0005546420.localdomain sudo[193283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:31:30 np0005546420.localdomain sudo[193283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:31:31 np0005546420.localdomain sudo[193410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yijervbiyufzqxwcfnjprigwtrizbbsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927090.9809823-1077-181549449273455/AnsiballZ_systemd.py
Dec 05 09:31:31 np0005546420.localdomain sudo[193410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:31:31 np0005546420.localdomain podman[193424]: 2025-12-05 09:31:31.329712084 +0000 UTC m=+0.078489918 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:31:31 np0005546420.localdomain podman[193424]: 2025-12-05 09:31:31.389298497 +0000 UTC m=+0.138076331 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
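[annotation] Each "Started /usr/bin/podman healthcheck run <id>" line is a transient systemd unit firing the container's configured healthcheck (test: /openstack/healthcheck per the config_data above); the container event then records health_status=healthy and the unit deactivates. The same check can be driven by hand, where exit status 0 means healthy:

    import subprocess

    # Run the same command the transient systemd unit runs above;
    # podman exits 0 when the container's healthcheck passes.
    def container_healthy(container: str) -> bool:
        result = subprocess.run(["podman", "healthcheck", "run", container],
                                capture_output=True)
        return result.returncode == 0

    print(container_healthy("ovn_controller"))  # container_name from the log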
Dec 05 09:31:31 np0005546420.localdomain sudo[193283]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:31 np0005546420.localdomain python3.9[193425]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:31:31 np0005546420.localdomain systemd-sysv-generator[193500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:31:31 np0005546420.localdomain systemd-rc-local-generator[193494]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:31 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:31 np0005546420.localdomain sudo[193410]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:32 np0005546420.localdomain sudo[193565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:31:32 np0005546420.localdomain sudo[193565]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:31:32 np0005546420.localdomain sudo[193565]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:32 np0005546420.localdomain sudo[193632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdxndkfksnnxwhrmzqrrjesepzwkhvsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927092.0358763-1077-280937973177397/AnsiballZ_systemd.py
Dec 05 09:31:32 np0005546420.localdomain sudo[193632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38753 DF PROTO=TCP SPT=57664 DPT=9882 SEQ=3102529862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF7DD90000000001030307) 
Dec 05 09:31:32 np0005546420.localdomain python3.9[193634]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:32 np0005546420.localdomain sudo[193632]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:33 np0005546420.localdomain sudo[193745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-feeeeynzrlupihljuiwqqrohtgqoxrkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927092.8174474-1077-49818659806150/AnsiballZ_systemd.py
Dec 05 09:31:33 np0005546420.localdomain sudo[193745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:33 np0005546420.localdomain python3.9[193747]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16279 DF PROTO=TCP SPT=55094 DPT=9105 SEQ=2396350752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF85DA0000000001030307) 
Dec 05 09:31:34 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:31:34 np0005546420.localdomain systemd-rc-local-generator[193778]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:31:34 np0005546420.localdomain systemd-sysv-generator[193781]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:31:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:31:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:34 np0005546420.localdomain sudo[193745]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62779 DF PROTO=TCP SPT=41350 DPT=9101 SEQ=1085094337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF8FDA0000000001030307) 
Dec 05 09:31:37 np0005546420.localdomain sudo[193894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tukclzmbgfgegmzjrdfudjpmgpxregwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927096.4503746-1185-112408823324882/AnsiballZ_systemd.py
Dec 05 09:31:37 np0005546420.localdomain sudo[193894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:37 np0005546420.localdomain python3.9[193896]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:31:37 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:31:37 np0005546420.localdomain systemd-rc-local-generator[193922]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:31:37 np0005546420.localdomain systemd-sysv-generator[193926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:31:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:31:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:31:38 np0005546420.localdomain sudo[193894]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:31:38 np0005546420.localdomain sudo[194044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wrkyjqlxgtbitaeyaxmgcikpczrtczeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927098.1901557-1209-175917539752628/AnsiballZ_systemd.py
Dec 05 09:31:38 np0005546420.localdomain sudo[194044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:38 np0005546420.localdomain podman[194038]: 2025-12-05 09:31:38.513210794 +0000 UTC m=+0.082777992 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 09:31:38 np0005546420.localdomain podman[194038]: 2025-12-05 09:31:38.547448642 +0000 UTC m=+0.117015820 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 09:31:38 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:31:38 np0005546420.localdomain python3.9[194057]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:38 np0005546420.localdomain sudo[194044]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:39 np0005546420.localdomain sudo[194174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dbjnduohdzkisrilebcmzfmcrlgwmxbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927098.9920168-1209-19101027164412/AnsiballZ_systemd.py
Dec 05 09:31:39 np0005546420.localdomain sudo[194174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:39 np0005546420.localdomain python3.9[194176]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:39 np0005546420.localdomain sudo[194174]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:40 np0005546420.localdomain sudo[194287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrwybrnmomednxnciocybohsxxevsucd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927099.8386822-1209-36635550300302/AnsiballZ_systemd.py
Dec 05 09:31:40 np0005546420.localdomain sudo[194287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:40 np0005546420.localdomain python3.9[194289]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22824 DF PROTO=TCP SPT=39948 DPT=9100 SEQ=360899280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABF9E590000000001030307) 
Dec 05 09:31:41 np0005546420.localdomain sudo[194287]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:41 np0005546420.localdomain sudo[194400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zypurxoqpsljzrvedrobgiitbhfaqxsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927101.5906582-1209-163231495441321/AnsiballZ_systemd.py
Dec 05 09:31:41 np0005546420.localdomain sudo[194400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:42 np0005546420.localdomain python3.9[194402]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:42 np0005546420.localdomain sudo[194400]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:42 np0005546420.localdomain sudo[194513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdzlppfnximasoetcsieevizaxggcixn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927102.4168396-1209-38751752701626/AnsiballZ_systemd.py
Dec 05 09:31:42 np0005546420.localdomain sudo[194513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22825 DF PROTO=TCP SPT=39948 DPT=9100 SEQ=360899280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABFA65A0000000001030307) 
Dec 05 09:31:43 np0005546420.localdomain python3.9[194515]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:44 np0005546420.localdomain sudo[194513]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:44 np0005546420.localdomain sudo[194626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yywhogwtrfzbhaqgvyobqvbcexxfsmye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927104.24827-1209-278986792677840/AnsiballZ_systemd.py
Dec 05 09:31:44 np0005546420.localdomain sudo[194626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:44 np0005546420.localdomain python3.9[194628]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:44 np0005546420.localdomain sudo[194626]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:46 np0005546420.localdomain sudo[194739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhupwfdrofcfndiyjcmulylugxbvbpnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927105.943013-1209-228456980818043/AnsiballZ_systemd.py
Dec 05 09:31:46 np0005546420.localdomain sudo[194739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:46 np0005546420.localdomain python3.9[194741]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22826 DF PROTO=TCP SPT=39948 DPT=9100 SEQ=360899280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABFB6190000000001030307) 
Dec 05 09:31:48 np0005546420.localdomain sudo[194739]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38754 DF PROTO=TCP SPT=57664 DPT=9882 SEQ=3102529862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABFBDDA0000000001030307) 
Dec 05 09:31:49 np0005546420.localdomain sudo[194852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxycebfjqpilfvdkulebvbuzdhyntsdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927108.754569-1209-111446878712468/AnsiballZ_systemd.py
Dec 05 09:31:49 np0005546420.localdomain sudo[194852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:49 np0005546420.localdomain python3.9[194854]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:49 np0005546420.localdomain sudo[194852]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:49 np0005546420.localdomain sudo[194965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ipymdydobmfrbuyrnficxjhgycltebcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927109.504298-1209-201004392477883/AnsiballZ_systemd.py
Dec 05 09:31:49 np0005546420.localdomain sudo[194965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:50 np0005546420.localdomain python3.9[194967]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:50 np0005546420.localdomain sudo[194965]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:50 np0005546420.localdomain sudo[195078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aldvlhhhxcyjinifqzntrklzybiizvpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927110.3384962-1209-146353482322397/AnsiballZ_systemd.py
Dec 05 09:31:50 np0005546420.localdomain sudo[195078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:50 np0005546420.localdomain python3.9[195080]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:50 np0005546420.localdomain sudo[195078]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:51 np0005546420.localdomain sudo[195191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chmzdvxuqtwwrqavaxdcwwhnrtddwilj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927111.1071115-1209-178241625131476/AnsiballZ_systemd.py
Dec 05 09:31:51 np0005546420.localdomain sudo[195191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:51 np0005546420.localdomain python3.9[195193]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:51 np0005546420.localdomain sudo[195191]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24228 DF PROTO=TCP SPT=47392 DPT=9105 SEQ=1363011017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABFCA1A0000000001030307) 
Dec 05 09:31:52 np0005546420.localdomain sudo[195304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twygdstbqpczxtfaljnyadtnoosatzxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927111.8840098-1209-29699985483783/AnsiballZ_systemd.py
Dec 05 09:31:52 np0005546420.localdomain sudo[195304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:52 np0005546420.localdomain python3.9[195306]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:52 np0005546420.localdomain sudo[195304]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:52 np0005546420.localdomain sudo[195417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xzfnjuaadqvzndzxnogylkzrmiefirjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927112.6572216-1209-256757156390165/AnsiballZ_systemd.py
Dec 05 09:31:52 np0005546420.localdomain sudo[195417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:53 np0005546420.localdomain python3.9[195419]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:53 np0005546420.localdomain sudo[195417]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:53 np0005546420.localdomain sudo[195530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-puhulyjwdkmiuhwmzfikomuqgsjrpgjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927113.4032452-1209-111950165957037/AnsiballZ_systemd.py
Dec 05 09:31:53 np0005546420.localdomain sudo[195530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:54 np0005546420.localdomain python3.9[195532]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 09:31:54 np0005546420.localdomain sudo[195530]: pam_unix(sudo:session): session closed for user root
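[annotation] The run of enable tasks that ends here switches on the socket units for each libvirt modular daemon: a main socket, a read-only -ro socket, and an -admin socket, except virtlogd, which gets no -ro socket in this sequence. A sketch reproducing the exact unit list the tasks walked:

    # Socket units enabled in the sequence above, per daemon; virtlogd has
    # no -ro socket in this run.
    SOCKETS = {
        "virtlogd":     ("", "-admin"),
        "virtnodedevd": ("", "-ro", "-admin"),
        "virtproxyd":   ("", "-ro", "-admin"),
        "virtqemud":    ("", "-ro", "-admin"),
        "virtsecretd":  ("", "-ro", "-admin"),
    }

    units = [f"{daemon}{suffix}.socket"
             for daemon, suffixes in SOCKETS.items()
             for suffix in suffixes]
    print("\n".join(units))  # 14 units, matching the 14 enable tasks logged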
Dec 05 09:31:54 np0005546420.localdomain sudo[195643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fugunoyhqbeuvytypzirefslcfkjwrwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927114.5221214-1515-111289487730993/AnsiballZ_file.py
Dec 05 09:31:54 np0005546420.localdomain sudo[195643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22827 DF PROTO=TCP SPT=39948 DPT=9100 SEQ=360899280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABFD5D90000000001030307) 
Dec 05 09:31:55 np0005546420.localdomain python3.9[195645]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:31:55 np0005546420.localdomain sudo[195643]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:55 np0005546420.localdomain sudo[195753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sswtavsflrcveucdpiewqqxfguvxxmnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927115.2014904-1515-27648159555812/AnsiballZ_file.py
Dec 05 09:31:55 np0005546420.localdomain sudo[195753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:55 np0005546420.localdomain python3.9[195755]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:31:55 np0005546420.localdomain sudo[195753]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:56 np0005546420.localdomain sudo[195863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdutwyalvlsnjwfpqpgdrnkzszussdjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927115.7834957-1515-113910055368164/AnsiballZ_file.py
Dec 05 09:31:56 np0005546420.localdomain sudo[195863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:56 np0005546420.localdomain python3.9[195865]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:31:56 np0005546420.localdomain sudo[195863]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:56 np0005546420.localdomain sudo[195973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-atidiscaovguwphcsbhobedbkefnaepl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927116.375487-1515-149735233969761/AnsiballZ_file.py
Dec 05 09:31:56 np0005546420.localdomain sudo[195973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:56 np0005546420.localdomain python3.9[195975]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:31:56 np0005546420.localdomain sudo[195973]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:57 np0005546420.localdomain sudo[196083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mygzhotszzokqlcawvivbtijysossvvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927117.506805-1515-39347165960426/AnsiballZ_file.py
Dec 05 09:31:57 np0005546420.localdomain sudo[196083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:57 np0005546420.localdomain python3.9[196085]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:31:58 np0005546420.localdomain sudo[196083]: pam_unix(sudo:session): session closed for user root
Dec 05 09:31:58 np0005546420.localdomain sudo[196193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjtnwkvyzdifxxgvjtnfnudtttnwyden ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927118.121671-1515-238786691032483/AnsiballZ_file.py
Dec 05 09:31:58 np0005546420.localdomain sudo[196193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:31:58 np0005546420.localdomain python3.9[196195]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:31:58 np0005546420.localdomain sudo[196193]: pam_unix(sudo:session): session closed for user root
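[annotation] The ansible.builtin.file tasks above create the tmpfiles, firewall, and libvirt PKI directories with explicit owner, group, mode, and an SELinux type of container_file_t. A rough equivalent in plain Python; the module applies the SELinux context through the libselinux bindings, so shelling out to chcon here is a simplification:

    import os
    import shutil
    import subprocess

    # Simplified stand-in for ansible.builtin.file with state=directory,
    # owner/group/mode, and setype=container_file_t.
    def make_labeled_dir(path: str, owner: str = "root", group: str = "root",
                         mode: int = 0o755, setype: str = "container_file_t") -> None:
        os.makedirs(path, exist_ok=True)
        shutil.chown(path, user=owner, group=group)
        os.chmod(path, mode)
        subprocess.run(["chcon", "-t", setype, path], check=True)

    make_labeled_dir("/etc/pki/libvirt/private")  # one of the paths logged above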
Dec 05 09:32:00 np0005546420.localdomain sudo[196303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmshvbyiwuxfvjffzdhzqvcwexmiapwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927118.8133867-1644-262895432734178/AnsiballZ_stat.py
Dec 05 09:32:00 np0005546420.localdomain sudo[196303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:00 np0005546420.localdomain python3.9[196305]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:00 np0005546420.localdomain sudo[196303]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1653 DF PROTO=TCP SPT=54570 DPT=9102 SEQ=3146644629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABFEBDA0000000001030307) 
Dec 05 09:32:00 np0005546420.localdomain sudo[196393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-khsiwihvixjefhusbtebtsbzsrmshhvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927118.8133867-1644-262895432734178/AnsiballZ_copy.py
Dec 05 09:32:00 np0005546420.localdomain sudo[196393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:01 np0005546420.localdomain python3.9[196395]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927118.8133867-1644-262895432734178/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:01 np0005546420.localdomain sudo[196393]: pam_unix(sudo:session): session closed for user root
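[annotation] The stat/copy pair above is Ansible's idempotent file deployment: the legacy.stat call fetches the destination's sha1 (get_checksum=True), and legacy.copy only rewrites the file when that differs from the rendered source's checksum (d7a72ae9... for virtlogd.conf here). The comparison in miniature:

    import hashlib
    from pathlib import Path

    # Checksum-gated copy, as the stat/copy pairs above perform it.
    def sha1_of(path: Path) -> str:
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def copy_if_changed(src: Path, dest: Path) -> bool:
        if dest.exists() and sha1_of(dest) == sha1_of(src):
            return False               # checksums match: no change reported
        dest.write_bytes(src.read_bytes())
        return True                    # file rewritten: task reports changed

The same pattern repeats below for virtnodedevd.conf, virtproxyd.conf, and virtqemud.conf.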
Dec 05 09:32:01 np0005546420.localdomain sudo[196503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wiidpllhaxwvgsycejwlfkwgcwjkcxat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927121.1716719-1644-93339789155193/AnsiballZ_stat.py
Dec 05 09:32:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:32:01 np0005546420.localdomain sudo[196503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:01 np0005546420.localdomain podman[196505]: 2025-12-05 09:32:01.575840663 +0000 UTC m=+0.093285846 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 05 09:32:01 np0005546420.localdomain podman[196505]: 2025-12-05 09:32:01.615428341 +0000 UTC m=+0.132873594 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 09:32:01 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:32:01 np0005546420.localdomain python3.9[196506]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:01 np0005546420.localdomain sudo[196503]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:02 np0005546420.localdomain sudo[196619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tnrjslfjrcpdqvjtzlkywatzirpovxkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927121.1716719-1644-93339789155193/AnsiballZ_copy.py
Dec 05 09:32:02 np0005546420.localdomain sudo[196619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:02 np0005546420.localdomain python3.9[196621]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927121.1716719-1644-93339789155193/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:02 np0005546420.localdomain sudo[196619]: pam_unix(sudo:session): session closed for user root
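The stat/copy pairs in this stretch are one idiom repeated per libvirt config file: ansible-ansible.legacy.stat probes the destination's SHA-1, then ansible-ansible.legacy.copy rewrites it only when the checksum differs, each step inside its own short sudo session. The same sequence recurs below for virtproxyd.conf, virtqemud.conf, qemu.conf, and virtsecretd.conf. A rough stdlib sketch of the end state the virtnodedevd.conf task enforces (Ansible's atomic temp-file staging and checksum short-circuit are not reproduced; "src" is a hypothetical local path):

    #!/usr/bin/env python3
    # Approximate the ansible.legacy.copy result for virtnodedevd.conf:
    # dest=/etc/libvirt/virtnodedevd.conf, owner=libvirt, group=libvirt, mode=0640.
    # Needs root (for chown) and an existing libvirt user/group.
    import grp
    import os
    import pwd
    import shutil

    src = "virtnodedevd.conf"  # hypothetical staged source file
    dest = "/etc/libvirt/virtnodedevd.conf"

    shutil.copyfile(src, dest)
    os.chown(dest, pwd.getpwnam("libvirt").pw_uid, grp.getgrnam("libvirt").gr_gid)
    os.chmod(dest, 0o640)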
Dec 05 09:32:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63950 DF PROTO=TCP SPT=46162 DPT=9882 SEQ=532355386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABFF3D90000000001030307) 
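The kernel DROPPING lines here and below are netfilter LOG-rule output on br-ex ("DROPPING: " looks like the rule's log prefix); everything after it is standard KEY=VALUE netfilter fields. A small sketch for pulling those fields apart:

    #!/usr/bin/env python3
    # Split a netfilter LOG line into KEY=VALUE fields. Valueless flags such
    # as DF and SYN carry no '=' and are simply not captured.
    import re

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "SRC=192.168.122.10 DST=192.168.122.107 PROTO=TCP SPT=46162 DPT=9882")
    fields = dict(re.findall(r"(\w+)=(\S*)", line))
    print(f"{fields['SRC']} -> {fields['DST']} {fields['PROTO']}/{fields['DPT']}")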
Dec 05 09:32:02 np0005546420.localdomain sudo[196729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cwgmmzxrlblovykewpodzvokdhporyak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927122.3910248-1644-17092933864792/AnsiballZ_stat.py
Dec 05 09:32:02 np0005546420.localdomain sudo[196729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:02 np0005546420.localdomain python3.9[196731]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:02 np0005546420.localdomain sudo[196729]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:03 np0005546420.localdomain sudo[196819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhhrbqklbanuejevbzhnrceqtyenztug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927122.3910248-1644-17092933864792/AnsiballZ_copy.py
Dec 05 09:32:03 np0005546420.localdomain sudo[196819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:03 np0005546420.localdomain python3.9[196821]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927122.3910248-1644-17092933864792/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:03 np0005546420.localdomain sudo[196819]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:03 np0005546420.localdomain sudo[196929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mbykccsqacrduxzxftpirbsqlkeiprno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927123.5199444-1644-179942565847271/AnsiballZ_stat.py
Dec 05 09:32:03 np0005546420.localdomain sudo[196929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:03 np0005546420.localdomain python3.9[196931]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:04 np0005546420.localdomain sudo[196929]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:32:04.077 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:32:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:32:04.077 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:32:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:32:04.077 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
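The Acquiring/acquired/released DEBUG triplet above is oslo.concurrency's lock instrumentation around ProcessMonitor._check_child_processes; the reported "inner" frame is the decorator's wrapper function. A sketch of the pattern that produces these three lines (requires oslo.concurrency; the method body is a stub):

    #!/usr/bin/env python3
    # The synchronized decorator logs "Acquiring lock", "Lock ... acquired",
    # and "Lock ... released" at DEBUG around each call, as seen above.
    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            pass  # child-process respawn logic elided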
Dec 05 09:32:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24230 DF PROTO=TCP SPT=47392 DPT=9105 SEQ=1363011017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ABFF9D90000000001030307) 
Dec 05 09:32:04 np0005546420.localdomain sudo[197019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zuvsueevtvkprndeabxmostnebgedlev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927123.5199444-1644-179942565847271/AnsiballZ_copy.py
Dec 05 09:32:04 np0005546420.localdomain sudo[197019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:04 np0005546420.localdomain python3.9[197021]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927123.5199444-1644-179942565847271/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:04 np0005546420.localdomain sudo[197019]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:04 np0005546420.localdomain sudo[197129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtlseybuxyegtsfneycgqadjmwrwowrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927124.7131135-1644-91021144174433/AnsiballZ_stat.py
Dec 05 09:32:05 np0005546420.localdomain sudo[197129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:05 np0005546420.localdomain python3.9[197131]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:05 np0005546420.localdomain sudo[197129]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:05 np0005546420.localdomain sudo[197219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-agoaiamklwhelqarobanhutmjlpsomuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927124.7131135-1644-91021144174433/AnsiballZ_copy.py
Dec 05 09:32:05 np0005546420.localdomain sudo[197219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:05 np0005546420.localdomain python3.9[197221]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927124.7131135-1644-91021144174433/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:05 np0005546420.localdomain sudo[197219]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:32:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6000.1 total, 600.0 interval
                                                          Cumulative writes: 5715 writes, 25K keys, 5715 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5715 writes, 734 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee52179610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55ee521782d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 05 09:32:06 np0005546420.localdomain sudo[197329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwhcvojwfvaocwvyectratwzwncjtyln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927125.928656-1644-76113138708073/AnsiballZ_stat.py
Dec 05 09:32:06 np0005546420.localdomain sudo[197329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:06 np0005546420.localdomain python3.9[197331]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:06 np0005546420.localdomain sudo[197329]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:06 np0005546420.localdomain sudo[197419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oqvsfnwrgksfziqfmydmbfsofiwopxwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927125.928656-1644-76113138708073/AnsiballZ_copy.py
Dec 05 09:32:06 np0005546420.localdomain sudo[197419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:06 np0005546420.localdomain python3.9[197421]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927125.928656-1644-76113138708073/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:06 np0005546420.localdomain sudo[197419]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7206 DF PROTO=TCP SPT=55522 DPT=9101 SEQ=4102418541 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC005D90000000001030307) 
Dec 05 09:32:07 np0005546420.localdomain sudo[197529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vghbhaxzzxzhwaevpwzxsrvgsasyfeze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927127.0613067-1644-89679466181681/AnsiballZ_stat.py
Dec 05 09:32:07 np0005546420.localdomain sudo[197529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:07 np0005546420.localdomain python3.9[197531]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:07 np0005546420.localdomain sudo[197529]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:07 np0005546420.localdomain sudo[197617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxacxnvdganuizamivbcearguzjoftrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927127.0613067-1644-89679466181681/AnsiballZ_copy.py
Dec 05 09:32:07 np0005546420.localdomain sudo[197617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:08 np0005546420.localdomain python3.9[197619]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927127.0613067-1644-89679466181681/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:08 np0005546420.localdomain sudo[197617]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:08 np0005546420.localdomain sudo[197727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-igowzddpybpdyscmddsnnwgutpwlhtua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927128.1978307-1644-248449612549894/AnsiballZ_stat.py
Dec 05 09:32:08 np0005546420.localdomain sudo[197727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:08 np0005546420.localdomain python3.9[197729]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:08 np0005546420.localdomain sudo[197727]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:32:09 np0005546420.localdomain podman[197732]: 2025-12-05 09:32:09.515717843 +0000 UTC m=+0.089682318 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 09:32:09 np0005546420.localdomain podman[197732]: 2025-12-05 09:32:09.525510212 +0000 UTC m=+0.099474717 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 09:32:09 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:32:09 np0005546420.localdomain sudo[197835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztdmvbebmmhfwclpenmgypkrigezqiqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927128.1978307-1644-248449612549894/AnsiballZ_copy.py
Dec 05 09:32:09 np0005546420.localdomain sudo[197835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:09 np0005546420.localdomain python3.9[197837]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927128.1978307-1644-248449612549894/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:09 np0005546420.localdomain sudo[197835]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:10 np0005546420.localdomain sudo[197945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdiicxquzfqioeyjagotziblwfyycrow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927130.1708927-1986-78034953389853/AnsiballZ_file.py
Dec 05 09:32:10 np0005546420.localdomain sudo[197945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:10 np0005546420.localdomain python3.9[197947]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:10 np0005546420.localdomain sudo[197945]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16222 DF PROTO=TCP SPT=39002 DPT=9100 SEQ=1706771675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC013990000000001030307) 
Dec 05 09:32:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:32:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6000.1 total, 600.0 interval
                                                          Cumulative writes: 4690 writes, 21K keys, 4690 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4690 writes, 584 syncs, 8.03 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d843610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.025       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56455d8422d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 05 09:32:11 np0005546420.localdomain sudo[198055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grpidwummxzgaarqvxmwdwmpxfrthrwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927131.2811253-2010-230371023760601/AnsiballZ_file.py
Dec 05 09:32:11 np0005546420.localdomain sudo[198055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:11 np0005546420.localdomain python3.9[198057]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:11 np0005546420.localdomain sudo[198055]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:12 np0005546420.localdomain sudo[198165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmevttxvnrxchpplfxitojoxdrrzpxkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927131.8765633-2010-60615656620624/AnsiballZ_file.py
Dec 05 09:32:12 np0005546420.localdomain sudo[198165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:12 np0005546420.localdomain python3.9[198167]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:12 np0005546420.localdomain sudo[198165]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:12 np0005546420.localdomain sudo[198275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrxfvamlbbpxpydjgkgsvyvatevpbzjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927132.5001557-2010-243225431517697/AnsiballZ_file.py
Dec 05 09:32:12 np0005546420.localdomain sudo[198275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16223 DF PROTO=TCP SPT=39002 DPT=9100 SEQ=1706771675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC01B9A0000000001030307) 
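[editor's note] The kernel "DROPPING:" entries interleaved through this section come from a firewall LOG rule on br-ex; each records a dropped TCP SYN from 192.168.122.10 to an exporter-style port (9100, 9101, 9102, 9105, 9882). A minimal sketch, assuming the line text is already at hand, of splitting the key=value fields into a dict (flag tokens such as DF and SYN carry no '=', so they are skipped):

    # One of the log lines above, truncated for the example.
    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "SRC=192.168.122.10 DST=192.168.122.107 PROTO=TCP SPT=39002 DPT=9100")

    fields = dict(
        token.split("=", 1)
        for token in line.removeprefix("DROPPING: ").split()
        if "=" in token
    )
    print(fields["SRC"], "->", fields["DST"], "port", fields["DPT"])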
Dec 05 09:32:13 np0005546420.localdomain python3.9[198277]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:13 np0005546420.localdomain sudo[198275]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:13 np0005546420.localdomain sudo[198385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-owhwxxrnvrkkzrfpbyutfrcxuawkpxde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927133.1452892-2010-115397819390257/AnsiballZ_file.py
Dec 05 09:32:13 np0005546420.localdomain sudo[198385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:13 np0005546420.localdomain python3.9[198387]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:13 np0005546420.localdomain sudo[198385]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:13 np0005546420.localdomain sudo[198495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhggjtaaqsrvnkksuoepxtyyexidnjde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927133.7423327-2010-207350996547712/AnsiballZ_file.py
Dec 05 09:32:13 np0005546420.localdomain sudo[198495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:14 np0005546420.localdomain python3.9[198497]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:14 np0005546420.localdomain sudo[198495]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:14 np0005546420.localdomain sudo[198605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jhjyykvcnsuppnxgrdyofgjjmsogaujw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927134.3432574-2010-122521433031354/AnsiballZ_file.py
Dec 05 09:32:14 np0005546420.localdomain sudo[198605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:14 np0005546420.localdomain python3.9[198607]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:14 np0005546420.localdomain sudo[198605]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:15 np0005546420.localdomain sudo[198715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htvcxtqdxkyowphkhxalgutocwyxxbre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927134.9254806-2010-216529830668550/AnsiballZ_file.py
Dec 05 09:32:15 np0005546420.localdomain sudo[198715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:15 np0005546420.localdomain python3.9[198717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:15 np0005546420.localdomain sudo[198715]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:15 np0005546420.localdomain sudo[198825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lipvbmctwquxukiuirsflayjwzgixfre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927135.5411232-2010-35257205015940/AnsiballZ_file.py
Dec 05 09:32:15 np0005546420.localdomain sudo[198825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:15 np0005546420.localdomain python3.9[198827]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:15 np0005546420.localdomain sudo[198825]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:16 np0005546420.localdomain sudo[198935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-graefwnbamvblarwntpghnrrnhzslisy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927136.0959835-2010-27911729896390/AnsiballZ_file.py
Dec 05 09:32:16 np0005546420.localdomain sudo[198935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:16 np0005546420.localdomain python3.9[198937]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:16 np0005546420.localdomain sudo[198935]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16224 DF PROTO=TCP SPT=39002 DPT=9100 SEQ=1706771675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC02B590000000001030307) 
Dec 05 09:32:16 np0005546420.localdomain sudo[199045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpyidezduebgjefabkcreultelaszggu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927136.7315145-2010-33937734520867/AnsiballZ_file.py
Dec 05 09:32:17 np0005546420.localdomain sudo[199045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:17 np0005546420.localdomain python3.9[199047]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:17 np0005546420.localdomain sudo[199045]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:17 np0005546420.localdomain sudo[199155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wvmqoiukcqcbulmpmkdqtwgukdwqxhld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927137.3554711-2010-31168016701812/AnsiballZ_file.py
Dec 05 09:32:17 np0005546420.localdomain sudo[199155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:17 np0005546420.localdomain python3.9[199157]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:17 np0005546420.localdomain sudo[199155]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:18 np0005546420.localdomain sudo[199265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kqruuldklhkljggbdktacsqyhrvybjnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927137.9615815-2010-207926881308801/AnsiballZ_file.py
Dec 05 09:32:18 np0005546420.localdomain sudo[199265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:18 np0005546420.localdomain python3.9[199267]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:18 np0005546420.localdomain sudo[199265]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18003 DF PROTO=TCP SPT=59308 DPT=9105 SEQ=3960100878 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC033510000000001030307) 
Dec 05 09:32:18 np0005546420.localdomain sudo[199375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvxrfcxglkmcctuwlgboarmbwkpgxvpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927138.612593-2010-259460177665823/AnsiballZ_file.py
Dec 05 09:32:18 np0005546420.localdomain sudo[199375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:19 np0005546420.localdomain python3.9[199377]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:19 np0005546420.localdomain sudo[199375]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:19 np0005546420.localdomain sudo[199485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zukntqvfvajrwmodfqhkypdyaqungcrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927139.2274456-2010-32432824232818/AnsiballZ_file.py
Dec 05 09:32:19 np0005546420.localdomain sudo[199485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:19 np0005546420.localdomain python3.9[199487]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:19 np0005546420.localdomain sudo[199485]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:20 np0005546420.localdomain sudo[199595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rwmhhjqaueprrnjrpwcsyxqownkifcdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927140.6622417-2307-239552478048199/AnsiballZ_stat.py
Dec 05 09:32:20 np0005546420.localdomain sudo[199595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:21 np0005546420.localdomain python3.9[199597]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:21 np0005546420.localdomain sudo[199595]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:21 np0005546420.localdomain sudo[199683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htrgmzydubmgrczsneoicjqrxxdmejuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927140.6622417-2307-239552478048199/AnsiballZ_copy.py
Dec 05 09:32:21 np0005546420.localdomain sudo[199683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:21 np0005546420.localdomain python3.9[199685]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927140.6622417-2307-239552478048199/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:21 np0005546420.localdomain sudo[199683]: pam_unix(sudo:session): session closed for user root
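[editor's note] Each stat/copy pair above renders libvirt-socket.unit.j2 into /etc/systemd/system/<unit>.socket.d/override.conf, skipping the write when the SHA-1 checksums already match (the drop-in content itself is not logged: content=NOT_LOGGING_PARAMETER). A minimal sketch of that compare-then-copy pattern; the rendered source path is a stand-in:

    import hashlib, os, shutil

    src = "/tmp/override.conf.rendered"   # hypothetical rendered template
    dest = "/etc/systemd/system/virtlogd.socket.d/override.conf"

    def sha1(path):
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()

    # Copy only when the destination is absent or its checksum differs,
    # mirroring the stat -> copy sequence in the journal above.
    if not os.path.exists(dest) or sha1(dest) != sha1(src):
        os.makedirs(os.path.dirname(dest), mode=0o755, exist_ok=True)
        shutil.copy(src, dest)
        os.chmod(dest, 0o644)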
Dec 05 09:32:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18005 DF PROTO=TCP SPT=59308 DPT=9105 SEQ=3960100878 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC03F590000000001030307) 
Dec 05 09:32:22 np0005546420.localdomain sudo[199793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upfqbgdzxjfqvefpuoijlfstohbhqlrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927142.2379308-2307-206509426336953/AnsiballZ_stat.py
Dec 05 09:32:22 np0005546420.localdomain sudo[199793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:22 np0005546420.localdomain python3.9[199795]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:22 np0005546420.localdomain sudo[199793]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:22 np0005546420.localdomain sudo[199881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-quplrzklzozbpudipbhcvzdljjblxyhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927142.2379308-2307-206509426336953/AnsiballZ_copy.py
Dec 05 09:32:22 np0005546420.localdomain sudo[199881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:23 np0005546420.localdomain python3.9[199883]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927142.2379308-2307-206509426336953/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:23 np0005546420.localdomain sudo[199881]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:23 np0005546420.localdomain sudo[199991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jnzphlsyadiqhcouhavffpqkejqiiehh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927143.3209677-2307-3713594076218/AnsiballZ_stat.py
Dec 05 09:32:23 np0005546420.localdomain sudo[199991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:23 np0005546420.localdomain python3.9[199993]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:23 np0005546420.localdomain sudo[199991]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:24 np0005546420.localdomain sudo[200079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugsjmdhrtiaiwnnwcjmldhxctztdeggq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927143.3209677-2307-3713594076218/AnsiballZ_copy.py
Dec 05 09:32:24 np0005546420.localdomain sudo[200079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:24 np0005546420.localdomain python3.9[200081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927143.3209677-2307-3713594076218/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:24 np0005546420.localdomain sudo[200079]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:24 np0005546420.localdomain sudo[200189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gegqnzebmtzqhijxaaymmxnkpnvwartm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927144.4311147-2307-25780626823288/AnsiballZ_stat.py
Dec 05 09:32:24 np0005546420.localdomain sudo[200189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:24 np0005546420.localdomain python3.9[200191]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:24 np0005546420.localdomain sudo[200189]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1655 DF PROTO=TCP SPT=54570 DPT=9102 SEQ=3146644629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC04BD90000000001030307) 
Dec 05 09:32:25 np0005546420.localdomain sudo[200277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpqlllbvwwxgvmpwczdsssxmmqgtuqxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927144.4311147-2307-25780626823288/AnsiballZ_copy.py
Dec 05 09:32:25 np0005546420.localdomain sudo[200277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:25 np0005546420.localdomain python3.9[200279]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927144.4311147-2307-25780626823288/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:25 np0005546420.localdomain sudo[200277]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:26 np0005546420.localdomain sudo[200387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-otkdemouwpzxpjwtzszchfcqltloduuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927145.8352787-2307-157840891407564/AnsiballZ_stat.py
Dec 05 09:32:26 np0005546420.localdomain sudo[200387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:26 np0005546420.localdomain python3.9[200389]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:26 np0005546420.localdomain sudo[200387]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:26 np0005546420.localdomain sudo[200475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kfgoxqxpuliipwyjyodqstldjrkqlqgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927145.8352787-2307-157840891407564/AnsiballZ_copy.py
Dec 05 09:32:26 np0005546420.localdomain sudo[200475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:26 np0005546420.localdomain python3.9[200477]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927145.8352787-2307-157840891407564/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:26 np0005546420.localdomain sudo[200475]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:27 np0005546420.localdomain sudo[200585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjbkgnwaelupciwrhfpezrhsxgrodnbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927147.0018609-2307-258811591248609/AnsiballZ_stat.py
Dec 05 09:32:27 np0005546420.localdomain sudo[200585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:27 np0005546420.localdomain python3.9[200587]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:27 np0005546420.localdomain sudo[200585]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:27 np0005546420.localdomain sudo[200673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikyjzetvfqrxtyffztfnowhgrnnhxpzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927147.0018609-2307-258811591248609/AnsiballZ_copy.py
Dec 05 09:32:27 np0005546420.localdomain sudo[200673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:28 np0005546420.localdomain python3.9[200675]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927147.0018609-2307-258811591248609/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:28 np0005546420.localdomain sudo[200673]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:28 np0005546420.localdomain sudo[200783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aemqrhvhxubasxnuicbapblmaooqeibu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927148.206381-2307-138343595793355/AnsiballZ_stat.py
Dec 05 09:32:28 np0005546420.localdomain sudo[200783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:28 np0005546420.localdomain python3.9[200785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:28 np0005546420.localdomain sudo[200783]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:29 np0005546420.localdomain sudo[200871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kezinqzpbdbswspskcbzoijbaqurlmbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927148.206381-2307-138343595793355/AnsiballZ_copy.py
Dec 05 09:32:29 np0005546420.localdomain sudo[200871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:29 np0005546420.localdomain python3.9[200873]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927148.206381-2307-138343595793355/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:29 np0005546420.localdomain sudo[200871]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:29 np0005546420.localdomain sudo[200981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwzoaryggvpltswuvlraknbtzjpqenql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927149.3296597-2307-82658852711741/AnsiballZ_stat.py
Dec 05 09:32:29 np0005546420.localdomain sudo[200981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:29 np0005546420.localdomain python3.9[200983]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:29 np0005546420.localdomain sudo[200981]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:30 np0005546420.localdomain sudo[201069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbibowrdnibazblwmwltwmtjgghchkpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927149.3296597-2307-82658852711741/AnsiballZ_copy.py
Dec 05 09:32:30 np0005546420.localdomain sudo[201069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:30 np0005546420.localdomain python3.9[201071]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927149.3296597-2307-82658852711741/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:30 np0005546420.localdomain sudo[201069]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59598 DF PROTO=TCP SPT=57560 DPT=9102 SEQ=3164392932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC060D90000000001030307) 
Dec 05 09:32:31 np0005546420.localdomain sudo[201179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqoriwwibbgjuxxidmjwgsqtcudjngos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927150.4266055-2307-189616081049377/AnsiballZ_stat.py
Dec 05 09:32:31 np0005546420.localdomain sudo[201179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:31 np0005546420.localdomain python3.9[201181]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:31 np0005546420.localdomain sudo[201179]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:31 np0005546420.localdomain sudo[201267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sgpprhfmitqpkiujtftolnpauclcequn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927150.4266055-2307-189616081049377/AnsiballZ_copy.py
Dec 05 09:32:31 np0005546420.localdomain sudo[201267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:32:32 np0005546420.localdomain podman[201270]: 2025-12-05 09:32:32.04880817 +0000 UTC m=+0.071788771 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:32:32 np0005546420.localdomain podman[201270]: 2025-12-05 09:32:32.129160152 +0000 UTC m=+0.152140693 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:32:32 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
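[editor's note] The three entries above show one podman healthcheck cycle: a transient systemd unit runs the check inside ovn_controller, podman reports health_status=healthy, and the unit deactivates. A minimal sketch of querying the same state directly (assumes podman is on PATH and the container defines a healthcheck):

    import subprocess

    def container_health(name: str) -> str:
        # Read the health status podman tracks for the named container.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    print(container_health("ovn_controller"))  # e.g. "healthy"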
Dec 05 09:32:32 np0005546420.localdomain python3.9[201269]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927150.4266055-2307-189616081049377/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:32 np0005546420.localdomain sudo[201267]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:32 np0005546420.localdomain sudo[201328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:32:32 np0005546420.localdomain sudo[201328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:32:32 np0005546420.localdomain sudo[201328]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:32 np0005546420.localdomain sudo[201369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:32:32 np0005546420.localdomain sudo[201369]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:32:32 np0005546420.localdomain sudo[201438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxqdkovprwxfxbjcpftjkageukamvxog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927152.3130887-2307-220891567040074/AnsiballZ_stat.py
Dec 05 09:32:32 np0005546420.localdomain sudo[201438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:32 np0005546420.localdomain python3.9[201440]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:32 np0005546420.localdomain sudo[201438]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24377 DF PROTO=TCP SPT=32944 DPT=9882 SEQ=701779963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC069D90000000001030307) 
Dec 05 09:32:33 np0005546420.localdomain sudo[201369]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:33 np0005546420.localdomain sudo[201557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gsyyntzpxmpjqdkrnfpxpopstgotuetg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927152.3130887-2307-220891567040074/AnsiballZ_copy.py
Dec 05 09:32:33 np0005546420.localdomain sudo[201557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:33 np0005546420.localdomain python3.9[201559]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927152.3130887-2307-220891567040074/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:33 np0005546420.localdomain sudo[201557]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:33 np0005546420.localdomain sudo[201593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:32:33 np0005546420.localdomain sudo[201593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:32:33 np0005546420.localdomain sudo[201593]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:34 np0005546420.localdomain sudo[201685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wimzmnwzmrehwjxhftnlpwiadgjcwrrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927153.852117-2307-1522484414808/AnsiballZ_stat.py
Dec 05 09:32:34 np0005546420.localdomain sudo[201685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:34 np0005546420.localdomain python3.9[201687]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:34 np0005546420.localdomain sudo[201685]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18007 DF PROTO=TCP SPT=59308 DPT=9105 SEQ=3960100878 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC06FD90000000001030307) 
Dec 05 09:32:34 np0005546420.localdomain sudo[201773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izhuqwygwjxdnmtaxysofxnnwyhzwjaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927153.852117-2307-1522484414808/AnsiballZ_copy.py
Dec 05 09:32:34 np0005546420.localdomain sudo[201773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:34 np0005546420.localdomain python3.9[201775]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927153.852117-2307-1522484414808/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:34 np0005546420.localdomain sudo[201773]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:35 np0005546420.localdomain sudo[201883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rpjciichqbcsznyowzhputucfmomsohh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927154.976555-2307-62403487114128/AnsiballZ_stat.py
Dec 05 09:32:35 np0005546420.localdomain sudo[201883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:35 np0005546420.localdomain python3.9[201885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:35 np0005546420.localdomain sudo[201883]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:35 np0005546420.localdomain sudo[201971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvfkdgldgymbyiamftbtxoikiidfuxjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927154.976555-2307-62403487114128/AnsiballZ_copy.py
Dec 05 09:32:35 np0005546420.localdomain sudo[201971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:35 np0005546420.localdomain python3.9[201973]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927154.976555-2307-62403487114128/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:35 np0005546420.localdomain sudo[201971]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:36 np0005546420.localdomain sudo[202081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdvnwvcmexevhrfcteknatznxxandlli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927156.088377-2307-155975096524577/AnsiballZ_stat.py
Dec 05 09:32:36 np0005546420.localdomain sudo[202081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:36 np0005546420.localdomain python3.9[202083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:36 np0005546420.localdomain sudo[202081]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:36 np0005546420.localdomain sudo[202169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hxtztfvifokavgmqadbtxjkctvvregnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927156.088377-2307-155975096524577/AnsiballZ_copy.py
Dec 05 09:32:36 np0005546420.localdomain sudo[202169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9114 DF PROTO=TCP SPT=38492 DPT=9101 SEQ=1330459467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC079D90000000001030307) 
Dec 05 09:32:37 np0005546420.localdomain python3.9[202171]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927156.088377-2307-155975096524577/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:37 np0005546420.localdomain sudo[202169]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:37 np0005546420.localdomain sudo[202279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhhiidckxkkyieaxhohflovftcpnjhqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927157.1558747-2307-174493368426859/AnsiballZ_stat.py
Dec 05 09:32:37 np0005546420.localdomain sudo[202279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:37 np0005546420.localdomain python3.9[202281]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:37 np0005546420.localdomain sudo[202279]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:37 np0005546420.localdomain sudo[202367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlrozuktsggsatjtgicysibgobjbtzoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927157.1558747-2307-174493368426859/AnsiballZ_copy.py
Dec 05 09:32:37 np0005546420.localdomain sudo[202367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:38 np0005546420.localdomain python3.9[202369]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927157.1558747-2307-174493368426859/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:38 np0005546420.localdomain sudo[202367]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:38 np0005546420.localdomain python3.9[202477]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                                            ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
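[editor's note] The command above recursively lists /run/libvirt and greps for entries labeled with a container_*_t SELinux type. A minimal sketch of the same check in Python, reading each entry's security.selinux xattr directly (Linux-only):

    import os, re

    pat = re.compile(r":container_\S+_t")
    for root, dirs, files in os.walk("/run/libvirt"):
        for name in dirs + files:
            path = os.path.join(root, name)
            try:
                label = os.getxattr(path, "security.selinux").decode()
            except OSError:
                continue  # no label readable; skip this entry
            if pat.search(label):
                print(path, label.rstrip("\x00"))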
Dec 05 09:32:39 np0005546420.localdomain sudo[202588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ouhabivzffzgmbyptsnsxhvvpkziypjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927159.1612947-2925-220093159795737/AnsiballZ_seboolean.py
Dec 05 09:32:39 np0005546420.localdomain sudo[202588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:39 np0005546420.localdomain python3.9[202590]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 05 09:32:39 np0005546420.localdomain sudo[202588]: pam_unix(sudo:session): session closed for user root
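[Editor's note] The ansible.posix.seboolean invocation above (name=os_enable_vtpm, state=True, persistent=True) is equivalent to toggling the boolean with setsebool:

# Persistently enable the os_enable_vtpm SELinux boolean, then confirm it
setsebool -P os_enable_vtpm on
getsebool os_enable_vtpm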
Dec 05 09:32:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:32:40 np0005546420.localdomain podman[202646]: 2025-12-05 09:32:40.498859965 +0000 UTC m=+0.078877407 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 09:32:40 np0005546420.localdomain podman[202646]: 2025-12-05 09:32:40.53343261 +0000 UTC m=+0.113449992 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 09:32:40 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
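[Editor's note] The transient service named after the container ID is podman's timer-driven healthcheck: systemd periodically runs `podman healthcheck run <id>`, which executes the container's configured test (here /openstack/healthcheck) and records the result, hence the health_status=healthy and exec_died events above. The same check can be run by hand:

# Run the configured healthcheck once for the ovn_metadata_agent container
podman healthcheck run ovn_metadata_agent
# Read back the recorded status; on recent podman releases the field is
# .State.Health.Status (older releases exposed .State.Healthcheck instead)
podman inspect --format '{{.State.Health.Status}}' ovn_metadata_agent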
Dec 05 09:32:40 np0005546420.localdomain sudo[202716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfxujodrgvbhpuisgnlhqyupnlsslcqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927160.343836-2955-19427416566608/AnsiballZ_systemd.py
Dec 05 09:32:40 np0005546420.localdomain sudo[202716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10983 DF PROTO=TCP SPT=47366 DPT=9100 SEQ=2280836205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC088D90000000001030307) 
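[Editor's note] The kernel "DROPPING:" lines recurring through this section are firewall log messages: inbound SYNs on br-ex from 192.168.122.10 toward what appear to be monitoring/exporter ports (9100 here; 9102, 9105, 9882 later) are logged with that prefix and discarded. A sketch of the kind of nftables rule that produces such messages; the table/chain names and single-port match are assumptions, not this host's actual ruleset:

# Assumed shape of the rule behind the "DROPPING: " kernel log lines;
# table/chain names are illustrative only
nft insert rule inet filter input iifname "br-ex" tcp dport 9100 \
    log prefix '"DROPPING: "' counter drop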
Dec 05 09:32:40 np0005546420.localdomain python3.9[202718]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:32:40 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:32:41 np0005546420.localdomain systemd-sysv-generator[202749]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:32:41 np0005546420.localdomain systemd-rc-local-generator[202743]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: Starting libvirt logging daemon socket...
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: Listening on libvirt logging daemon socket.
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: Starting libvirt logging daemon admin socket...
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: Starting libvirt logging daemon...
Dec 05 09:32:41 np0005546420.localdomain systemd[1]: Started libvirt logging daemon.
Dec 05 09:32:41 np0005546420.localdomain sudo[202716]: pam_unix(sudo:session): session closed for user root
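[Editor's note] Two warnings repeat on every daemon-reload in this section. The libvirt units declare Type=notify-reload, which this systemd cannot parse because that value was only introduced in systemd v253 (EL9.2 ships v252); systemd ignores the setting and the daemons still start. Separately, insights-client-boot.service uses the deprecated MemoryLimit= directive, for which systemd suggests MemoryMax=. A sketch of checking the first and fixing the second with a drop-in; the 512M value is a placeholder, as the unit's real limit is not shown in this log:

# Confirm the installed systemd predates Type=notify-reload (added in v253)
systemctl --version | head -1
grep -n '^Type=' /usr/lib/systemd/system/virtsecretd.service
# Override the deprecated MemoryLimit= via a drop-in; 512M is a placeholder
mkdir -p /etc/systemd/system/insights-client-boot.service.d
cat > /etc/systemd/system/insights-client-boot.service.d/memory.conf <<'EOF'
[Service]
MemoryLimit=
MemoryMax=512M
EOF
systemctl daemon-reload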
Dec 05 09:32:41 np0005546420.localdomain sudo[202868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nyplffikbkpyfsdfoiclkltikzmvgsdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927161.5147934-2955-272542208810007/AnsiballZ_systemd.py
Dec 05 09:32:41 np0005546420.localdomain sudo[202868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:42 np0005546420.localdomain python3.9[202870]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:32:42 np0005546420.localdomain systemd-rc-local-generator[202894]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:32:42 np0005546420.localdomain systemd-sysv-generator[202899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: Starting libvirt nodedev daemon socket...
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 05 09:32:42 np0005546420.localdomain systemd[1]: Started libvirt nodedev daemon.
Dec 05 09:32:42 np0005546420.localdomain sudo[202868]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10984 DF PROTO=TCP SPT=47366 DPT=9100 SEQ=2280836205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC090DA0000000001030307) 
Dec 05 09:32:43 np0005546420.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 05 09:32:43 np0005546420.localdomain sudo[203043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wgculsgmdoagacvavwitesqhkfavozjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927162.569909-2955-179453032734011/AnsiballZ_systemd.py
Dec 05 09:32:43 np0005546420.localdomain sudo[203043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:43 np0005546420.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 05 09:32:43 np0005546420.localdomain systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 05 09:32:43 np0005546420.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 05 09:32:43 np0005546420.localdomain python3.9[203045]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:32:43 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:32:43 np0005546420.localdomain systemd-rc-local-generator[203071]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:32:43 np0005546420.localdomain systemd-sysv-generator[203077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: Starting libvirt proxy daemon socket...
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: Listening on libvirt proxy daemon socket.
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 05 09:32:44 np0005546420.localdomain systemd[1]: Started libvirt proxy daemon.
Dec 05 09:32:44 np0005546420.localdomain sudo[203043]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:44 np0005546420.localdomain sudo[203222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gktawjlvtawlgezofiithqmyegzhmnso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927164.421566-2955-189025606394366/AnsiballZ_systemd.py
Dec 05 09:32:44 np0005546420.localdomain sudo[203222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:44 np0005546420.localdomain setroubleshoot[203006]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 8240eaf5-e88e-46ed-be5d-8c9df0622022
Dec 05 09:32:44 np0005546420.localdomain setroubleshoot[203006]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                                 
                                                                 *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                                 
                                                                 If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                                 Then turn on full auditing to get path information about the offending file and generate the error again.
                                                                 Do
                                                                 
                                                                 Turn on full auditing
                                                                 # auditctl -w /etc/shadow -p w
                                                                 Try to recreate AVC. Then execute
                                                                 # ausearch -m avc -ts recent
                                                                 If you see PATH record check ownership/permissions on file, and fix it,
                                                                 otherwise report as a bugzilla.
                                                                 
                                                                 *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                                 
                                                                 If you believe that virtlogd should have the dac_read_search capability by default.
                                                                 Then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do
                                                                 allow this access for now by executing:
                                                                 # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                                 # semodule -X 300 -i my-virtlogd.pp
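[Editor's note] Assembled into one runnable sequence, the setroubleshoot suggestions above look like this; the virtlogd restart stands in for "recreate the AVC", since restarting virtlogd is what triggered the denial here:

# 1) Turn on full auditing so AVC records carry PATH information,
#    then reproduce the denial and inspect the recent audit records
auditctl -w /etc/shadow -p w
systemctl restart virtlogd.service
ausearch -m avc -ts recent
# 2) If the access is legitimate, build and load a local policy module
ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
semodule -X 300 -i my-virtlogd.pp
# Full analysis of this specific event:
sealert -l 8240eaf5-e88e-46ed-be5d-8c9df0622022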
                                                                 
Dec 05 09:32:45 np0005546420.localdomain python3.9[203224]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:32:45 np0005546420.localdomain systemd-sysv-generator[203251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:32:45 np0005546420.localdomain systemd-rc-local-generator[203247]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Listening on libvirt locking daemon socket.
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Starting libvirt QEMU daemon socket...
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 05 09:32:45 np0005546420.localdomain systemd[1]: Started libvirt QEMU daemon.
Dec 05 09:32:45 np0005546420.localdomain sudo[203222]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:45 np0005546420.localdomain sudo[203397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wkatboubenffsgxzmtviyoawsmexcsml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927165.5730774-2955-33441615050893/AnsiballZ_systemd.py
Dec 05 09:32:45 np0005546420.localdomain sudo[203397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:46 np0005546420.localdomain python3.9[203399]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:32:46 np0005546420.localdomain systemd-sysv-generator[203430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:32:46 np0005546420.localdomain systemd-rc-local-generator[203427]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: Starting libvirt secret daemon socket...
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: Listening on libvirt secret daemon socket.
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: Starting libvirt secret daemon admin socket...
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 05 09:32:46 np0005546420.localdomain systemd[1]: Started libvirt secret daemon.
Dec 05 09:32:46 np0005546420.localdomain sudo[203397]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10985 DF PROTO=TCP SPT=47366 DPT=9100 SEQ=2280836205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC0A0990000000001030307) 
Dec 05 09:32:47 np0005546420.localdomain sudo[203568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tzdvuppdxoouyrgaybywyvonbvmngrjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927167.0453706-3066-238506724608465/AnsiballZ_file.py
Dec 05 09:32:47 np0005546420.localdomain sudo[203568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:47 np0005546420.localdomain python3.9[203570]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:47 np0005546420.localdomain sudo[203568]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:47 np0005546420.localdomain sudo[203678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jebcfqlszjimiqpchezwfirxmesojkjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927167.7103438-3090-200607901187299/AnsiballZ_find.py
Dec 05 09:32:48 np0005546420.localdomain sudo[203678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:48 np0005546420.localdomain python3.9[203680]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:32:48 np0005546420.localdomain sudo[203678]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:48 np0005546420.localdomain sudo[203788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hwlllfdzwovppvpdvyyshaayttbyvihv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927168.4130037-3114-154359256240065/AnsiballZ_command.py
Dec 05 09:32:48 np0005546420.localdomain sudo[203788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52736 DF PROTO=TCP SPT=43740 DPT=9105 SEQ=318564113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC0A8810000000001030307) 
Dec 05 09:32:48 np0005546420.localdomain python3.9[203790]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                                            echo ceph
                                                            awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:32:48 np0005546420.localdomain sudo[203788]: pam_unix(sudo:session): session closed for user root
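[Editor's note] This pipeline reads the Ceph cluster fsid out of the deployed ceph.conf; run standalone:

set -o pipefail
# Print the cluster name, then the fsid value from ceph.conf;
# xargs trims the whitespace awk leaves around the value
echo ceph
awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs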
Dec 05 09:32:49 np0005546420.localdomain python3.9[203902]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:32:50 np0005546420.localdomain python3.9[204010]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:50 np0005546420.localdomain python3.9[204096]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927169.9856093-3171-168377094355523/.source.xml follow=False _original_basename=secret.xml.j2 checksum=70808ffe10e7f01d2f96ff948de5899db3cbf084 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:51 np0005546420.localdomain sudo[204204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdodfyxghlrhdyudmkatjwdihvjyolcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927171.1560392-3216-205485636542289/AnsiballZ_command.py
Dec 05 09:32:51 np0005546420.localdomain sudo[204204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:51 np0005546420.localdomain python3.9[204206]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
                                                            virsh secret-define --file /tmp/secret.xml
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:32:51 np0005546420.localdomain polkitd[1032]: Registered Authentication Agent for unix-process:204208:1028980 (system bus name :1.2843 [pkttyagent --process 204208 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Dec 05 09:32:51 np0005546420.localdomain polkitd[1032]: Unregistered Authentication Agent for unix-process:204208:1028980 (system bus name :1.2843, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Dec 05 09:32:51 np0005546420.localdomain polkitd[1032]: Registered Authentication Agent for unix-process:204207:1028980 (system bus name :1.2844 [pkttyagent --process 204207 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Dec 05 09:32:51 np0005546420.localdomain polkitd[1032]: Unregistered Authentication Agent for unix-process:204207:1028980 (system bus name :1.2844, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Dec 05 09:32:51 np0005546420.localdomain sudo[204204]: pam_unix(sudo:session): session closed for user root
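[Editor's note] This task replaces the libvirt secret whose UUID equals the Ceph fsid; the polkit agent registrations around it are just virsh authenticating to the daemon. The later task at 09:32:54 runs with FSID and KEY in its environment, which matches the usual flow of setting the secret's value after defining it. A sketch of that flow; the secret.xml body is the conventional Ceph usage secret, assumed rather than read from the temporary file:

# Recreate the Ceph client secret in libvirt (the XML shape is assumed)
FSID=79feddb1-4bfc-557f-83b9-0d57c9f66c1b
cat > /tmp/secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${FSID}</uuid>
  <usage type='ceph'>
    <name>client.openstack secret</name>
  </usage>
</secret>
EOF
virsh secret-undefine "$FSID"
virsh secret-define --file /tmp/secret.xml
# Attach the base64 key carried in the later task's KEY variable
virsh secret-set-value --secret "$FSID" --base64 "$KEY"
rm -f /tmp/secret.xml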
Dec 05 09:32:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52738 DF PROTO=TCP SPT=43740 DPT=9105 SEQ=318564113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC0B4990000000001030307) 
Dec 05 09:32:52 np0005546420.localdomain python3.9[204326]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:53 np0005546420.localdomain sudo[204434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grsuhysjmuyvarbnpiqgxhtyjreqdvuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927173.206865-3264-40386605950436/AnsiballZ_command.py
Dec 05 09:32:53 np0005546420.localdomain sudo[204434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:53 np0005546420.localdomain sudo[204434]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:54 np0005546420.localdomain sudo[204545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ycselppbaxqfprwvnbbtrecbwmvyqacc ; FSID=79feddb1-4bfc-557f-83b9-0d57c9f66c1b KEY=AQBSjzJpAAAAABAA9vx62xurlc+sDq10LiR30Q== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927174.4479191-3288-96864306613250/AnsiballZ_command.py
Dec 05 09:32:54 np0005546420.localdomain sudo[204545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:54 np0005546420.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 05 09:32:54 np0005546420.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 05 09:32:55 np0005546420.localdomain polkitd[1032]: Registered Authentication Agent for unix-process:204549:1029316 (system bus name :1.2847 [pkttyagent --process 204549 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Dec 05 09:32:55 np0005546420.localdomain polkitd[1032]: Unregistered Authentication Agent for unix-process:204549:1029316 (system bus name :1.2847, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Dec 05 09:32:55 np0005546420.localdomain sudo[204545]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10986 DF PROTO=TCP SPT=47366 DPT=9100 SEQ=2280836205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC0C1D90000000001030307) 
Dec 05 09:32:55 np0005546420.localdomain sudo[204662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jqntiuhetbtgjkcqmefklffhkgomdzqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927175.2338731-3312-6188089817059/AnsiballZ_copy.py
Dec 05 09:32:55 np0005546420.localdomain sudo[204662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:55 np0005546420.localdomain python3.9[204664]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:55 np0005546420.localdomain sudo[204662]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:56 np0005546420.localdomain sudo[204772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qiifomzrbtdjsnlbndwjugburgxkdnwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927175.9273744-3336-39013924477668/AnsiballZ_stat.py
Dec 05 09:32:56 np0005546420.localdomain sudo[204772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:56 np0005546420.localdomain python3.9[204774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:56 np0005546420.localdomain sudo[204772]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:56 np0005546420.localdomain sudo[204860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jlkqvifltrknzfgelwoysqojjnfodpxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927175.9273744-3336-39013924477668/AnsiballZ_copy.py
Dec 05 09:32:56 np0005546420.localdomain sudo[204860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:56 np0005546420.localdomain python3.9[204862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927175.9273744-3336-39013924477668/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:56 np0005546420.localdomain sudo[204860]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:57 np0005546420.localdomain sudo[204970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nbwjvcmxgrrjgzhafmfimbxetvmvrqbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927177.245665-3384-163098830738164/AnsiballZ_file.py
Dec 05 09:32:57 np0005546420.localdomain sudo[204970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:57 np0005546420.localdomain python3.9[204972]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:57 np0005546420.localdomain sudo[204970]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:58 np0005546420.localdomain sudo[205080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zsxdojvlhfgiohrtnvsovvhlgwesprwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927177.9434562-3408-120356687828351/AnsiballZ_stat.py
Dec 05 09:32:58 np0005546420.localdomain sudo[205080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:58 np0005546420.localdomain python3.9[205082]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:58 np0005546420.localdomain sudo[205080]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:58 np0005546420.localdomain sudo[205137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxntqedjkkjfvalwwudnqdfykncruorz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927177.9434562-3408-120356687828351/AnsiballZ_file.py
Dec 05 09:32:58 np0005546420.localdomain sudo[205137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:58 np0005546420.localdomain python3.9[205139]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:32:58 np0005546420.localdomain sudo[205137]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:59 np0005546420.localdomain sudo[205247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhyifjeqchlrowrfbzdolcxrzhyownjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927179.1081178-3444-236430072916707/AnsiballZ_stat.py
Dec 05 09:32:59 np0005546420.localdomain sudo[205247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:32:59 np0005546420.localdomain python3.9[205249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:32:59 np0005546420.localdomain sudo[205247]: pam_unix(sudo:session): session closed for user root
Dec 05 09:32:59 np0005546420.localdomain sudo[205304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mzxzrfhwdioekzhzeilajshkundtzlap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927179.1081178-3444-236430072916707/AnsiballZ_file.py
Dec 05 09:32:59 np0005546420.localdomain sudo[205304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:00 np0005546420.localdomain python3.9[205306]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3w77svqw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:00 np0005546420.localdomain sudo[205304]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5327 DF PROTO=TCP SPT=41758 DPT=9102 SEQ=2070240297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC0D6190000000001030307) 
Dec 05 09:33:00 np0005546420.localdomain sudo[205414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zznaynrbulvfcoozkhduhkhlntolntsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927180.2859414-3480-115368817526590/AnsiballZ_stat.py
Dec 05 09:33:00 np0005546420.localdomain sudo[205414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:00 np0005546420.localdomain python3.9[205416]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:33:00 np0005546420.localdomain sudo[205414]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:01 np0005546420.localdomain sudo[205471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ggplluwoaegxpvjnyntjdwxxuoyveaau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927180.2859414-3480-115368817526590/AnsiballZ_file.py
Dec 05 09:33:01 np0005546420.localdomain sudo[205471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:01 np0005546420.localdomain python3.9[205473]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:01 np0005546420.localdomain sudo[205471]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:01 np0005546420.localdomain sudo[205581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-miyhtcexzmuuggryfxalueikgubstrhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927181.5013993-3519-237491134702124/AnsiballZ_command.py
Dec 05 09:33:01 np0005546420.localdomain sudo[205581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:01 np0005546420.localdomain python3.9[205583]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:33:01 np0005546420.localdomain sudo[205581]: pam_unix(sudo:session): session closed for user root
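[Editor's note] `nft -j list ruleset` dumps the current ruleset as JSON, which the firewall role snapshots before writing the edpm-* chain files seen below. The JSON form is convenient to filter; for example, listing every chain name with jq:

# Dump the ruleset as JSON and extract each chain's name
nft -j list ruleset | jq -r '.nftables[] | select(.chain) | .chain.name'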
Dec 05 09:33:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:33:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39323 DF PROTO=TCP SPT=53318 DPT=9882 SEQ=1189042568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC0DDDA0000000001030307) 
Dec 05 09:33:02 np0005546420.localdomain podman[205640]: 2025-12-05 09:33:02.522988599 +0000 UTC m=+0.091620124 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:33:02 np0005546420.localdomain podman[205640]: 2025-12-05 09:33:02.583441978 +0000 UTC m=+0.152073473 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 05 09:33:02 np0005546420.localdomain sudo[205715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-insmcwhfykalgvwokfvdicxetxlmxhwx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927182.219744-3543-185094716372347/AnsiballZ_edpm_nftables_from_files.py
Dec 05 09:33:02 np0005546420.localdomain sudo[205715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:02 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:33:02 np0005546420.localdomain python3[205718]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 09:33:02 np0005546420.localdomain sudo[205715]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:03 np0005546420.localdomain sudo[205826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vemikgrkttfwbqcuehpnkbmvqlhiljpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927183.1689837-3567-238344316567679/AnsiballZ_stat.py
Dec 05 09:33:03 np0005546420.localdomain sudo[205826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:03 np0005546420.localdomain python3.9[205828]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:33:03 np0005546420.localdomain sudo[205826]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:03 np0005546420.localdomain sudo[205883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bpzqqjzamvfjcbesaonusjirasgblbvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927183.1689837-3567-238344316567679/AnsiballZ_file.py
Dec 05 09:33:03 np0005546420.localdomain sudo[205883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52740 DF PROTO=TCP SPT=43740 DPT=9105 SEQ=318564113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC0E3DA0000000001030307) 
Dec 05 09:33:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:33:04.078 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:33:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:33:04.078 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:33:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:33:04.079 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:33:04 np0005546420.localdomain python3.9[205885]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:04 np0005546420.localdomain sudo[205883]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:05 np0005546420.localdomain sudo[205993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-stpqunhodrwhilkguhybctfymuuaqvfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927184.3201962-3603-261624147829064/AnsiballZ_stat.py
Dec 05 09:33:05 np0005546420.localdomain sudo[205993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:05 np0005546420.localdomain python3.9[205995]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:33:05 np0005546420.localdomain sudo[205993]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:05 np0005546420.localdomain sudo[206050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dapwjtiefqbubmtuerniedozggnjurmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927184.3201962-3603-261624147829064/AnsiballZ_file.py
Dec 05 09:33:05 np0005546420.localdomain sudo[206050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:05 np0005546420.localdomain python3.9[206052]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:05 np0005546420.localdomain sudo[206050]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:06 np0005546420.localdomain sudo[206160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wvqlegheiivckdobkrrshdarjtwtqxne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927185.8427753-3639-83831088506216/AnsiballZ_stat.py
Dec 05 09:33:06 np0005546420.localdomain sudo[206160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:06 np0005546420.localdomain python3.9[206162]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:33:06 np0005546420.localdomain sudo[206160]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:06 np0005546420.localdomain sudo[206217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nppcsxqyynlwxjueogxlpsjhmfddzkup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927185.8427753-3639-83831088506216/AnsiballZ_file.py
Dec 05 09:33:06 np0005546420.localdomain sudo[206217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:06 np0005546420.localdomain python3.9[206219]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:06 np0005546420.localdomain sudo[206217]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60163 DF PROTO=TCP SPT=51414 DPT=9101 SEQ=507684524 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC0EFDA0000000001030307) 
Dec 05 09:33:07 np0005546420.localdomain sudo[206327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cswclzxubbrgkpjpjayltmwbnowzkmvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927187.0194998-3675-138425723519722/AnsiballZ_stat.py
Dec 05 09:33:07 np0005546420.localdomain sudo[206327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:07 np0005546420.localdomain python3.9[206329]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:33:07 np0005546420.localdomain sudo[206327]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:07 np0005546420.localdomain sudo[206384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugtatbojuyvobqjofefzdbrkfnvcawlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927187.0194998-3675-138425723519722/AnsiballZ_file.py
Dec 05 09:33:07 np0005546420.localdomain sudo[206384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:08 np0005546420.localdomain python3.9[206386]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:08 np0005546420.localdomain sudo[206384]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:08 np0005546420.localdomain sudo[206494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fvjzltyaxdusxfwfdnfnfchlmsrvxghz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927188.2707324-3711-38579410281794/AnsiballZ_stat.py
Dec 05 09:33:08 np0005546420.localdomain sudo[206494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:08 np0005546420.localdomain python3.9[206496]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:33:09 np0005546420.localdomain sudo[206494]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:09 np0005546420.localdomain sudo[206584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cailjtbyjrcjzutytwpozrfsrvcwbfzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927188.2707324-3711-38579410281794/AnsiballZ_copy.py
Dec 05 09:33:09 np0005546420.localdomain sudo[206584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:09 np0005546420.localdomain python3.9[206586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927188.2707324-3711-38579410281794/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:09 np0005546420.localdomain sudo[206584]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:10 np0005546420.localdomain sudo[206694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bmobtqkbifnqmysndflznqyxhmifnpte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927189.7642434-3756-49384946417987/AnsiballZ_file.py
Dec 05 09:33:10 np0005546420.localdomain sudo[206694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:10 np0005546420.localdomain python3.9[206696]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:10 np0005546420.localdomain sudo[206694]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:10 np0005546420.localdomain sudo[206804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgbfawzsjqhxmcpnrusvpcvsijtcoedv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927190.4230895-3780-50568595166182/AnsiballZ_command.py
Dec 05 09:33:10 np0005546420.localdomain sudo[206804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:33:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6081 DF PROTO=TCP SPT=56110 DPT=9100 SEQ=619495593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC0FDD90000000001030307) 
Dec 05 09:33:10 np0005546420.localdomain podman[206807]: 2025-12-05 09:33:10.813048584 +0000 UTC m=+0.098312252 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:33:10 np0005546420.localdomain python3.9[206806]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
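The command above concatenates the five generated snippets in load order (chains, flushes, rules, update-jumps, jumps) and pipes them through nft -c -f -, which parses the ruleset without applying it. A rough Python equivalent of that validation step (file list copied from the log):

    import subprocess

    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def check_ruleset(files=FILES):
        """Syntax-check the concatenated ruleset; raises CalledProcessError on failure."""
        blob = b"".join(open(path, "rb").read() for path in files)
        subprocess.run(["nft", "-c", "-f", "-"], input=blob, check=True)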
Dec 05 09:33:11 np0005546420.localdomain podman[206807]: 2025-12-05 09:33:11.608424031 +0000 UTC m=+0.893687649 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:33:11 np0005546420.localdomain sudo[206804]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:11 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:33:12 np0005546420.localdomain sudo[206935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqzaxcdofiyqpawcvnnmxmirbjoteknw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927191.816865-3804-263477676291807/AnsiballZ_blockinfile.py
Dec 05 09:33:12 np0005546420.localdomain sudo[206935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:12 np0005546420.localdomain python3.9[206937]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:12 np0005546420.localdomain sudo[206935]: pam_unix(sudo:session): session closed for user root
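Given the parameters logged for the blockinfile task (marker "# {mark} ANSIBLE MANAGED BLOCK", marker_begin=BEGIN, marker_end=END, validated with nft -c -f %s), the maintained span in /etc/sysconfig/nftables.conf should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK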
Dec 05 09:33:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6082 DF PROTO=TCP SPT=56110 DPT=9100 SEQ=619495593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC105D90000000001030307) 
Dec 05 09:33:13 np0005546420.localdomain sudo[207045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-imwzxlytexepbkjegzagkhbvreyzpici ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927192.7568092-3831-71071364639604/AnsiballZ_command.py
Dec 05 09:33:13 np0005546420.localdomain sudo[207045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:14 np0005546420.localdomain python3.9[207047]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:33:14 np0005546420.localdomain sudo[207045]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:14 np0005546420.localdomain sudo[207156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jrtkbepenziosnhzqadkewnaezkgmrrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927194.2534308-3855-143334759039914/AnsiballZ_stat.py
Dec 05 09:33:14 np0005546420.localdomain sudo[207156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:14 np0005546420.localdomain python3.9[207158]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:33:14 np0005546420.localdomain sudo[207156]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:15 np0005546420.localdomain sudo[207268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-llbhpnhbehfibduzkjhorfyufhukvfpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927195.3409755-3880-216605702033510/AnsiballZ_command.py
Dec 05 09:33:15 np0005546420.localdomain sudo[207268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:15 np0005546420.localdomain python3.9[207270]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:33:15 np0005546420.localdomain sudo[207268]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:16 np0005546420.localdomain sudo[207381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hlcqpbdsmgkjwjcwewtzlpaoswknrpnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927196.0520115-3903-33318287732481/AnsiballZ_file.py
Dec 05 09:33:16 np0005546420.localdomain sudo[207381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:16 np0005546420.localdomain python3.9[207383]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:16 np0005546420.localdomain sudo[207381]: pam_unix(sudo:session): session closed for user root
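Taken together, the tasks from 09:33:10 through 09:33:16 implement an apply-on-change handshake: touch edpm-rules.nft.changed when the ruleset is rewritten, load the static chains, stat the sentinel, apply the flush/rules/update-jumps bundle, and finally delete the sentinel. A compact sketch of that pattern (paths from the log; the control flow is inferred):

    import os
    import subprocess

    SENTINEL = "/etc/nftables/edpm-rules.nft.changed"
    BUNDLE = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    def apply_if_changed():
        if not os.path.exists(SENTINEL):
            return  # ruleset unchanged since the last successful apply
        blob = b"".join(open(path, "rb").read() for path in BUNDLE)
        subprocess.run(["nft", "-f", "-"], input=blob, check=True)
        os.remove(SENTINEL)  # consume the sentinel so reruns stay idempotent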
Dec 05 09:33:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6083 DF PROTO=TCP SPT=56110 DPT=9100 SEQ=619495593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC115990000000001030307) 
Dec 05 09:33:17 np0005546420.localdomain sudo[207491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qrvavkagxomhinamtpyhahqohbslpdnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927196.7424543-3927-171361912012734/AnsiballZ_stat.py
Dec 05 09:33:17 np0005546420.localdomain sudo[207491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:17 np0005546420.localdomain python3.9[207493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:33:17 np0005546420.localdomain sudo[207491]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:17 np0005546420.localdomain sudo[207579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lwhtowhowdbkfaxafbjmkuztwvneiivu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927196.7424543-3927-171361912012734/AnsiballZ_copy.py
Dec 05 09:33:17 np0005546420.localdomain sudo[207579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:17 np0005546420.localdomain python3.9[207581]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927196.7424543-3927-171361912012734/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:17 np0005546420.localdomain sudo[207579]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:18 np0005546420.localdomain sudo[207689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dfwtpzjkubsvtwebjdkfqqpfkrxxrmbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927197.9570224-3972-272242795254529/AnsiballZ_stat.py
Dec 05 09:33:18 np0005546420.localdomain sudo[207689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:18 np0005546420.localdomain python3.9[207691]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:33:18 np0005546420.localdomain sudo[207689]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:18 np0005546420.localdomain sudo[207777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aloharugldsjvkzmvxyjapvqqkscfknz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927197.9570224-3972-272242795254529/AnsiballZ_copy.py
Dec 05 09:33:18 np0005546420.localdomain sudo[207777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4834 DF PROTO=TCP SPT=34212 DPT=9105 SEQ=2197554621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC11DB10000000001030307) 
Dec 05 09:33:18 np0005546420.localdomain python3.9[207779]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927197.9570224-3972-272242795254529/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:18 np0005546420.localdomain sudo[207777]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:19 np0005546420.localdomain sudo[207887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pgdirmqvasbzkcjzruuigxldnjgainao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927199.1431847-4017-89270818272031/AnsiballZ_stat.py
Dec 05 09:33:19 np0005546420.localdomain sudo[207887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:19 np0005546420.localdomain python3.9[207889]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:33:19 np0005546420.localdomain sudo[207887]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:19 np0005546420.localdomain sudo[207975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqvrqjwgvbfxyeaiwvlwzuwhbidrogcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927199.1431847-4017-89270818272031/AnsiballZ_copy.py
Dec 05 09:33:19 np0005546420.localdomain sudo[207975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:20 np0005546420.localdomain python3.9[207977]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927199.1431847-4017-89270818272031/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:20 np0005546420.localdomain sudo[207975]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:20 np0005546420.localdomain sudo[208085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxfauxjfknyxcizdhnxurccxiylbslxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927200.327264-4062-243681447153173/AnsiballZ_systemd.py
Dec 05 09:33:20 np0005546420.localdomain sudo[208085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:20 np0005546420.localdomain python3.9[208087]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:33:20 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:33:21 np0005546420.localdomain systemd-rc-local-generator[208109]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:33:21 np0005546420.localdomain systemd-sysv-generator[208114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:33:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:33:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:21 np0005546420.localdomain systemd[1]: Reached target edpm_libvirt.target.
Dec 05 09:33:21 np0005546420.localdomain sudo[208085]: pam_unix(sudo:session): session closed for user root
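The ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=restarted) maps onto three systemctl invocations, which is why a Reloading pass precedes "Reached target edpm_libvirt.target." A shell-level equivalent, sketched in Python:

    import subprocess

    def enable_and_restart(unit="edpm_libvirt.target"):
        """Mirror daemon_reload=True, enabled=True, state=restarted for one unit."""
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)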
Dec 05 09:33:21 np0005546420.localdomain sudo[208234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xatpjiqgpkmgtudldjonwqitridyoigi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927201.5688636-4086-72650941152079/AnsiballZ_systemd.py
Dec 05 09:33:21 np0005546420.localdomain sudo[208234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4836 DF PROTO=TCP SPT=34212 DPT=9105 SEQ=2197554621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC129D90000000001030307) 
Dec 05 09:33:22 np0005546420.localdomain python3.9[208236]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:33:22 np0005546420.localdomain systemd-rc-local-generator[208257]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:33:22 np0005546420.localdomain systemd-sysv-generator[208260]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:33:22 np0005546420.localdomain systemd-rc-local-generator[208297]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:33:22 np0005546420.localdomain systemd-sysv-generator[208301]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:22 np0005546420.localdomain sudo[208234]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:23 np0005546420.localdomain sshd[159614]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:33:23 np0005546420.localdomain systemd[1]: session-52.scope: Deactivated successfully.
Dec 05 09:33:23 np0005546420.localdomain systemd[1]: session-52.scope: Consumed 3min 40.557s CPU time.
Dec 05 09:33:23 np0005546420.localdomain systemd-logind[762]: Session 52 logged out. Waiting for processes to exit.
Dec 05 09:33:23 np0005546420.localdomain systemd-logind[762]: Removed session 52.
Dec 05 09:33:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6084 DF PROTO=TCP SPT=56110 DPT=9100 SEQ=619495593 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC135D90000000001030307) 
Dec 05 09:33:28 np0005546420.localdomain sshd[208327]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:33:28 np0005546420.localdomain sshd[208327]: Accepted publickey for zuul from 192.168.122.30 port 40700 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:33:28 np0005546420.localdomain systemd-logind[762]: New session 53 of user zuul.
Dec 05 09:33:28 np0005546420.localdomain systemd[1]: Started Session 53 of User zuul.
Dec 05 09:33:28 np0005546420.localdomain sshd[208327]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:33:29 np0005546420.localdomain python3.9[208438]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:33:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62800 DF PROTO=TCP SPT=57934 DPT=9102 SEQ=4227311911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC14B5A0000000001030307) 
Dec 05 09:33:31 np0005546420.localdomain python3.9[208550]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:33:31 np0005546420.localdomain network[208567]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:33:31 np0005546420.localdomain network[208568]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:33:31 np0005546420.localdomain network[208569]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:33:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:33:32 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:33:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45578 DF PROTO=TCP SPT=47384 DPT=9882 SEQ=17324909 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC153DA0000000001030307) 
Dec 05 09:33:32 np0005546420.localdomain systemd[1]: tmp-crun.581btg.mount: Deactivated successfully.
Dec 05 09:33:32 np0005546420.localdomain podman[208611]: 2025-12-05 09:33:32.747164348 +0000 UTC m=+0.095729908 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:33:32 np0005546420.localdomain podman[208611]: 2025-12-05 09:33:32.791557903 +0000 UTC m=+0.140123463 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:33:32 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:33:34 np0005546420.localdomain sudo[208679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:33:34 np0005546420.localdomain sudo[208679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:33:34 np0005546420.localdomain sudo[208679]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:34 np0005546420.localdomain sudo[208697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:33:34 np0005546420.localdomain sudo[208697]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:33:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4838 DF PROTO=TCP SPT=34212 DPT=9105 SEQ=2197554621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC159D90000000001030307) 
Dec 05 09:33:34 np0005546420.localdomain sudo[208697]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59364 DF PROTO=TCP SPT=36826 DPT=9101 SEQ=1619589974 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC163D90000000001030307) 
Dec 05 09:33:37 np0005546420.localdomain sudo[208801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:33:37 np0005546420.localdomain sudo[208801]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:33:37 np0005546420.localdomain sudo[208801]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:39 np0005546420.localdomain sudo[208910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lefzpseffigglqbqowakwdkroyfufojk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927218.8161185-102-26720482800374/AnsiballZ_setup.py
Dec 05 09:33:39 np0005546420.localdomain sudo[208910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:39 np0005546420.localdomain python3.9[208912]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:33:39 np0005546420.localdomain sudo[208910]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:40 np0005546420.localdomain sudo[208973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxlbuvsdxcxucplmuhvearlyaxwzzsaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927218.8161185-102-26720482800374/AnsiballZ_dnf.py
Dec 05 09:33:40 np0005546420.localdomain sudo[208973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:40 np0005546420.localdomain python3.9[208975]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
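The dnf task above ensures iscsi-initiator-utils is installed; its sudo session stays open until 09:33:48, consistent with a package transaction running. Its effect is roughly the following (the module drives the dnf API directly rather than shelling out, so this one-liner is only an approximation):

    import subprocess

    subprocess.run(["dnf", "install", "-y", "iscsi-initiator-utils"], check=True)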
Dec 05 09:33:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10929 DF PROTO=TCP SPT=35452 DPT=9100 SEQ=3714877377 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC1731A0000000001030307) 
Dec 05 09:33:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:33:42 np0005546420.localdomain podman[208978]: 2025-12-05 09:33:42.505782358 +0000 UTC m=+0.082226429 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 05 09:33:42 np0005546420.localdomain podman[208978]: 2025-12-05 09:33:42.538342247 +0000 UTC m=+0.114786378 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 09:33:42 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:33:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10930 DF PROTO=TCP SPT=35452 DPT=9100 SEQ=3714877377 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC17B190000000001030307) 
Dec 05 09:33:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10931 DF PROTO=TCP SPT=35452 DPT=9100 SEQ=3714877377 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC18ADA0000000001030307) 
Dec 05 09:33:48 np0005546420.localdomain sudo[208973]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23566 DF PROTO=TCP SPT=45372 DPT=9105 SEQ=354642032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC192E10000000001030307) 
Dec 05 09:33:49 np0005546420.localdomain sudo[209102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfbccfpldefgldbbuwxichlzhjoyamru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927228.6903682-138-157237877502533/AnsiballZ_stat.py
Dec 05 09:33:49 np0005546420.localdomain sudo[209102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:49 np0005546420.localdomain python3.9[209104]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:33:49 np0005546420.localdomain sudo[209102]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:49 np0005546420.localdomain sudo[209214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eapdlddpwudeaoblvjdpwgaunmdijpmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927229.4685256-162-54787300107209/AnsiballZ_copy.py
Dec 05 09:33:49 np0005546420.localdomain sudo[209214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:50 np0005546420.localdomain python3.9[209216]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:50 np0005546420.localdomain sudo[209214]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:50 np0005546420.localdomain sudo[209324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sevflllohpeffoharkkhkjphtiaaskrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927230.28373-186-230810254873057/AnsiballZ_command.py
Dec 05 09:33:50 np0005546420.localdomain sudo[209324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:50 np0005546420.localdomain python3.9[209326]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"
                                                             _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:33:50 np0005546420.localdomain sudo[209324]: pam_unix(sudo:session): session closed for user root
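The stat/copy/mv sequence at 09:33:49-09:33:50 adopts the container-generated iSCSI configuration: copy it into /etc/iscsi with permissions preserved, then rename the source directory to iscsi.adopted so later runs find nothing left to adopt. A sketch of the same logic (function name illustrative):

    import os
    import shutil

    SRC = "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi"

    def adopt_iscsi_config(src=SRC, dest="/etc/iscsi"):
        if not os.path.isdir(src):
            return  # already adopted on a previous run
        shutil.copytree(src, dest, dirs_exist_ok=True)  # copy2 preserves mode/times
        os.rename(src, src + ".adopted")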
Dec 05 09:33:51 np0005546420.localdomain sudo[209435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-budwgezbvkvsihejpepcjjdxgobvnwfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927231.088937-210-201428534946775/AnsiballZ_command.py
Dec 05 09:33:51 np0005546420.localdomain sudo[209435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:51 np0005546420.localdomain python3.9[209437]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:33:51 np0005546420.localdomain sudo[209435]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23568 DF PROTO=TCP SPT=45372 DPT=9105 SEQ=354642032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC19ED90000000001030307) 
Dec 05 09:33:52 np0005546420.localdomain sudo[209546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvuskqnnxtaqqjaamofbjscrtkasgwtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927231.77592-234-49288583047350/AnsiballZ_command.py
Dec 05 09:33:52 np0005546420.localdomain sudo[209546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:52 np0005546420.localdomain python3.9[209548]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:33:52 np0005546420.localdomain sudo[209546]: pam_unix(sudo:session): session closed for user root
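The two restorecon runs form a preview-then-apply pair: -nvr walks /etc/iscsi and /var/lib/iscsi recursively and only reports what would change (-n), while -rF actually relabels, with -F forcing a reset of the full default SELinux context rather than just the type. Equivalent calls:

    import subprocess

    PATHS = ["/etc/iscsi", "/var/lib/iscsi"]
    subprocess.run(["/usr/sbin/restorecon", "-nvr", *PATHS])             # dry run, verbose
    subprocess.run(["/usr/sbin/restorecon", "-rF", *PATHS], check=True)  # force relabel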
Dec 05 09:33:52 np0005546420.localdomain sudo[209657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-euksqmyyjkvrxmazgzqcatdzhzxgormr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927232.5623145-261-204028427500476/AnsiballZ_stat.py
Dec 05 09:33:52 np0005546420.localdomain sudo[209657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:53 np0005546420.localdomain python3.9[209659]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:33:53 np0005546420.localdomain sudo[209657]: pam_unix(sudo:session): session closed for user root
Dec 05 09:33:54 np0005546420.localdomain sudo[209769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sknhwyjkybszxgrenqyncwismnbdraet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927233.8528717-294-61363003406166/AnsiballZ_lineinfile.py
Dec 05 09:33:54 np0005546420.localdomain sudo[209769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:54 np0005546420.localdomain python3.9[209771]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:33:54 np0005546420.localdomain sudo[209769]: pam_unix(sudo:session): session closed for user root
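Per the lineinfile parameters above, /etc/iscsi/iscsid.conf ends up containing the CHAP algorithm list below, placed after the commented default matched by insertafter when the key was not already present (the anchor comment's exact text is assumed, not shown in the log):

    #node.session.auth.chap_algs = ...   (insertafter anchor, content assumed)
    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5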
Dec 05 09:33:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62802 DF PROTO=TCP SPT=57934 DPT=9102 SEQ=4227311911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC1ABD90000000001030307) 
Dec 05 09:33:56 np0005546420.localdomain sudo[209879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ebxkrrffxooromocbuhmnsinclkcugbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927235.3485096-321-227656059852958/AnsiballZ_systemd_service.py
Dec 05 09:33:56 np0005546420.localdomain sudo[209879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:56 np0005546420.localdomain python3.9[209881]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:33:56 np0005546420.localdomain systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 05 09:33:56 np0005546420.localdomain sudo[209879]: pam_unix(sudo:session): session closed for user root
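Enabling iscsid.socket rather than the daemon itself lets systemd socket-activate iscsid on first use, which is why the journal only shows "Listening on Open-iSCSI iscsid Socket." at this point. The equivalent task from the logged arguments:

    - name: Enable and start the iscsid socket
      ansible.builtin.systemd_service:
        name: iscsid.socket
        enabled: true
        state: started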
Dec 05 09:33:58 np0005546420.localdomain sudo[209993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzhuppmkycdnwkuqffoyvvuakomaiufl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927237.721482-345-26079806506673/AnsiballZ_systemd_service.py
Dec 05 09:33:58 np0005546420.localdomain sudo[209993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:58 np0005546420.localdomain python3.9[209995]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:33:58 np0005546420.localdomain systemd-sysv-generator[210027]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:33:58 np0005546420.localdomain systemd-rc-local-generator[210024]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: Starting Open-iSCSI...
Dec 05 09:33:58 np0005546420.localdomain iscsid[210036]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 05 09:33:58 np0005546420.localdomain iscsid[210036]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 05 09:33:58 np0005546420.localdomain iscsid[210036]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 05 09:33:58 np0005546420.localdomain iscsid[210036]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 05 09:33:58 np0005546420.localdomain iscsid[210036]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 05 09:33:58 np0005546420.localdomain iscsid[210036]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 05 09:33:58 np0005546420.localdomain iscsid[210036]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: Started Open-iSCSI.
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Dec 05 09:33:58 np0005546420.localdomain systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Dec 05 09:33:58 np0005546420.localdomain sudo[209993]: pam_unix(sudo:session): session closed for user root
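The iscsid warnings above persist until /etc/iscsi/initiatorname.iscsi exists with a valid IQN. A hypothetical task that would satisfy the daemon (the IQN value is a placeholder, not this host's real initiator name):

    - name: Create /etc/iscsi/initiatorname.iscsi (placeholder IQN)
      ansible.builtin.copy:
        dest: /etc/iscsi/initiatorname.iscsi
        mode: '0644'
        content: |
          InitiatorName=iqn.2001-04.com.example:node1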
Dec 05 09:33:59 np0005546420.localdomain sudo[210145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdghmwvhitmdtbvdeosgykyfwxmdrxgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927239.3476639-378-249303531597635/AnsiballZ_service_facts.py
Dec 05 09:33:59 np0005546420.localdomain sudo[210145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:33:59 np0005546420.localdomain python3.9[210147]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:33:59 np0005546420.localdomain network[210164]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:33:59 np0005546420.localdomain network[210165]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:33:59 np0005546420.localdomain network[210166]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:34:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35434 DF PROTO=TCP SPT=56578 DPT=9102 SEQ=3610630842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC1C0990000000001030307) 
Dec 05 09:34:00 np0005546420.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 05 09:34:00 np0005546420.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 05 09:34:01 np0005546420.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service.
Dec 05 09:34:01 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:34:02 np0005546420.localdomain setroubleshoot[210180]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 10952846-a29f-4e33-8742-189d40b18736
Dec 05 09:34:02 np0005546420.localdomain setroubleshoot[210180]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default,
                                                                 then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
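If the denial is judged benign, setroubleshoot's catchall suggestion can be scripted; a sketch that wraps the two suggested commands in tasks (the my-iscsid module name comes from the suggestion above; fixing the file context, as the earlier restorecon tasks attempt, is normally preferable to a blanket allow):

    - name: Build a local SELinux policy module from the iscsid denials
      ansible.builtin.shell: ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
      args:
        chdir: /tmp

    - name: Install the generated policy module at priority 300
      ansible.builtin.command: semodule -X 300 -i /tmp/my-iscsid.pp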
Dec 05 09:34:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32427 DF PROTO=TCP SPT=37668 DPT=9882 SEQ=621340040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC1C7D90000000001030307) 
Dec 05 09:34:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:34:02 np0005546420.localdomain podman[210292]: 2025-12-05 09:34:02.962018439 +0000 UTC m=+0.109529157 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 09:34:03 np0005546420.localdomain podman[210292]: 2025-12-05 09:34:03.011573598 +0000 UTC m=+0.159084396 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:34:03 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:34:03 np0005546420.localdomain sudo[210145]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:34:04.079 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:34:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:34:04.080 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:34:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:34:04.080 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:34:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23570 DF PROTO=TCP SPT=45372 DPT=9105 SEQ=354642032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC1CFDA0000000001030307) 
Dec 05 09:34:06 np0005546420.localdomain sudo[210438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdjrmmcxfymgfwmfkfkcviaibcmmaqrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927246.1661224-408-141196713006713/AnsiballZ_file.py
Dec 05 09:34:06 np0005546420.localdomain sudo[210438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:06 np0005546420.localdomain python3.9[210440]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 09:34:06 np0005546420.localdomain sudo[210438]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24477 DF PROTO=TCP SPT=59484 DPT=9101 SEQ=2272966468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC1D9DA0000000001030307) 
Dec 05 09:34:07 np0005546420.localdomain sudo[210548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efgbvlhpakuiqojcbaxzvtwdoquieklw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927247.0057862-432-64329798295228/AnsiballZ_modprobe.py
Dec 05 09:34:07 np0005546420.localdomain sudo[210548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:07 np0005546420.localdomain python3.9[210550]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 05 09:34:07 np0005546420.localdomain sudo[210548]: pam_unix(sudo:session): session closed for user root
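The modprobe task above loads dm-multipath immediately without touching persistence (that is handled separately just below). From the logged arguments:

    - name: Load the dm-multipath kernel module
      community.general.modprobe:
        name: dm-multipath
        state: present
        persistent: disabled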
Dec 05 09:34:08 np0005546420.localdomain sudo[210662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eijhxohqxxzwvghprsiqtclrjgizzmsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927247.8890448-456-189322771427378/AnsiballZ_stat.py
Dec 05 09:34:08 np0005546420.localdomain sudo[210662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:08 np0005546420.localdomain python3.9[210664]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:08 np0005546420.localdomain sudo[210662]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:08 np0005546420.localdomain sudo[210750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpzszfogblopqoniybdiikfymzpdhzbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927247.8890448-456-189322771427378/AnsiballZ_copy.py
Dec 05 09:34:08 np0005546420.localdomain sudo[210750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:09 np0005546420.localdomain python3.9[210752]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927247.8890448-456-189322771427378/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:09 np0005546420.localdomain sudo[210750]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:09 np0005546420.localdomain sudo[210860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djgzlckyqlpgpkiecnbfjsyjnwyjdcgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927249.339463-504-226489969511755/AnsiballZ_lineinfile.py
Dec 05 09:34:09 np0005546420.localdomain sudo[210860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:09 np0005546420.localdomain python3.9[210862]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:09 np0005546420.localdomain sudo[210860]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:10 np0005546420.localdomain sudo[210970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kljuqkwyuqjkfabfwyuoagghvbtranqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927249.995106-528-180259020425180/AnsiballZ_systemd.py
Dec 05 09:34:10 np0005546420.localdomain sudo[210970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64254 DF PROTO=TCP SPT=59470 DPT=9100 SEQ=14240005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC1E85A0000000001030307) 
Dec 05 09:34:10 np0005546420.localdomain python3.9[210972]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:34:10 np0005546420.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 05 09:34:10 np0005546420.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 05 09:34:10 np0005546420.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 05 09:34:11 np0005546420.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 05 09:34:11 np0005546420.localdomain systemd-modules-load[210976]: Module 'msr' is built in
Dec 05 09:34:11 np0005546420.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 05 09:34:11 np0005546420.localdomain sudo[210970]: pam_unix(sudo:session): session closed for user root
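Taken together, the last three changes make the module load survive reboots: a modules-load.d drop-in, a matching line in /etc/modules, and a restart of systemd-modules-load.service to apply it now. A sketch of the sequence (the drop-in content is inferred from the file name; the original was rendered from module-load.conf.j2):

    - name: Persist dm-multipath via modules-load.d
      ansible.builtin.copy:
        dest: /etc/modules-load.d/dm-multipath.conf
        mode: '0644'
        content: |
          dm-multipath

    - name: Also list the module in /etc/modules
      ansible.builtin.lineinfile:
        path: /etc/modules
        line: dm-multipath
        create: true
        mode: '0644'

    - name: Reload kernel modules now
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted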
Dec 05 09:34:11 np0005546420.localdomain sudo[211084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ppxeblrltihxjbvngdrvexucdvwohyzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927251.4694972-552-242421966848533/AnsiballZ_file.py
Dec 05 09:34:11 np0005546420.localdomain sudo[211084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:11 np0005546420.localdomain python3.9[211086]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:34:11 np0005546420.localdomain sudo[211084]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:12 np0005546420.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully.
Dec 05 09:34:12 np0005546420.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 05 09:34:12 np0005546420.localdomain sudo[211194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rkahxevcwognlgitsgiavgxejzfffpey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927252.2787676-579-229763859208287/AnsiballZ_stat.py
Dec 05 09:34:12 np0005546420.localdomain sudo[211194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:12 np0005546420.localdomain python3.9[211196]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:34:12 np0005546420.localdomain sudo[211194]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64255 DF PROTO=TCP SPT=59470 DPT=9100 SEQ=14240005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC1F0590000000001030307) 
Dec 05 09:34:13 np0005546420.localdomain sudo[211304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zucvjeqjikpobpklgpfcugjttfclzfvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927253.049528-606-99923704291873/AnsiballZ_stat.py
Dec 05 09:34:13 np0005546420.localdomain sudo[211304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:34:13 np0005546420.localdomain systemd[1]: tmp-crun.MLS28T.mount: Deactivated successfully.
Dec 05 09:34:13 np0005546420.localdomain podman[211307]: 2025-12-05 09:34:13.452539996 +0000 UTC m=+0.094512447 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 09:34:13 np0005546420.localdomain podman[211307]: 2025-12-05 09:34:13.460828835 +0000 UTC m=+0.102801306 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 09:34:13 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:34:13 np0005546420.localdomain python3.9[211306]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:34:13 np0005546420.localdomain sudo[211304]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:14 np0005546420.localdomain sudo[211432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mndxyygfkmrywpsvokqpolaescuekesd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927254.248207-630-137200576061202/AnsiballZ_stat.py
Dec 05 09:34:14 np0005546420.localdomain sudo[211432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:14 np0005546420.localdomain python3.9[211434]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:14 np0005546420.localdomain sudo[211432]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:15 np0005546420.localdomain sudo[211520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcqhhdnjlkufefpbdaytwvvlgquyggvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927254.248207-630-137200576061202/AnsiballZ_copy.py
Dec 05 09:34:15 np0005546420.localdomain sudo[211520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:15 np0005546420.localdomain python3.9[211522]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927254.248207-630-137200576061202/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:15 np0005546420.localdomain sudo[211520]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:15 np0005546420.localdomain sudo[211630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xzgdutkfzqrjzzbwsddsufuuevfmbriv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927255.525883-675-127926541021573/AnsiballZ_command.py
Dec 05 09:34:15 np0005546420.localdomain sudo[211630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:16 np0005546420.localdomain python3.9[211632]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:34:16 np0005546420.localdomain sudo[211630]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:16 np0005546420.localdomain sudo[211741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpubpadfgefsquiwlpvgwfnwgrfcxvwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927256.3707056-699-75670902738166/AnsiballZ_lineinfile.py
Dec 05 09:34:16 np0005546420.localdomain sudo[211741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64256 DF PROTO=TCP SPT=59470 DPT=9100 SEQ=14240005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC2001A0000000001030307) 
Dec 05 09:34:16 np0005546420.localdomain python3.9[211743]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:16 np0005546420.localdomain sudo[211741]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:17 np0005546420.localdomain sudo[211851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dambbrbahqkngkoyktxzaswobwrkzbmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927257.0880587-723-176846401018621/AnsiballZ_replace.py
Dec 05 09:34:17 np0005546420.localdomain sudo[211851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:17 np0005546420.localdomain python3.9[211853]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:17 np0005546420.localdomain sudo[211851]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:18 np0005546420.localdomain sudo[211961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dahmxnvswtfrwsqgzlvgpsixengocefl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927257.961519-747-140529373967442/AnsiballZ_replace.py
Dec 05 09:34:18 np0005546420.localdomain sudo[211961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:18 np0005546420.localdomain python3.9[211963]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:18 np0005546420.localdomain sudo[211961]: pam_unix(sudo:session): session closed for user root
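The grep/lineinfile/replace sequence above ensures /etc/multipath.conf carries a blacklist section while stripping any catch-all devnode ".*" rule that would blacklist every device. A sketch with the idempotence guard made explicit (the register variable is illustrative):

    - name: Check whether multipath.conf already has a blacklist section
      ansible.builtin.command: grep -q '^blacklist\s*{' /etc/multipath.conf
      register: mp_blacklist
      changed_when: false
      failed_when: mp_blacklist.rc not in [0, 1]

    - name: Open a blacklist section when none exists
      ansible.builtin.lineinfile:
        path: /etc/multipath.conf
        line: 'blacklist {'
        state: present
      when: mp_blacklist.rc == 1

    - name: Close the newly opened section
      ansible.builtin.replace:
        path: /etc/multipath.conf
        regexp: '^(blacklist {)$'
        replace: '\1\n}'
      when: mp_blacklist.rc == 1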
Dec 05 09:34:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32428 DF PROTO=TCP SPT=37668 DPT=9882 SEQ=621340040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC207DA0000000001030307) 
Dec 05 09:34:19 np0005546420.localdomain sudo[212071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fytshkmgwfnirekoicqypjjjwkuajogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927258.876237-774-91449532985948/AnsiballZ_lineinfile.py
Dec 05 09:34:19 np0005546420.localdomain sudo[212071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:19 np0005546420.localdomain python3.9[212073]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:19 np0005546420.localdomain sudo[212071]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:19 np0005546420.localdomain sudo[212181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhfdpxkxucqsrflpsaiviyqnsfftuuyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927259.4970746-774-185873795081383/AnsiballZ_lineinfile.py
Dec 05 09:34:19 np0005546420.localdomain sudo[212181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:19 np0005546420.localdomain python3.9[212183]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:20 np0005546420.localdomain sudo[212181]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:20 np0005546420.localdomain sudo[212291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dtmkvowqpznjywilbcypgpojsuxvvtma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927260.1215234-774-163633403299820/AnsiballZ_lineinfile.py
Dec 05 09:34:20 np0005546420.localdomain sudo[212291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:20 np0005546420.localdomain python3.9[212293]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:20 np0005546420.localdomain sudo[212291]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:21 np0005546420.localdomain sudo[212401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cceqwwpcxvkyyignvevbjhlicjfsluew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927260.720178-774-254401100149032/AnsiballZ_lineinfile.py
Dec 05 09:34:21 np0005546420.localdomain sudo[212401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:21 np0005546420.localdomain python3.9[212403]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:21 np0005546420.localdomain sudo[212401]: pam_unix(sudo:session): session closed for user root
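The four lineinfile edits above differ only in option name and value, so they collapse naturally into a single looped task (structure illustrative, values taken from the logged calls):

    - name: Set multipath defaults
      ansible.builtin.lineinfile:
        path: /etc/multipath.conf
        regexp: '^\s+{{ item.option }}'
        line: "        {{ item.option }} {{ item.value }}"
        insertafter: '^defaults'
        firstmatch: true
        state: present
      loop:
        - { option: find_multipaths, value: 'yes' }
        - { option: recheck_wwid, value: 'yes' }
        - { option: skip_kpartx, value: 'yes' }
        - { option: user_friendly_names, value: 'no' }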
Dec 05 09:34:21 np0005546420.localdomain sudo[212511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gouwprfnzqyebbhoshabhmjueawniayz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927261.4741883-861-197875994251911/AnsiballZ_stat.py
Dec 05 09:34:21 np0005546420.localdomain sudo[212511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36300 DF PROTO=TCP SPT=35862 DPT=9105 SEQ=1849388407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC214190000000001030307) 
Dec 05 09:34:21 np0005546420.localdomain python3.9[212513]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:34:21 np0005546420.localdomain sudo[212511]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:22 np0005546420.localdomain sudo[212623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aigmqicnkpkxcitcaftrsshdawsbswet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927262.145227-885-132647552957365/AnsiballZ_file.py
Dec 05 09:34:22 np0005546420.localdomain sudo[212623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:22 np0005546420.localdomain python3.9[212625]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:22 np0005546420.localdomain sudo[212623]: pam_unix(sudo:session): session closed for user root
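The touch above drops a marker file that later tasks can test to decide whether multipathd needs restarting after the config edits. The equivalent task:

    - name: Flag that multipathd needs a restart
      ansible.builtin.file:
        path: /etc/multipath/.multipath_restart_required
        state: touch
        mode: '0644'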
Dec 05 09:34:23 np0005546420.localdomain sudo[212733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pznpbrmksmiwmalpniyxxmwmrartvlii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927262.953959-912-63831594467256/AnsiballZ_file.py
Dec 05 09:34:23 np0005546420.localdomain sudo[212733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:23 np0005546420.localdomain python3.9[212735]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:34:23 np0005546420.localdomain sudo[212733]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:23 np0005546420.localdomain sudo[212843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nmquxvfxwkiieuhfenrujkdhvveeedch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927263.6838627-937-12180409507950/AnsiballZ_stat.py
Dec 05 09:34:23 np0005546420.localdomain sudo[212843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:24 np0005546420.localdomain python3.9[212845]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:24 np0005546420.localdomain sudo[212843]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:24 np0005546420.localdomain sudo[212900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmymaswkdisuyrkkawgufbztkgztjywi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927263.6838627-937-12180409507950/AnsiballZ_file.py
Dec 05 09:34:24 np0005546420.localdomain sudo[212900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:24 np0005546420.localdomain python3.9[212902]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:34:24 np0005546420.localdomain sudo[212900]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64257 DF PROTO=TCP SPT=59470 DPT=9100 SEQ=14240005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC21FD90000000001030307) 
Dec 05 09:34:25 np0005546420.localdomain sudo[213010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cipxpgvomdhdzuqkzbztkenypunsnjqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927265.2185023-937-204787899087402/AnsiballZ_stat.py
Dec 05 09:34:25 np0005546420.localdomain sudo[213010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:25 np0005546420.localdomain python3.9[213012]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:25 np0005546420.localdomain sudo[213010]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:25 np0005546420.localdomain sudo[213067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fsylongxvkijzgothfspktkxosvdacec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927265.2185023-937-204787899087402/AnsiballZ_file.py
Dec 05 09:34:25 np0005546420.localdomain sudo[213067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:26 np0005546420.localdomain python3.9[213069]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:34:26 np0005546420.localdomain sudo[213067]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:26 np0005546420.localdomain sudo[213177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxpvcmbxeupffaecgxlksuynflzbeghv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927266.4765995-1005-240557919607522/AnsiballZ_file.py
Dec 05 09:34:26 np0005546420.localdomain sudo[213177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:26 np0005546420.localdomain python3.9[213179]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:26 np0005546420.localdomain sudo[213177]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:27 np0005546420.localdomain sudo[213287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktqbnsezlaqszcgwlscnsmhykxlgtbne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927267.4590228-1029-182566548058390/AnsiballZ_stat.py
Dec 05 09:34:27 np0005546420.localdomain sudo[213287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:27 np0005546420.localdomain python3.9[213289]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:28 np0005546420.localdomain sudo[213287]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:28 np0005546420.localdomain sudo[213344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eokjxmkryveitppuhcmnylttlxzwwkqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927267.4590228-1029-182566548058390/AnsiballZ_file.py
Dec 05 09:34:28 np0005546420.localdomain sudo[213344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:28 np0005546420.localdomain python3.9[213346]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:28 np0005546420.localdomain sudo[213344]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:28 np0005546420.localdomain sudo[213454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-keibxvagfqzjbclovsfypkhqgnmkjrwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927268.7102282-1065-112628277334463/AnsiballZ_stat.py
Dec 05 09:34:28 np0005546420.localdomain sudo[213454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:29 np0005546420.localdomain python3.9[213456]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:29 np0005546420.localdomain sudo[213454]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:29 np0005546420.localdomain sudo[213511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vxhzzlxtuqrwoqvanrbenkgbwxliqzbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927268.7102282-1065-112628277334463/AnsiballZ_file.py
Dec 05 09:34:29 np0005546420.localdomain sudo[213511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:29 np0005546420.localdomain python3.9[213513]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:29 np0005546420.localdomain sudo[213511]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:30 np0005546420.localdomain sudo[213621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yfjrcyvpiznsxitwllnkyoqbwjnajnob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927269.8169274-1101-43259557839850/AnsiballZ_systemd.py
Dec 05 09:34:30 np0005546420.localdomain sudo[213621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:30 np0005546420.localdomain python3.9[213623]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:34:30 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:34:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8764 DF PROTO=TCP SPT=34372 DPT=9102 SEQ=2946643415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC235990000000001030307) 
Dec 05 09:34:30 np0005546420.localdomain systemd-rc-local-generator[213641]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:34:30 np0005546420.localdomain systemd-sysv-generator[213648]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:34:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:34:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:30 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:30 np0005546420.localdomain sudo[213621]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:31 np0005546420.localdomain sudo[213769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dunxkdfyrpelytduhtvtqimkjuhebeae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927271.017492-1125-274322884359791/AnsiballZ_stat.py
Dec 05 09:34:31 np0005546420.localdomain sudo[213769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:31 np0005546420.localdomain python3.9[213771]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:31 np0005546420.localdomain sudo[213769]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:31 np0005546420.localdomain sudo[213826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfztqounjkwssgqnxkyvymlnpwsshkxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927271.017492-1125-274322884359791/AnsiballZ_file.py
Dec 05 09:34:31 np0005546420.localdomain sudo[213826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:32 np0005546420.localdomain python3.9[213828]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:32 np0005546420.localdomain sudo[213826]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:32 np0005546420.localdomain sudo[213936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjzgridmminijyoxmvzhclilaunzxqqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927272.1893904-1161-157191953463855/AnsiballZ_stat.py
Dec 05 09:34:32 np0005546420.localdomain sudo[213936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22461 DF PROTO=TCP SPT=53542 DPT=9882 SEQ=1764650056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC23DD90000000001030307) 
Dec 05 09:34:32 np0005546420.localdomain python3.9[213938]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:32 np0005546420.localdomain sudo[213936]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:32 np0005546420.localdomain sudo[213993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exohxzvbhmrmdzedhvyidtqdadhuupsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927272.1893904-1161-157191953463855/AnsiballZ_file.py
Dec 05 09:34:32 np0005546420.localdomain sudo[213993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:33 np0005546420.localdomain python3.9[213995]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:33 np0005546420.localdomain sudo[213993]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:34:33 np0005546420.localdomain podman[214052]: 2025-12-05 09:34:33.528976329 +0000 UTC m=+0.086620943 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 05 09:34:33 np0005546420.localdomain sudo[214125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frkgqksqliavnjcddrfskuqtogfjcfrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927273.3298037-1197-68613831506171/AnsiballZ_systemd.py
Dec 05 09:34:33 np0005546420.localdomain sudo[214125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:33 np0005546420.localdomain podman[214052]: 2025-12-05 09:34:33.616800726 +0000 UTC m=+0.174445300 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller)
Dec 05 09:34:33 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:34:33 np0005546420.localdomain python3.9[214130]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:34:33 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:34:34 np0005546420.localdomain systemd-sysv-generator[214157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:34:34 np0005546420.localdomain systemd-rc-local-generator[214152]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36302 DF PROTO=TCP SPT=35862 DPT=9105 SEQ=1849388407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC243D90000000001030307) 
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:34:34 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 09:34:34 np0005546420.localdomain sudo[214125]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:35 np0005546420.localdomain sudo[214280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pukuzvofdofzwxvebxvxujhlhlfjxodf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927274.745879-1227-89616745970903/AnsiballZ_file.py
Dec 05 09:34:35 np0005546420.localdomain sudo[214280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:35 np0005546420.localdomain python3.9[214282]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:34:35 np0005546420.localdomain sudo[214280]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:36 np0005546420.localdomain sudo[214391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mutdaeogbfuwmtdhpqjdclelpllqsvbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927275.755932-1251-212418872245405/AnsiballZ_stat.py
Dec 05 09:34:36 np0005546420.localdomain sudo[214391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:36 np0005546420.localdomain python3.9[214393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:36 np0005546420.localdomain sudo[214391]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:36 np0005546420.localdomain sudo[214479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-slnltkwawwgsjjyqmzpyktapwhhgfiws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927275.755932-1251-212418872245405/AnsiballZ_copy.py
Dec 05 09:34:36 np0005546420.localdomain sudo[214479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:36 np0005546420.localdomain python3.9[214481]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927275.755932-1251-212418872245405/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:34:36 np0005546420.localdomain sudo[214479]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:37 np0005546420.localdomain sudo[214499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:34:37 np0005546420.localdomain sudo[214499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:34:37 np0005546420.localdomain sudo[214499]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61383 DF PROTO=TCP SPT=50830 DPT=9101 SEQ=91794894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC24FD90000000001030307) 
Dec 05 09:34:37 np0005546420.localdomain sudo[214517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:34:37 np0005546420.localdomain sudo[214517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:34:37 np0005546420.localdomain sudo[214638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmiprenopnkfvywlguvcurbnakudolkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927277.4502234-1303-113736976385259/AnsiballZ_file.py
Dec 05 09:34:37 np0005546420.localdomain sudo[214638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:37 np0005546420.localdomain python3.9[214640]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:34:37 np0005546420.localdomain sudo[214638]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:37 np0005546420.localdomain sudo[214517]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:38 np0005546420.localdomain sudo[214765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjcigrxhmhwtvpiagogtcfpemstqxwje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927278.1543198-1326-142192909101566/AnsiballZ_stat.py
Dec 05 09:34:38 np0005546420.localdomain sudo[214765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:38 np0005546420.localdomain python3.9[214767]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:34:38 np0005546420.localdomain sudo[214765]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:38 np0005546420.localdomain sudo[214809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:34:38 np0005546420.localdomain sudo[214809]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:34:38 np0005546420.localdomain sudo[214809]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:39 np0005546420.localdomain sudo[214871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqnkadoqdfbtcubppbgdjcwuxfjmwfri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927278.1543198-1326-142192909101566/AnsiballZ_copy.py
Dec 05 09:34:39 np0005546420.localdomain sudo[214871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:39 np0005546420.localdomain python3.9[214873]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927278.1543198-1326-142192909101566/.source.json _original_basename=.21ehpjxg follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:39 np0005546420.localdomain sudo[214871]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:39 np0005546420.localdomain sudo[214981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bwwauyiglqzkmfxkzzqbsdvkccrelcwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927279.457695-1371-206714215050854/AnsiballZ_file.py
Dec 05 09:34:39 np0005546420.localdomain sudo[214981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:39 np0005546420.localdomain python3.9[214983]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:39 np0005546420.localdomain sudo[214981]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:40 np0005546420.localdomain sudo[215091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqpxqnztbpgolucerzsdqkbcsghznaqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927280.1613996-1395-246002905236945/AnsiballZ_stat.py
Dec 05 09:34:40 np0005546420.localdomain sudo[215091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:40 np0005546420.localdomain sudo[215091]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45030 DF PROTO=TCP SPT=51844 DPT=9100 SEQ=3078278956 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC25D9A0000000001030307) 
Dec 05 09:34:40 np0005546420.localdomain sudo[215179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfhlshzprqexphzjkagaabpotmyhuaem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927280.1613996-1395-246002905236945/AnsiballZ_copy.py
Dec 05 09:34:40 np0005546420.localdomain sudo[215179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:41 np0005546420.localdomain sudo[215179]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:41 np0005546420.localdomain sudo[215289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylsnuyyyhemsehmxulfsbrallwvokftn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927281.5603294-1446-167983498961206/AnsiballZ_container_config_data.py
Dec 05 09:34:41 np0005546420.localdomain sudo[215289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:42 np0005546420.localdomain python3.9[215291]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 05 09:34:42 np0005546420.localdomain sudo[215289]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:42 np0005546420.localdomain systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 05 09:34:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45031 DF PROTO=TCP SPT=51844 DPT=9100 SEQ=3078278956 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC265990000000001030307) 
Dec 05 09:34:42 np0005546420.localdomain sudo[215400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljhyorpftfadepnlnhdftovjwwoqiynd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927282.4218488-1473-174737218505657/AnsiballZ_container_config_hash.py
Dec 05 09:34:42 np0005546420.localdomain sudo[215400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:43 np0005546420.localdomain python3.9[215402]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:34:43 np0005546420.localdomain sudo[215400]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:43 np0005546420.localdomain sudo[215510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-shltzqnytbwqcjksbnwrczjkhjxeiaaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927283.3636327-1500-89424811408004/AnsiballZ_podman_container_info.py
Dec 05 09:34:43 np0005546420.localdomain sudo[215510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:34:43 np0005546420.localdomain podman[215513]: 2025-12-05 09:34:43.949782557 +0000 UTC m=+0.092664569 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 09:34:43 np0005546420.localdomain podman[215513]: 2025-12-05 09:34:43.981204869 +0000 UTC m=+0.124086941 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:34:43 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:34:44 np0005546420.localdomain python3.9[215512]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 09:34:44 np0005546420.localdomain systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 05 09:34:44 np0005546420.localdomain sudo[215510]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45032 DF PROTO=TCP SPT=51844 DPT=9100 SEQ=3078278956 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC275590000000001030307) 
Dec 05 09:34:48 np0005546420.localdomain sudo[215665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezngjmwmoxzldcfytthtlzlezvunqabc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927287.8343298-1539-219568541542462/AnsiballZ_edpm_container_manage.py
Dec 05 09:34:48 np0005546420.localdomain sudo[215665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:48 np0005546420.localdomain python3[215667]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:34:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28644 DF PROTO=TCP SPT=45576 DPT=9105 SEQ=865549996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC27D400000000001030307) 
Dec 05 09:34:50 np0005546420.localdomain podman[215682]: 2025-12-05 09:34:48.827708244 +0000 UTC m=+0.050086560 image pull  quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 05 09:34:50 np0005546420.localdomain podman[215730]: 
Dec 05 09:34:50 np0005546420.localdomain podman[215730]: 2025-12-05 09:34:50.69515437 +0000 UTC m=+0.091410312 container create 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 09:34:50 np0005546420.localdomain podman[215730]: 2025-12-05 09:34:50.653918201 +0000 UTC m=+0.050174133 image pull  quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 05 09:34:50 np0005546420.localdomain python3[215667]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 05 09:34:50 np0005546420.localdomain sudo[215665]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:51 np0005546420.localdomain sudo[215876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfdhaxhtngsiuiykrjkyjtwdajeixvav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927291.1051712-1563-176061942916/AnsiballZ_stat.py
Dec 05 09:34:51 np0005546420.localdomain sudo[215876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:51 np0005546420.localdomain python3.9[215878]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:34:51 np0005546420.localdomain sudo[215876]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28646 DF PROTO=TCP SPT=45576 DPT=9105 SEQ=865549996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC289590000000001030307) 
Dec 05 09:34:52 np0005546420.localdomain sudo[215988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efydlluzbvkzxruhzklrytsdslblkvlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927291.9774766-1590-56068365695561/AnsiballZ_file.py
Dec 05 09:34:52 np0005546420.localdomain sudo[215988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:52 np0005546420.localdomain python3.9[215990]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:52 np0005546420.localdomain sudo[215988]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:52 np0005546420.localdomain sudo[216043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjwjprjpqfefxsrnooovsgtstpneoarp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927291.9774766-1590-56068365695561/AnsiballZ_stat.py
Dec 05 09:34:52 np0005546420.localdomain sudo[216043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:52 np0005546420.localdomain python3.9[216045]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:34:52 np0005546420.localdomain sudo[216043]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:53 np0005546420.localdomain sudo[216152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eecztnttyqhpujiqmfnwbmqajixmgjep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927292.9928966-1590-45386870581074/AnsiballZ_copy.py
Dec 05 09:34:53 np0005546420.localdomain sudo[216152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:53 np0005546420.localdomain python3.9[216154]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927292.9928966-1590-45386870581074/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:34:53 np0005546420.localdomain sudo[216152]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:53 np0005546420.localdomain sudo[216207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xsjpgcymsnvuqmfcgeqcxqsbtubykqqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927292.9928966-1590-45386870581074/AnsiballZ_systemd.py
Dec 05 09:34:53 np0005546420.localdomain sudo[216207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:54 np0005546420.localdomain python3.9[216209]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:34:54 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:34:54 np0005546420.localdomain systemd-rc-local-generator[216232]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:34:54 np0005546420.localdomain systemd-sysv-generator[216236]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:34:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:34:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:54 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:54 np0005546420.localdomain sudo[216207]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:54 np0005546420.localdomain sudo[216297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktomzdfvibrfpensjfqtxtwjciinipjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927292.9928966-1590-45386870581074/AnsiballZ_systemd.py
Dec 05 09:34:54 np0005546420.localdomain sudo[216297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:55 np0005546420.localdomain python3.9[216299]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: virtqemud.service: Deactivated successfully.
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:34:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8766 DF PROTO=TCP SPT=34372 DPT=9102 SEQ=2946643415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC295D90000000001030307) 
Dec 05 09:34:55 np0005546420.localdomain systemd-rc-local-generator[216330]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:34:55 np0005546420.localdomain systemd-sysv-generator[216334]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: Starting multipathd container...
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:34:55 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8a13ce97ba2f291e0a0dcfc03e9c5d68b45d56c60c358cf987365d5adabd20/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 09:34:55 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8a13ce97ba2f291e0a0dcfc03e9c5d68b45d56c60c358cf987365d5adabd20/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:34:55 np0005546420.localdomain podman[216342]: 2025-12-05 09:34:55.654751954 +0000 UTC m=+0.165427643 container init 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: + sudo -E kolla_set_configs
Dec 05 09:34:55 np0005546420.localdomain sudo[216362]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 09:34:55 np0005546420.localdomain sudo[216362]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 09:34:55 np0005546420.localdomain sudo[216362]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:34:55 np0005546420.localdomain podman[216342]: 2025-12-05 09:34:55.694403934 +0000 UTC m=+0.205079643 container start 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 09:34:55 np0005546420.localdomain podman[216342]: multipathd
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: Started multipathd container.
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: INFO:__main__:Validating config file
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: INFO:__main__:Writing out command to execute
Dec 05 09:34:55 np0005546420.localdomain sudo[216297]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:55 np0005546420.localdomain sudo[216362]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: ++ cat /run_command
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: + CMD='/usr/sbin/multipathd -d'
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: + ARGS=
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: + sudo kolla_copy_cacerts
Dec 05 09:34:55 np0005546420.localdomain sudo[216378]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 09:34:55 np0005546420.localdomain sudo[216378]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 09:34:55 np0005546420.localdomain sudo[216378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 09:34:55 np0005546420.localdomain podman[216364]: 2025-12-05 09:34:55.77655662 +0000 UTC m=+0.074996170 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 05 09:34:55 np0005546420.localdomain sudo[216378]: pam_unix(sudo:session): session closed for user root
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: + [[ ! -n '' ]]
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: + . kolla_extend_start
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: Running command: '/usr/sbin/multipathd -d'
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: + umask 0022
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: + exec /usr/sbin/multipathd -d
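[annotation] The bash xtrace lines above are the kolla_start entrypoint of the multipathd container: kolla_set_configs copies files per /var/lib/kolla/config_files/config.json under COPY_ALWAYS and writes the service command to /run_command; kolla_copy_cacerts installs the host CA bundle; then the shell exec's the command so multipathd becomes PID 1. The pam_systemd "Failed to connect to system bus" from sudo is benign here: there is no systemd/dbus instance inside the container. A condensed reconstruction of the traced sequence (paths and command taken from the trace itself; error handling omitted):

    # condensed sketch of the kolla_start steps traced above
    sudo -E kolla_set_configs        # copy config files per config.json (COPY_ALWAYS)
    CMD="$(cat /run_command)"        # '/usr/sbin/multipathd -d'
    sudo kolla_copy_cacerts          # install CA anchors into the container trust store
    . kolla_extend_start             # image-specific hook (no-op in this image)
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD}                      # PID 1 becomes multipathd -d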
Dec 05 09:34:55 np0005546420.localdomain podman[216364]: 2025-12-05 09:34:55.787183095 +0000 UTC m=+0.085622645 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: 10413.978441 | --------start up--------
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: 10413.978466 | read /etc/multipath.conf
Dec 05 09:34:55 np0005546420.localdomain multipathd[216356]: 10413.982392 | path checkers start up
Dec 05 09:34:55 np0005546420.localdomain podman[216364]: unhealthy
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:34:55 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Failed with result 'exit-code'.
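[annotation] The "unhealthy" verdict and status=1/FAILURE belong to the transient 128765f1...service unit that wraps "podman healthcheck run", not to multipathd itself: the check fired while the container was still in health_status=starting, the check script returned non-zero, and the wrapper unit failed while the daemon kept running. The health state can be inspected by hand with standard podman commands (container name from the log; note the inspect field is .State.Health in current podman, .State.Healthcheck in older releases):

    podman healthcheck run multipathd     # exit 0 = healthy, 1 = unhealthy
    podman inspect --format '{{.State.Health.Status}}' multipathd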
Dec 05 09:34:57 np0005546420.localdomain python3.9[216503]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:34:57 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35437 DF PROTO=TCP SPT=56578 DPT=9102 SEQ=3610630842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC29FDA0000000001030307) 
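[annotation] The kernel "DROPPING:" lines recurring through this section are firewall log entries: inbound SYNs on br-ex from 192.168.122.10 to exporter-style ports (9100, 9101, 9102, 9105, 9882 appear in this section) are logged and dropped. The host's actual ruleset is not shown in the log; a hypothetical nftables sketch of a log-then-drop rule that would produce exactly this message shape (interface, prefix, and ports from the log; table and chain names are illustrative):

    # assumption: log+drop pair on the input hook; names are illustrative
    nft add table inet edpm_filter
    nft add chain inet edpm_filter input '{ type filter hook input priority 0; }'
    nft add rule inet edpm_filter input iifname "br-ex" \
        tcp dport { 9100-9105, 9882 } log prefix "DROPPING: " drop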
Dec 05 09:34:58 np0005546420.localdomain sudo[216613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aaykhwrizehxvsucruvkawzimoskbapv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927297.8407698-1698-169382340170089/AnsiballZ_command.py
Dec 05 09:34:58 np0005546420.localdomain sudo[216613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:58 np0005546420.localdomain python3.9[216615]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:34:58 np0005546420.localdomain sudo[216613]: pam_unix(sudo:session): session closed for user root
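[annotation] The Ansible command task above resolves which containers mount /etc/multipath.conf, so that only those are restarted after a config change. The same query by hand, with the expected output on this node:

    podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'
    # -> multipathd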
Dec 05 09:34:59 np0005546420.localdomain sudo[216736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdxauvaaftqorkgzaactkpgfmfhioess ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927298.6636543-1722-151923625543292/AnsiballZ_systemd.py
Dec 05 09:34:59 np0005546420.localdomain sudo[216736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:34:59 np0005546420.localdomain python3.9[216738]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:34:59 np0005546420.localdomain systemd[1]: Stopping multipathd container...
Dec 05 09:34:59 np0005546420.localdomain multipathd[216356]: 10418.073188 | exit (signal)
Dec 05 09:34:59 np0005546420.localdomain multipathd[216356]: 10418.073611 | --------shut down-------
Dec 05 09:34:59 np0005546420.localdomain systemd[1]: libpod-128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.scope: Deactivated successfully.
Dec 05 09:34:59 np0005546420.localdomain podman[216742]: 2025-12-05 09:34:59.915188144 +0000 UTC m=+0.110144199 container died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Dec 05 09:34:59 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.timer: Deactivated successfully.
Dec 05 09:34:59 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:34:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931-userdata-shm.mount: Deactivated successfully.
Dec 05 09:34:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ec8a13ce97ba2f291e0a0dcfc03e9c5d68b45d56c60c358cf987365d5adabd20-merged.mount: Deactivated successfully.
Dec 05 09:35:00 np0005546420.localdomain podman[216742]: 2025-12-05 09:35:00.105740117 +0000 UTC m=+0.300696132 container cleanup 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 09:35:00 np0005546420.localdomain podman[216742]: multipathd
Dec 05 09:35:00 np0005546420.localdomain podman[216769]: 2025-12-05 09:35:00.212493528 +0000 UTC m=+0.067733958 container cleanup 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 09:35:00 np0005546420.localdomain podman[216769]: multipathd
Dec 05 09:35:00 np0005546420.localdomain systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 05 09:35:00 np0005546420.localdomain systemd[1]: Stopped multipathd container.
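[annotation] Lines 09:34:59-09:35:00 are the stop half of the edpm_multipathd restart requested by the ansible systemd task: systemd stops the unit, podman delivers SIGTERM (multipathd logs "exit (signal)" then "shut down"), the libpod scope and the healthcheck timer deactivate, the container's shm and overlay mounts are unmounted, and two cleanup events fire (one from conmon, one from the unit's stop-post path). The equivalent manual operation and where to watch it:

    systemctl restart edpm_multipathd.service
    journalctl -u edpm_multipathd.service -f        # unit-level view of stop/start
    podman events --filter container=multipathd     # died/cleanup/init/start events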
Dec 05 09:35:00 np0005546420.localdomain systemd[1]: Starting multipathd container...
Dec 05 09:35:00 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:35:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8a13ce97ba2f291e0a0dcfc03e9c5d68b45d56c60c358cf987365d5adabd20/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 09:35:00 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8a13ce97ba2f291e0a0dcfc03e9c5d68b45d56c60c358cf987365d5adabd20/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 09:35:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:35:00 np0005546420.localdomain podman[216782]: 2025-12-05 09:35:00.419773145 +0000 UTC m=+0.175402686 container init 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: + sudo -E kolla_set_configs
Dec 05 09:35:00 np0005546420.localdomain sudo[216802]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 09:35:00 np0005546420.localdomain sudo[216802]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 09:35:00 np0005546420.localdomain sudo[216802]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 09:35:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:35:00 np0005546420.localdomain podman[216782]: 2025-12-05 09:35:00.460125337 +0000 UTC m=+0.215754868 container start 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 05 09:35:00 np0005546420.localdomain podman[216782]: multipathd
Dec 05 09:35:00 np0005546420.localdomain systemd[1]: Started multipathd container.
Dec 05 09:35:00 np0005546420.localdomain sudo[216736]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: INFO:__main__:Validating config file
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: INFO:__main__:Writing out command to execute
Dec 05 09:35:00 np0005546420.localdomain sudo[216802]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: ++ cat /run_command
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: + CMD='/usr/sbin/multipathd -d'
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: + ARGS=
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: + sudo kolla_copy_cacerts
Dec 05 09:35:00 np0005546420.localdomain sudo[216817]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 09:35:00 np0005546420.localdomain sudo[216817]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 09:35:00 np0005546420.localdomain sudo[216817]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 09:35:00 np0005546420.localdomain sudo[216817]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: + [[ ! -n '' ]]
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: + . kolla_extend_start
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: Running command: '/usr/sbin/multipathd -d'
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: + umask 0022
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: + exec /usr/sbin/multipathd -d
Dec 05 09:35:00 np0005546420.localdomain podman[216805]: 2025-12-05 09:35:00.557799852 +0000 UTC m=+0.088245679 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: 10418.754579 | --------start up--------
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: 10418.754601 | read /etc/multipath.conf
Dec 05 09:35:00 np0005546420.localdomain multipathd[216796]: 10418.759284 | path checkers start up
Dec 05 09:35:00 np0005546420.localdomain podman[216805]: 2025-12-05 09:35:00.581542624 +0000 UTC m=+0.111988461 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 05 09:35:00 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
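[annotation] Unlike the 09:34:55 pass, this healthcheck wrapper unit ends with "Deactivated successfully", i.e. the check returned 0 against the restarted container. Once the daemon is up, its state can also be queried through the multipathd CLI; these are standard multipathd commands, but they need the daemon socket, so on this host they would be run inside the container:

    podman exec multipathd multipathd show status
    podman exec multipathd multipathd show paths
    podman exec multipathd multipathd show config | head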
Dec 05 09:35:01 np0005546420.localdomain sudo[216940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jorbzffnqwmerbjwqvuiugklcgsckbwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927301.388764-1746-230105926998298/AnsiballZ_file.py
Dec 05 09:35:01 np0005546420.localdomain sudo[216940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:01 np0005546420.localdomain python3.9[216942]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:01 np0005546420.localdomain sudo[216940]: pam_unix(sudo:session): session closed for user root
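[annotation] The stat at 09:34:57 and this file state=absent task bracket the restart: the deployment restarts edpm_multipathd only when the marker /etc/multipath/.multipath_restart_required exists, then clears it so later runs are idempotent. A minimal sketch of that marker pattern (marker path and unit name from the log; the surrounding playbook logic is assumed):

    if [ -e /etc/multipath/.multipath_restart_required ]; then
        systemctl restart edpm_multipathd.service
        rm -f /etc/multipath/.multipath_restart_required
    fi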
Dec 05 09:35:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44313 DF PROTO=TCP SPT=46938 DPT=9882 SEQ=3950182211 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC2B3D90000000001030307) 
Dec 05 09:35:03 np0005546420.localdomain sudo[217050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rqomojekzkzafzkifotmwcsqkiaytxfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927302.7559385-1782-226096279936832/AnsiballZ_file.py
Dec 05 09:35:03 np0005546420.localdomain sudo[217050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:03 np0005546420.localdomain python3.9[217052]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 09:35:03 np0005546420.localdomain sudo[217050]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:03 np0005546420.localdomain sudo[217160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njtscjuinskzetuaghhxdmkaajrdpekt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927303.4588094-1806-256796648938332/AnsiballZ_modprobe.py
Dec 05 09:35:03 np0005546420.localdomain sudo[217160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:35:03 np0005546420.localdomain systemd[1]: tmp-crun.4QG4lu.mount: Deactivated successfully.
Dec 05 09:35:03 np0005546420.localdomain podman[217163]: 2025-12-05 09:35:03.884065397 +0000 UTC m=+0.097273414 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:35:03 np0005546420.localdomain podman[217163]: 2025-12-05 09:35:03.960386167 +0000 UTC m=+0.173594184 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:35:03 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:35:03 np0005546420.localdomain python3.9[217162]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 05 09:35:04 np0005546420.localdomain sudo[217160]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:35:04.079 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:35:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:35:04.080 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:35:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:35:04.080 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:35:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28648 DF PROTO=TCP SPT=45576 DPT=9105 SEQ=865549996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC2B9DA0000000001030307) 
Dec 05 09:35:04 np0005546420.localdomain sudo[217302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpjxuwbbrcoricwxyewsgnyiiwtmoswu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927304.2356982-1830-56027785698066/AnsiballZ_stat.py
Dec 05 09:35:04 np0005546420.localdomain sudo[217302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:04 np0005546420.localdomain python3.9[217304]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:35:04 np0005546420.localdomain sudo[217302]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:05 np0005546420.localdomain sudo[217390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vgwwxjzptxobihssrxpanxpojpzxclti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927304.2356982-1830-56027785698066/AnsiballZ_copy.py
Dec 05 09:35:05 np0005546420.localdomain sudo[217390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:05 np0005546420.localdomain python3.9[217392]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927304.2356982-1830-56027785698066/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:05 np0005546420.localdomain sudo[217390]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:05 np0005546420.localdomain sudo[217500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lstlaakddycezjoqqnehydlypnwbclee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927305.616842-1878-76677841313063/AnsiballZ_lineinfile.py
Dec 05 09:35:05 np0005546420.localdomain sudo[217500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:06 np0005546420.localdomain python3.9[217502]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:06 np0005546420.localdomain sudo[217500]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:06 np0005546420.localdomain sudo[217610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqzzjobnucignxtwbctaejckloknlbfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927306.3372555-1902-135862860118994/AnsiballZ_systemd.py
Dec 05 09:35:06 np0005546420.localdomain sudo[217610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4527 DF PROTO=TCP SPT=50728 DPT=9101 SEQ=2468780220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC2C3D90000000001030307) 
Dec 05 09:35:06 np0005546420.localdomain python3.9[217612]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:35:06 np0005546420.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 05 09:35:06 np0005546420.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 05 09:35:06 np0005546420.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 05 09:35:06 np0005546420.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 05 09:35:07 np0005546420.localdomain systemd-modules-load[217616]: Module 'msr' is built in
Dec 05 09:35:07 np0005546420.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 05 09:35:07 np0005546420.localdomain sudo[217610]: pam_unix(sudo:session): session closed for user root
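[annotation] The tasks from 09:35:03 to 09:35:07 load nvme-fabrics for the running kernel and make it persistent three ways: modprobe now, a modules-load.d drop-in for systemd, and a line appended to /etc/modules; systemd-modules-load.service is then restarted to prove the configuration parses ("Module 'msr' is built in" comes from an unrelated pre-existing entry). The same sequence by hand:

    modprobe nvme-fabrics
    printf 'nvme-fabrics\n' > /etc/modules-load.d/nvme-fabrics.conf
    echo nvme-fabrics >> /etc/modules               # legacy path, kept in sync by the play
    systemctl restart systemd-modules-load.service
    lsmod | grep nvme_fabrics                       # lsmod shows the name with underscores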
Dec 05 09:35:07 np0005546420.localdomain sudo[217724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzkcntysqsiotuqbzwqccdfwwearlezt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927307.2818456-1926-156259603792640/AnsiballZ_dnf.py
Dec 05 09:35:07 np0005546420.localdomain sudo[217724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:07 np0005546420.localdomain python3.9[217726]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
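[annotation] This dnf task installs nvme-cli, the userspace tooling for the NVMe-over-Fabrics support loaded above; the Reloading and man-db-cache-update noise that follows is the normal fallout of an RPM transaction (package scriptlets trigger daemon-reloads and the man-db cache rebuild). Equivalent manual install plus a quick sanity check:

    dnf -y install nvme-cli
    nvme version
    nvme list      # enumerate local NVMe namespaces (likely none yet on this node)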
Dec 05 09:35:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55334 DF PROTO=TCP SPT=57534 DPT=9100 SEQ=3719487879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC2D2990000000001030307) 
Dec 05 09:35:11 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:35:11 np0005546420.localdomain systemd-rc-local-generator[217760]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:35:11 np0005546420.localdomain systemd-sysv-generator[217764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:35:11 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:11 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:11 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:11 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:11 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:35:11 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:11 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:11 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:11 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
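[annotation] These parse warnings repeat on every daemon-reload in this section and are cosmetic: the libvirt units ship Type=notify-reload, which was only added in systemd v253 while EL9 carries v252, so systemd ignores the Type= assignment and the unit falls back to the default service type. Similarly, insights-client-boot.service still uses the deprecated MemoryLimit=. If the latter warning needed silencing before a package fix, a drop-in override is the usual approach (unit name from the log; the 2G value is purely illustrative):

    mkdir -p /etc/systemd/system/insights-client-boot.service.d
    cat > /etc/systemd/system/insights-client-boot.service.d/memory.conf <<'EOF'
    [Service]
    MemoryLimit=
    MemoryMax=2G
    EOF
    systemctl daemon-reload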
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:35:12 np0005546420.localdomain systemd-rc-local-generator[217797]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:35:12 np0005546420.localdomain systemd-sysv-generator[217800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd-logind[762]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 05 09:35:12 np0005546420.localdomain systemd-logind[762]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 05 09:35:12 np0005546420.localdomain lvm[217849]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 09:35:12 np0005546420.localdomain lvm[217849]: VG ceph_vg1 finished
Dec 05 09:35:12 np0005546420.localdomain lvm[217848]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 09:35:12 np0005546420.localdomain lvm[217848]: VG ceph_vg0 finished
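[annotation] The lvm messages are event-driven autoactivation: udev observed /dev/loop3 and /dev/loop4, each arrival completed its volume group, and ceph_vg0/ceph_vg1 were activated. Loop-backed VGs like these are typical of CI-provisioned Ceph OSD backing storage. To confirm the layout by hand:

    pvs -o pv_name,vg_name | grep loop
    vgs ceph_vg0 ceph_vg1
    lvs -o lv_name,vg_name,active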
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 05 09:35:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55335 DF PROTO=TCP SPT=57534 DPT=9100 SEQ=3719487879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC2DA9A0000000001030307) 
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:35:12 np0005546420.localdomain systemd-rc-local-generator[217895]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:35:12 np0005546420.localdomain systemd-sysv-generator[217900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:12 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:13 np0005546420.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 09:35:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:35:14 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 09:35:14 np0005546420.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 05 09:35:14 np0005546420.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.556s CPU time.
Dec 05 09:35:14 np0005546420.localdomain systemd[1]: run-r68dd2e10ca5a4b1b9fd97e9e46fb98c9.service: Deactivated successfully.
Dec 05 09:35:14 np0005546420.localdomain sudo[217724]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:14 np0005546420.localdomain podman[219034]: 2025-12-05 09:35:14.255889201 +0000 UTC m=+0.101709840 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 09:35:14 np0005546420.localdomain podman[219034]: 2025-12-05 09:35:14.289321477 +0000 UTC m=+0.135142126 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:35:14 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:35:14 np0005546420.localdomain python3.9[219157]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:35:16 np0005546420.localdomain sudo[219269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdgzgykhllqbzenwlamjwpucgzacipkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927315.9544322-1978-39828081845868/AnsiballZ_file.py
Dec 05 09:35:16 np0005546420.localdomain sudo[219269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:16 np0005546420.localdomain python3.9[219271]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:16 np0005546420.localdomain sudo[219269]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55336 DF PROTO=TCP SPT=57534 DPT=9100 SEQ=3719487879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC2EA590000000001030307) 
Dec 05 09:35:17 np0005546420.localdomain sudo[219379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jacbvnsgzbvfayxlcetiacghkzjmukey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927317.077497-2011-225688520593540/AnsiballZ_systemd_service.py
Dec 05 09:35:17 np0005546420.localdomain sudo[219379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:17 np0005546420.localdomain python3.9[219381]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:35:17 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:35:17 np0005546420.localdomain systemd-sysv-generator[219413]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:35:17 np0005546420.localdomain systemd-rc-local-generator[219405]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:35:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:35:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:18 np0005546420.localdomain sudo[219379]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:18 np0005546420.localdomain python3.9[219526]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:35:18 np0005546420.localdomain network[219543]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:35:18 np0005546420.localdomain network[219544]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:35:18 np0005546420.localdomain network[219545]: It is advised to switch to 'NetworkManager' instead for network management.
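[annotation] service_facts enumerates SysV init scripts as well as systemd units, which is what pokes the legacy 'network' script and emits the three-line deprecation banner above; nothing actually starts the service. The same inventory by hand:

    systemctl list-unit-files --type=service
    systemctl list-units --type=service --all
    chkconfig --list network 2>/dev/null   # legacy view of the SysV script, if installed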
Dec 05 09:35:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23388 DF PROTO=TCP SPT=46722 DPT=9105 SEQ=3255059309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC2F2700000000001030307) 
Dec 05 09:35:21 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:35:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23390 DF PROTO=TCP SPT=46722 DPT=9105 SEQ=3255059309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC2FE590000000001030307) 
Dec 05 09:35:23 np0005546420.localdomain sudo[219778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gckrbvarokukgtrulafalsygikyxmtzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927323.3640037-2068-101806207147788/AnsiballZ_systemd_service.py
Dec 05 09:35:23 np0005546420.localdomain sudo[219778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:23 np0005546420.localdomain python3.9[219780]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:35:23 np0005546420.localdomain sudo[219778]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:24 np0005546420.localdomain sudo[219889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbmieulaqbhnsnmqmybxqlqowiqulnil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927324.138695-2068-38903471102406/AnsiballZ_systemd_service.py
Dec 05 09:35:24 np0005546420.localdomain sudo[219889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:24 np0005546420.localdomain python3.9[219891]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:35:24 np0005546420.localdomain sudo[219889]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55337 DF PROTO=TCP SPT=57534 DPT=9100 SEQ=3719487879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC309D90000000001030307) 
Dec 05 09:35:25 np0005546420.localdomain sudo[220000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vykpokwkjegcljqfrsjdkzdthsuypbtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927324.905415-2068-37749921313884/AnsiballZ_systemd_service.py
Dec 05 09:35:25 np0005546420.localdomain sudo[220000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:25 np0005546420.localdomain python3.9[220002]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:35:25 np0005546420.localdomain sudo[220000]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:25 np0005546420.localdomain sudo[220111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pexezznfbqqbzuiliaipekkhezvaxdzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927325.6393723-2068-207982804401602/AnsiballZ_systemd_service.py
Dec 05 09:35:25 np0005546420.localdomain sudo[220111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:26 np0005546420.localdomain python3.9[220113]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:35:26 np0005546420.localdomain sudo[220111]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:26 np0005546420.localdomain sudo[220222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bajuthooeffddjnuuyttasoupzaoqvkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927326.419645-2068-141147983680428/AnsiballZ_systemd_service.py
Dec 05 09:35:26 np0005546420.localdomain sudo[220222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:26 np0005546420.localdomain python3.9[220224]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:35:27 np0005546420.localdomain sudo[220222]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:28 np0005546420.localdomain sudo[220333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gwblpqlmmghuddtwynvdichasszgufci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927327.972378-2068-37827306082937/AnsiballZ_systemd_service.py
Dec 05 09:35:28 np0005546420.localdomain sudo[220333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:28 np0005546420.localdomain python3.9[220335]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:35:28 np0005546420.localdomain sudo[220333]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:29 np0005546420.localdomain sudo[220444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhjsmwmxqsxypbjglahpaagplyobdncj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927328.7623396-2068-154338763265855/AnsiballZ_systemd_service.py
Dec 05 09:35:29 np0005546420.localdomain sudo[220444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:29 np0005546420.localdomain python3.9[220446]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:35:29 np0005546420.localdomain sudo[220444]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:30 np0005546420.localdomain sudo[220555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozwwqwzofkrizjbfdebdacnscnscgxgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927330.2195413-2068-57807587475908/AnsiballZ_systemd_service.py
Dec 05 09:35:30 np0005546420.localdomain sudo[220555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6478 DF PROTO=TCP SPT=50156 DPT=9102 SEQ=3713931122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC320190000000001030307) 
Dec 05 09:35:30 np0005546420.localdomain python3.9[220557]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:35:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:35:30 np0005546420.localdomain sudo[220555]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:30 np0005546420.localdomain podman[220559]: 2025-12-05 09:35:30.975057388 +0000 UTC m=+0.102498862 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 09:35:30 np0005546420.localdomain podman[220559]: 2025-12-05 09:35:30.990101054 +0000 UTC m=+0.117542528 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Dec 05 09:35:31 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
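Note: the Started/health_status/exec_died/Deactivated quartet above is one pass of podman's systemd-driven healthcheck: a transient unit executes the container's configured test ('/openstack/healthcheck' per the config_data) inside the multipathd container and exits. A minimal sketch of the same probe (container name from this log):

    import subprocess

    def container_healthy(name: str) -> bool:
        # `podman healthcheck run` executes the container's healthcheck
        # command and returns 0 when it reports healthy.
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    # container_healthy("multipathd")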
Dec 05 09:35:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46863 DF PROTO=TCP SPT=57176 DPT=9882 SEQ=1170211579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC327DA0000000001030307) 
Dec 05 09:35:33 np0005546420.localdomain sudo[220687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjjoiyddkdozjgverclumnzkxcwaskaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927333.0731378-2245-46508596462166/AnsiballZ_file.py
Dec 05 09:35:33 np0005546420.localdomain sudo[220687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:33 np0005546420.localdomain python3.9[220689]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:33 np0005546420.localdomain sudo[220687]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:33 np0005546420.localdomain sudo[220797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptatdrapuibnnsaddzjafiwlcswyilzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927333.6815917-2245-190645768017595/AnsiballZ_file.py
Dec 05 09:35:33 np0005546420.localdomain sudo[220797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23392 DF PROTO=TCP SPT=46722 DPT=9105 SEQ=3255059309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC32DD90000000001030307) 
Dec 05 09:35:34 np0005546420.localdomain python3.9[220799]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:34 np0005546420.localdomain sudo[220797]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:35:34 np0005546420.localdomain podman[220872]: 2025-12-05 09:35:34.498865156 +0000 UTC m=+0.072516874 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 09:35:34 np0005546420.localdomain sudo[220924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-esaaztvnfobxivgrpwezjtadyzbontga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927334.264519-2245-164888216626768/AnsiballZ_file.py
Dec 05 09:35:34 np0005546420.localdomain sudo[220924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:34 np0005546420.localdomain podman[220872]: 2025-12-05 09:35:34.549349034 +0000 UTC m=+0.123000802 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 05 09:35:34 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:35:34 np0005546420.localdomain python3.9[220933]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:34 np0005546420.localdomain sudo[220924]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:35 np0005546420.localdomain sudo[221041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uislbniclrarjzlsbcgafonsicchpkzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927334.8833563-2245-53416341308281/AnsiballZ_file.py
Dec 05 09:35:35 np0005546420.localdomain sudo[221041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:35 np0005546420.localdomain python3.9[221043]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:35 np0005546420.localdomain sudo[221041]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:35 np0005546420.localdomain sudo[221151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jefcwknzvarcmigfcxgerghyitwvqvgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927335.4767723-2245-196005997468253/AnsiballZ_file.py
Dec 05 09:35:35 np0005546420.localdomain sudo[221151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:35 np0005546420.localdomain python3.9[221153]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:35 np0005546420.localdomain sudo[221151]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:36 np0005546420.localdomain sudo[221261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvbxtmttjabqyomeubqaewhvclwgwyox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927336.1124063-2245-167552674892365/AnsiballZ_file.py
Dec 05 09:35:36 np0005546420.localdomain sudo[221261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:36 np0005546420.localdomain python3.9[221263]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:36 np0005546420.localdomain sudo[221261]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:36 np0005546420.localdomain sudo[221371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjxzirgbbczxfrxwtxgnkxrklpucprgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927336.705574-2245-45879354594773/AnsiballZ_file.py
Dec 05 09:35:36 np0005546420.localdomain sudo[221371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30245 DF PROTO=TCP SPT=55866 DPT=9101 SEQ=4088684035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC339D90000000001030307) 
Dec 05 09:35:37 np0005546420.localdomain python3.9[221373]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:37 np0005546420.localdomain sudo[221371]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:37 np0005546420.localdomain sudo[221481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvxvdpufrknklnezgdismlniymcgcgtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927337.3135152-2245-53344517282890/AnsiballZ_file.py
Dec 05 09:35:37 np0005546420.localdomain sudo[221481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:37 np0005546420.localdomain python3.9[221483]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:37 np0005546420.localdomain sudo[221481]: pam_unix(sudo:session): session closed for user root
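Note: after stopping the units, the play removes their unit files with ansible.builtin.file state=absent, first under /usr/lib/systemd/system (above), then under /etc/systemd/system (the entries that follow). A sketch of the combined cleanup, reusing TRIPLEO_NOVA_UNITS from the earlier sketch:

    from pathlib import Path

    for unit_dir in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        for unit in TRIPLEO_NOVA_UNITS:
            # state=absent: remove if present, ignore if already gone
            Path(unit_dir, unit).unlink(missing_ok=True)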
Dec 05 09:35:38 np0005546420.localdomain sudo[221591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sufnjpdywpzqphwfoxqxdkhfevlhrmne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927338.2931297-2416-127353630514002/AnsiballZ_file.py
Dec 05 09:35:38 np0005546420.localdomain sudo[221591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:39 np0005546420.localdomain python3.9[221593]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:39 np0005546420.localdomain sudo[221591]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:39 np0005546420.localdomain sudo[221594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:35:39 np0005546420.localdomain sudo[221594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:35:39 np0005546420.localdomain sudo[221594]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:39 np0005546420.localdomain sudo[221629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 09:35:39 np0005546420.localdomain sudo[221629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:35:39 np0005546420.localdomain sshd[221685]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:35:39 np0005546420.localdomain sudo[221738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjacvdoziffxxksfwkkplxvotmxjefks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927339.2271183-2416-66249458437836/AnsiballZ_file.py
Dec 05 09:35:39 np0005546420.localdomain sudo[221738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:39 np0005546420.localdomain python3.9[221740]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:39 np0005546420.localdomain sudo[221738]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:39 np0005546420.localdomain sudo[221629]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:39 np0005546420.localdomain sudo[221779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:35:39 np0005546420.localdomain sudo[221779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:35:39 np0005546420.localdomain sudo[221779]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:39 np0005546420.localdomain sudo[221855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:35:39 np0005546420.localdomain sudo[221855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:35:40 np0005546420.localdomain sudo[221906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lbmgratalqwcvseglntcamtgfbsvnifj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927339.7883623-2416-122276334429100/AnsiballZ_file.py
Dec 05 09:35:40 np0005546420.localdomain sudo[221906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:40 np0005546420.localdomain python3.9[221908]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:40 np0005546420.localdomain sudo[221906]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:40 np0005546420.localdomain sudo[221855]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60311 DF PROTO=TCP SPT=58446 DPT=9100 SEQ=3340411252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC347D90000000001030307) 
Dec 05 09:35:41 np0005546420.localdomain sudo[221959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:35:41 np0005546420.localdomain sudo[221959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:35:41 np0005546420.localdomain sudo[221959]: pam_unix(sudo:session): session closed for user root
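Note: interleaved with the zuul-driven play, the Ceph orchestrator is polling this host as ceph-admin: it resolves python3, runs the hashed cephadm copy under /var/lib/ceph/<fsid>/ with check-host and then gather-facts, and lists /etc/sysctl.d. A sketch of the same calls (paths taken from this log):

    import subprocess

    CEPHADM = ("/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/"
               "cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")

    for verb in ("check-host", "gather-facts"):
        # Mirrors the sudo COMMAND= entries above, including the timeout.
        subprocess.run(["sudo", "/bin/python3", CEPHADM, "--timeout", "895", verb],
                       check=True)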
Dec 05 09:35:41 np0005546420.localdomain sudo[222065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hshhynkfermhlsbvljbjirzgdyaybsol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927341.000804-2416-129967611110254/AnsiballZ_file.py
Dec 05 09:35:41 np0005546420.localdomain sudo[222065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:41 np0005546420.localdomain python3.9[222067]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:41 np0005546420.localdomain sudo[222065]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:41 np0005546420.localdomain sudo[222175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxnvygagdzasgngygkfeoohhrfcvwvmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927341.5610225-2416-192915199088442/AnsiballZ_file.py
Dec 05 09:35:41 np0005546420.localdomain sudo[222175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:42 np0005546420.localdomain python3.9[222177]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:42 np0005546420.localdomain sudo[222175]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:42 np0005546420.localdomain sudo[222285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvkaierxkcigkaxwkfntgvdahslritgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927342.1937408-2416-271955989760597/AnsiballZ_file.py
Dec 05 09:35:42 np0005546420.localdomain sudo[222285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:42 np0005546420.localdomain python3.9[222287]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:42 np0005546420.localdomain sudo[222285]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60312 DF PROTO=TCP SPT=58446 DPT=9100 SEQ=3340411252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC34FDA0000000001030307) 
Dec 05 09:35:43 np0005546420.localdomain sudo[222395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ligokukpxarveewnhqrdzizlfosuwxvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927342.7426553-2416-61210726948476/AnsiballZ_file.py
Dec 05 09:35:43 np0005546420.localdomain sudo[222395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:43 np0005546420.localdomain python3.9[222397]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:43 np0005546420.localdomain sudo[222395]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:43 np0005546420.localdomain sudo[222505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfwovkzbirhlvzwcpxihnfgnyyptarso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927343.337684-2416-194616072070569/AnsiballZ_file.py
Dec 05 09:35:43 np0005546420.localdomain sudo[222505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:43 np0005546420.localdomain python3.9[222507]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:35:43 np0005546420.localdomain sudo[222505]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:44 np0005546420.localdomain sudo[222615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ansizfswgfpatbmkefiovxivdycuyzvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927344.117308-2590-244174466561859/AnsiballZ_command.py
Dec 05 09:35:44 np0005546420.localdomain sudo[222615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:35:44 np0005546420.localdomain podman[222618]: 2025-12-05 09:35:44.515611979 +0000 UTC m=+0.083075333 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 09:35:44 np0005546420.localdomain podman[222618]: 2025-12-05 09:35:44.548611842 +0000 UTC m=+0.116075176 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 05 09:35:44 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:35:44 np0005546420.localdomain python3.9[222617]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:35:44 np0005546420.localdomain sudo[222615]: pam_unix(sudo:session): session closed for user root
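Note: the ansible.legacy.command entry above carries its full shell body: disable certmonger only if it is currently active, and mask it only when no local unit file exists under /etc/systemd/system. The same logic as a sketch:

    import os
    import subprocess

    def retire_certmonger() -> None:
        # `if systemctl is-active ...` in the logged shell body
        if subprocess.run(["systemctl", "is-active", "certmonger.service"]).returncode == 0:
            subprocess.run(["systemctl", "disable", "--now", "certmonger.service"],
                           check=True)
            # `test -f ... || systemctl mask`: mask only without a local override
            if not os.path.isfile("/etc/systemd/system/certmonger.service"):
                subprocess.run(["systemctl", "mask", "certmonger.service"], check=True)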
Dec 05 09:35:45 np0005546420.localdomain python3.9[222745]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:35:46 np0005546420.localdomain sudo[222853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxbxlygrafjvsgsxdhrlkkecrzpaypjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927345.7647073-2644-165586638589595/AnsiballZ_systemd_service.py
Dec 05 09:35:46 np0005546420.localdomain sudo[222853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:46 np0005546420.localdomain python3.9[222855]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:35:46 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:35:46 np0005546420.localdomain systemd-rc-local-generator[222877]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:35:46 np0005546420.localdomain systemd-sysv-generator[222882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:35:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:35:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:46 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:35:46 np0005546420.localdomain sudo[222853]: pam_unix(sudo:session): session closed for user root
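Note: daemon_reload=True triggers the "Reloading." pass above, and the follow-on messages are generator noise rather than failures: rc.local is skipped because it is not marked executable, the SysV 'network' initscript gets an auto-generated compatibility unit, and the libvirt units declare Type=notify-reload, which this systemd (v252 on EL 9.2, predating the directive's introduction in systemd 253) cannot parse and therefore ignores. The reload itself reduces to:

    import subprocess

    # Equivalent of ansible.builtin.systemd_service with daemon_reload=True
    subprocess.run(["systemctl", "daemon-reload"], check=True)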
Dec 05 09:35:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60313 DF PROTO=TCP SPT=58446 DPT=9100 SEQ=3340411252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC35F990000000001030307) 
Dec 05 09:35:47 np0005546420.localdomain sudo[222999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpwwlbsunngilqblvlfkfryulxsweqkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927346.9192803-2668-185727819657764/AnsiballZ_command.py
Dec 05 09:35:47 np0005546420.localdomain sudo[222999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:47 np0005546420.localdomain python3.9[223001]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:35:47 np0005546420.localdomain sudo[222999]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:47 np0005546420.localdomain sudo[223110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvoudunqfcqhafaurpqtevigmzoyalex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927347.5458877-2668-133714613595672/AnsiballZ_command.py
Dec 05 09:35:47 np0005546420.localdomain sudo[223110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:48 np0005546420.localdomain python3.9[223112]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:35:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63127 DF PROTO=TCP SPT=40610 DPT=9105 SEQ=478288650 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC367A10000000001030307) 
Dec 05 09:35:49 np0005546420.localdomain sudo[223110]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:49 np0005546420.localdomain sudo[223221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mtjesgqjubnidqfuflgodlrzneeyilei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927349.1903262-2668-53809775919472/AnsiballZ_command.py
Dec 05 09:35:49 np0005546420.localdomain sudo[223221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:49 np0005546420.localdomain python3.9[223223]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:35:49 np0005546420.localdomain sudo[223221]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:50 np0005546420.localdomain sudo[223332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbeimwopyxlmaisnmjqzsywjtsgalddf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927349.7872665-2668-167607573230315/AnsiballZ_command.py
Dec 05 09:35:50 np0005546420.localdomain sudo[223332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:50 np0005546420.localdomain python3.9[223334]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:35:50 np0005546420.localdomain sudo[223332]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:50 np0005546420.localdomain sudo[223443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ytaornsonkqspqqcbfmseilmxpalvqiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927350.4210892-2668-142334852073249/AnsiballZ_command.py
Dec 05 09:35:50 np0005546420.localdomain sudo[223443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:50 np0005546420.localdomain python3.9[223445]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:35:50 np0005546420.localdomain sudo[223443]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:51 np0005546420.localdomain sshd[221685]: ssh_dispatch_run_fatal: Connection from 113.89.53.244 port 52832: Connection timed out [preauth]
Dec 05 09:35:51 np0005546420.localdomain sudo[223554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fgynkfihjncqguvutszflguxjgjuzyus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927351.0246766-2668-228421726563105/AnsiballZ_command.py
Dec 05 09:35:51 np0005546420.localdomain sudo[223554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:51 np0005546420.localdomain python3.9[223556]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:35:51 np0005546420.localdomain sudo[223554]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:51 np0005546420.localdomain sudo[223665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yolapbbaurggomnlybwkvkxlugyncfaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927351.607494-2668-86397873010454/AnsiballZ_command.py
Dec 05 09:35:51 np0005546420.localdomain sudo[223665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63129 DF PROTO=TCP SPT=40610 DPT=9105 SEQ=478288650 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC3739A0000000001030307) 
Dec 05 09:35:52 np0005546420.localdomain python3.9[223667]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:35:52 np0005546420.localdomain sudo[223665]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:52 np0005546420.localdomain sudo[223776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-txmozypzubggdbqwlbfwvapqwquckbwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927352.2211223-2668-28934063604756/AnsiballZ_command.py
Dec 05 09:35:52 np0005546420.localdomain sudo[223776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:52 np0005546420.localdomain python3.9[223778]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:35:53 np0005546420.localdomain sudo[223776]: pam_unix(sudo:session): session closed for user root
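Note: with the units stopped and their files deleted, `systemctl reset-failed` is run once per unit so that no stale failed state or start-rate counters linger in systemd's runtime memory for the removed services. As a sketch, reusing the unit list from above:

    import subprocess

    for unit in TRIPLEO_NOVA_UNITS:
        # Clear any remembered failed state for the removed units
        subprocess.run(["systemctl", "reset-failed", unit], check=False)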
Dec 05 09:35:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6480 DF PROTO=TCP SPT=50156 DPT=9102 SEQ=3713931122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC37FD90000000001030307) 
Dec 05 09:35:55 np0005546420.localdomain sudo[223887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chlwtiesbojydlhqzxetareialnmqgpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927354.9301922-2877-173660821906829/AnsiballZ_file.py
Dec 05 09:35:55 np0005546420.localdomain sudo[223887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:55 np0005546420.localdomain python3.9[223889]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:35:55 np0005546420.localdomain sudo[223887]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:55 np0005546420.localdomain sudo[223997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iueatautjuuybizohkvdtegbhbdkxybf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927355.556702-2877-199440951500666/AnsiballZ_file.py
Dec 05 09:35:55 np0005546420.localdomain sudo[223997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:56 np0005546420.localdomain python3.9[223999]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:35:56 np0005546420.localdomain sudo[223997]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:56 np0005546420.localdomain sudo[224107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gnzcandzivbhtliumolxgjokibeonvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927356.2520661-2877-165642242845854/AnsiballZ_file.py
Dec 05 09:35:56 np0005546420.localdomain sudo[224107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:56 np0005546420.localdomain python3.9[224109]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:35:56 np0005546420.localdomain sudo[224107]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:57 np0005546420.localdomain sudo[224217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pgtbttnywheifcozafzfylovkzmxbrez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927356.9218829-2941-254071036716640/AnsiballZ_file.py
Dec 05 09:35:57 np0005546420.localdomain sudo[224217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:57 np0005546420.localdomain python3.9[224219]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:35:57 np0005546420.localdomain sudo[224217]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:58 np0005546420.localdomain sudo[224327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efughehhezowjmoddquyuveaoucuftsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927358.3423193-2941-100938033000746/AnsiballZ_file.py
Dec 05 09:35:58 np0005546420.localdomain sudo[224327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:58 np0005546420.localdomain python3.9[224329]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:35:58 np0005546420.localdomain sudo[224327]: pam_unix(sudo:session): session closed for user root
Dec 05 09:35:59 np0005546420.localdomain sudo[224437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-balrgdfycpkaivoosesgtahhkbrmwijm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927358.9714246-2941-103290253646864/AnsiballZ_file.py
Dec 05 09:35:59 np0005546420.localdomain sudo[224437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:35:59 np0005546420.localdomain python3.9[224439]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:35:59 np0005546420.localdomain sudo[224437]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18604 DF PROTO=TCP SPT=38080 DPT=9102 SEQ=518200077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC395590000000001030307) 
Dec 05 09:36:00 np0005546420.localdomain sudo[224547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxwjsdvnhytdhbwutzigvkytwyxpweny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927359.6170495-2941-238749137722223/AnsiballZ_file.py
Dec 05 09:36:00 np0005546420.localdomain sudo[224547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:01 np0005546420.localdomain python3.9[224549]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:01 np0005546420.localdomain sudo[224547]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:01 np0005546420.localdomain sudo[224657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upldpwkbqbydnqtfbmpgwamginvjdwbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927361.1492457-2941-274496810612288/AnsiballZ_file.py
Dec 05 09:36:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:36:01 np0005546420.localdomain sudo[224657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:01 np0005546420.localdomain systemd[1]: tmp-crun.E3oQqE.mount: Deactivated successfully.
Dec 05 09:36:01 np0005546420.localdomain podman[224659]: 2025-12-05 09:36:01.553427985 +0000 UTC m=+0.117986121 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 05 09:36:01 np0005546420.localdomain podman[224659]: 2025-12-05 09:36:01.566587427 +0000 UTC m=+0.131145503 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 05 09:36:01 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:36:01 np0005546420.localdomain python3.9[224660]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:01 np0005546420.localdomain sudo[224657]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:02 np0005546420.localdomain sudo[224787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdoqooiabonkerxthbbdgmqaatlagjxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927361.787576-2941-183501612170328/AnsiballZ_file.py
Dec 05 09:36:02 np0005546420.localdomain sudo[224787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:02 np0005546420.localdomain python3.9[224789]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:02 np0005546420.localdomain sudo[224787]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:02 np0005546420.localdomain sudo[224897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltsbivfxdveswhkcfakrwvdlnbqknayr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927362.457522-2941-25419170708788/AnsiballZ_file.py
Dec 05 09:36:02 np0005546420.localdomain sudo[224897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28986 DF PROTO=TCP SPT=40886 DPT=9882 SEQ=2740782447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC39DD90000000001030307) 
Dec 05 09:36:02 np0005546420.localdomain python3.9[224899]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:02 np0005546420.localdomain sudo[224897]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:36:04.081 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:36:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:36:04.082 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:36:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:36:04.083 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:36:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63131 DF PROTO=TCP SPT=40610 DPT=9105 SEQ=478288650 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC3A3DA0000000001030307) 
Dec 05 09:36:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:36:05 np0005546420.localdomain systemd[1]: tmp-crun.Crolzo.mount: Deactivated successfully.
Dec 05 09:36:05 np0005546420.localdomain podman[224917]: 2025-12-05 09:36:05.515390705 +0000 UTC m=+0.091648027 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:36:05 np0005546420.localdomain podman[224917]: 2025-12-05 09:36:05.612756129 +0000 UTC m=+0.189013441 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 09:36:05 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:36:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40859 DF PROTO=TCP SPT=32996 DPT=9101 SEQ=1036259580 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC3ADD90000000001030307) 
Dec 05 09:36:08 np0005546420.localdomain sudo[225032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfvzuwixjhygjubjajfcdeavzkuqvpel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927368.0707152-3266-204537183065996/AnsiballZ_getent.py
Dec 05 09:36:08 np0005546420.localdomain sudo[225032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:08 np0005546420.localdomain python3.9[225034]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 05 09:36:08 np0005546420.localdomain sudo[225032]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:09 np0005546420.localdomain sudo[225143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmozhjrfibsxxdkixgykagrsueolafxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927368.8991668-3290-83705283158320/AnsiballZ_group.py
Dec 05 09:36:09 np0005546420.localdomain sudo[225143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:10 np0005546420.localdomain python3.9[225145]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 09:36:10 np0005546420.localdomain groupadd[225146]: group added to /etc/group: name=nova, GID=42436
Dec 05 09:36:10 np0005546420.localdomain groupadd[225146]: group added to /etc/gshadow: name=nova
Dec 05 09:36:10 np0005546420.localdomain groupadd[225146]: new group: name=nova, GID=42436
Dec 05 09:36:10 np0005546420.localdomain sudo[225143]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7253 DF PROTO=TCP SPT=50832 DPT=9100 SEQ=1761988324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC3BD1A0000000001030307) 
Dec 05 09:36:10 np0005546420.localdomain sudo[225259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dsqmsmjayhmohnzofvywjeqxptiakulg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927370.4784725-3314-198760656007656/AnsiballZ_user.py
Dec 05 09:36:10 np0005546420.localdomain sudo[225259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:11 np0005546420.localdomain python3.9[225261]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546420.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 09:36:11 np0005546420.localdomain useradd[225263]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 05 09:36:11 np0005546420.localdomain useradd[225263]: add 'nova' to group 'libvirt'
Dec 05 09:36:11 np0005546420.localdomain useradd[225263]: add 'nova' to shadow group 'libvirt'
Dec 05 09:36:11 np0005546420.localdomain sudo[225259]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:12 np0005546420.localdomain sshd[225287]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:36:12 np0005546420.localdomain sshd[225287]: Accepted publickey for zuul from 192.168.122.30 port 44314 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:36:12 np0005546420.localdomain systemd-logind[762]: New session 54 of user zuul.
Dec 05 09:36:12 np0005546420.localdomain systemd[1]: Started Session 54 of User zuul.
Dec 05 09:36:12 np0005546420.localdomain sshd[225287]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:36:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7254 DF PROTO=TCP SPT=50832 DPT=9100 SEQ=1761988324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC3C51A0000000001030307) 
Dec 05 09:36:12 np0005546420.localdomain sshd[225290]: Received disconnect from 192.168.122.30 port 44314:11: disconnected by user
Dec 05 09:36:12 np0005546420.localdomain sshd[225290]: Disconnected from user zuul 192.168.122.30 port 44314
Dec 05 09:36:12 np0005546420.localdomain sshd[225287]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:36:12 np0005546420.localdomain systemd[1]: session-54.scope: Deactivated successfully.
Dec 05 09:36:12 np0005546420.localdomain systemd-logind[762]: Session 54 logged out. Waiting for processes to exit.
Dec 05 09:36:12 np0005546420.localdomain systemd-logind[762]: Removed session 54.
Dec 05 09:36:13 np0005546420.localdomain python3.9[225398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:36:13 np0005546420.localdomain python3.9[225484]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927372.968364-3389-109876787110957/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:14 np0005546420.localdomain python3.9[225592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:36:14 np0005546420.localdomain python3.9[225647]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:36:15 np0005546420.localdomain podman[225756]: 2025-12-05 09:36:15.518552814 +0000 UTC m=+0.091286517 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Dec 05 09:36:15 np0005546420.localdomain python3.9[225755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:36:15 np0005546420.localdomain podman[225756]: 2025-12-05 09:36:15.550556014 +0000 UTC m=+0.123289677 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:36:15 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:36:16 np0005546420.localdomain python3.9[225858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927375.0777793-3389-123735022333956/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:16 np0005546420.localdomain python3.9[225966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:36:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7255 DF PROTO=TCP SPT=50832 DPT=9100 SEQ=1761988324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC3D4D90000000001030307) 
Dec 05 09:36:17 np0005546420.localdomain python3.9[226052]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927376.2033064-3389-272928561614922/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=a84d6f6effa9a5ffb33218dbf52341ee4c9a75da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:17 np0005546420.localdomain python3.9[226160]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:36:18 np0005546420.localdomain python3.9[226246]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927377.290099-3389-36477779510723/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:18 np0005546420.localdomain python3.9[226354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:36:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10336 DF PROTO=TCP SPT=58020 DPT=9105 SEQ=3956441234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC3DCD00000000001030307) 
Dec 05 09:36:19 np0005546420.localdomain python3.9[226440]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927378.32889-3389-240929173555687/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:19 np0005546420.localdomain sudo[226548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vpmcmvpcgzspjpkqliyxyxuqlciqnsvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927379.5543969-3638-250329526545236/AnsiballZ_file.py
Dec 05 09:36:19 np0005546420.localdomain sudo[226548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:20 np0005546420.localdomain python3.9[226550]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:36:20 np0005546420.localdomain sudo[226548]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:21 np0005546420.localdomain sudo[226658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aftmaikyjgomuophgzddocylcmynrtyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927380.2334871-3662-22035605698259/AnsiballZ_copy.py
Dec 05 09:36:21 np0005546420.localdomain sudo[226658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:21 np0005546420.localdomain python3.9[226660]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:36:21 np0005546420.localdomain sudo[226658]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:21 np0005546420.localdomain sudo[226768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nlpcldognnitzpsphlvzsntllhgrqacq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927381.65332-3686-1017141784793/AnsiballZ_stat.py
Dec 05 09:36:21 np0005546420.localdomain sudo[226768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10338 DF PROTO=TCP SPT=58020 DPT=9105 SEQ=3956441234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC3E8D90000000001030307) 
Dec 05 09:36:22 np0005546420.localdomain python3.9[226770]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:36:22 np0005546420.localdomain sudo[226768]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:22 np0005546420.localdomain sudo[226880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofqzgtjmlsymyqinrkuirpzgtioprkrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927382.4367754-3713-93278556662225/AnsiballZ_file.py
Dec 05 09:36:22 np0005546420.localdomain sudo[226880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:22 np0005546420.localdomain python3.9[226882]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:36:22 np0005546420.localdomain sudo[226880]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:24 np0005546420.localdomain python3.9[226990]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:36:24 np0005546420.localdomain python3.9[227100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:36:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18606 DF PROTO=TCP SPT=38080 DPT=9102 SEQ=518200077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC3F5D90000000001030307) 
Dec 05 09:36:25 np0005546420.localdomain python3.9[227186]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927384.4431915-3764-73795126900787/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:26 np0005546420.localdomain python3.9[227294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:36:26 np0005546420.localdomain python3.9[227380]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927385.7610474-3809-237171981176063/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:36:27 np0005546420.localdomain sudo[227488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crwrvejklwvkzurcwkhioxmvpzutghcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927387.1325426-3860-206885755883436/AnsiballZ_container_config_data.py
Dec 05 09:36:27 np0005546420.localdomain sudo[227488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:27 np0005546420.localdomain python3.9[227490]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 05 09:36:27 np0005546420.localdomain sudo[227488]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:28 np0005546420.localdomain sudo[227598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mapadjjlczzrgtqplcktsnsbviphqqgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927387.8823113-3887-105078879164389/AnsiballZ_container_config_hash.py
Dec 05 09:36:28 np0005546420.localdomain sudo[227598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:28 np0005546420.localdomain python3.9[227600]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:36:28 np0005546420.localdomain sudo[227598]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:29 np0005546420.localdomain sudo[227708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rsihnyjahzrztgayukcgibmztluojpdx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927388.7615392-3917-97746498137992/AnsiballZ_edpm_container_manage.py
Dec 05 09:36:29 np0005546420.localdomain sudo[227708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:29 np0005546420.localdomain python3[227710]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:36:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47078 DF PROTO=TCP SPT=45782 DPT=9102 SEQ=3084634905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC40A590000000001030307) 
Dec 05 09:36:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:36:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58640 DF PROTO=TCP SPT=45098 DPT=9882 SEQ=2971989691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC411DA0000000001030307) 
Dec 05 09:36:32 np0005546420.localdomain podman[227736]: 2025-12-05 09:36:32.534994219 +0000 UTC m=+0.111499990 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 05 09:36:32 np0005546420.localdomain podman[227736]: 2025-12-05 09:36:32.572306476 +0000 UTC m=+0.148812227 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 09:36:32 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:36:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10340 DF PROTO=TCP SPT=58020 DPT=9105 SEQ=3956441234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC419D90000000001030307) 
Dec 05 09:36:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:36:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5749 DF PROTO=TCP SPT=44530 DPT=9101 SEQ=1165813322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC423D90000000001030307) 
Dec 05 09:36:40 np0005546420.localdomain podman[227779]: 2025-12-05 09:36:40.050173547 +0000 UTC m=+4.206640126 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller)
Dec 05 09:36:40 np0005546420.localdomain podman[227723]: 2025-12-05 09:36:29.489463501 +0000 UTC m=+0.089725370 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 05 09:36:40 np0005546420.localdomain podman[227779]: 2025-12-05 09:36:40.095308205 +0000 UTC m=+4.251774824 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 05 09:36:40 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:36:40 np0005546420.localdomain podman[227827]: 
Dec 05 09:36:40 np0005546420.localdomain podman[227827]: 2025-12-05 09:36:40.332642924 +0000 UTC m=+0.089741180 container create aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:36:40 np0005546420.localdomain podman[227827]: 2025-12-05 09:36:40.287929029 +0000 UTC m=+0.045027325 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 05 09:36:40 np0005546420.localdomain python3[227710]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 05 09:36:40 np0005546420.localdomain sudo[227708]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32088 DF PROTO=TCP SPT=36528 DPT=9100 SEQ=2689085571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC432590000000001030307) 
Dec 05 09:36:40 np0005546420.localdomain sudo[227972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vormkozxdhovsmmfsrnsfiehewjxuslh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927400.735009-3941-6503525733675/AnsiballZ_stat.py
Dec 05 09:36:41 np0005546420.localdomain sudo[227972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:41 np0005546420.localdomain python3.9[227974]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:36:41 np0005546420.localdomain sudo[227975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:36:41 np0005546420.localdomain sudo[227975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:36:41 np0005546420.localdomain sudo[227975]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:41 np0005546420.localdomain sudo[227972]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:41 np0005546420.localdomain sudo[227995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:36:41 np0005546420.localdomain sudo[227995]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:36:42 np0005546420.localdomain sudo[227995]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:42 np0005546420.localdomain sudo[228152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-altcvhlzjaeqcquwkeyuiaokxbigjgan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927401.8879142-3977-17650964221949/AnsiballZ_container_config_data.py
Dec 05 09:36:42 np0005546420.localdomain sudo[228152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:42 np0005546420.localdomain python3.9[228154]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 05 09:36:42 np0005546420.localdomain sudo[228152]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:42 np0005546420.localdomain sudo[228172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:36:42 np0005546420.localdomain sudo[228172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:36:42 np0005546420.localdomain sudo[228172]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32089 DF PROTO=TCP SPT=36528 DPT=9100 SEQ=2689085571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC43A590000000001030307) 
Dec 05 09:36:43 np0005546420.localdomain sudo[228280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sivvlcyqynndslxbakwsetbwwbopevfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927402.8832202-4004-162642905293790/AnsiballZ_container_config_hash.py
Dec 05 09:36:43 np0005546420.localdomain sudo[228280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:43 np0005546420.localdomain python3.9[228282]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:36:43 np0005546420.localdomain sudo[228280]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:44 np0005546420.localdomain sudo[228390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jngcvdxvibaftddicxibbneevkgdvltq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927403.8015893-4034-241911356952406/AnsiballZ_edpm_container_manage.py
Dec 05 09:36:44 np0005546420.localdomain sudo[228390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:44 np0005546420.localdomain python3[228392]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:36:44 np0005546420.localdomain python3[228392]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
                                                                    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:31:10.62653219Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1211779450,
                                                                    "VirtualSize": 1211779450,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
                                                                              "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:53.072482982Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:02.761216507Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:03.785234187Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:17.194997182Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:24.14458279Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:30.048641643Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:09.707360362Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.208898452Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624465805Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624514176Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:18.661822382Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 05 09:36:44 np0005546420.localdomain podman[228442]: 2025-12-05 09:36:44.758567859 +0000 UTC m=+0.088160404 container remove ac5838814f71f83954b3f0ba2326e5c0cf48aeb57187eba0e64c70650888658e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'f466dfc41ade6bb0052985f932e2b61e-ac0f5be6f71e6f8c16cd05155c4b5429'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, release=1761123044)
Dec 05 09:36:44 np0005546420.localdomain python3[228392]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute
Dec 05 09:36:44 np0005546420.localdomain podman[228456]: 
Dec 05 09:36:44 np0005546420.localdomain podman[228456]: 2025-12-05 09:36:44.875997422 +0000 UTC m=+0.094838780 container create 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:36:44 np0005546420.localdomain podman[228456]: 2025-12-05 09:36:44.830170475 +0000 UTC m=+0.049011873 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 05 09:36:44 np0005546420.localdomain python3[228392]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 05 09:36:45 np0005546420.localdomain sudo[228390]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:45 np0005546420.localdomain sudo[228600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twfrzcpqhljayxlbkmnlhrrolhodksmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927405.263117-4058-224446695475781/AnsiballZ_stat.py
Dec 05 09:36:45 np0005546420.localdomain sudo[228600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:45 np0005546420.localdomain python3.9[228602]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:36:45 np0005546420.localdomain sudo[228600]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:36:46 np0005546420.localdomain sudo[228712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pmsqekroajcemzpydbrwlvluuozihrpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927406.1373255-4085-2082245421573/AnsiballZ_file.py
Dec 05 09:36:46 np0005546420.localdomain sudo[228712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:46 np0005546420.localdomain podman[228713]: 2025-12-05 09:36:46.524241987 +0000 UTC m=+0.091142232 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 09:36:46 np0005546420.localdomain podman[228713]: 2025-12-05 09:36:46.534407855 +0000 UTC m=+0.101308110 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:36:46 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:36:46 np0005546420.localdomain python3.9[228720]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:36:46 np0005546420.localdomain sudo[228712]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32090 DF PROTO=TCP SPT=36528 DPT=9100 SEQ=2689085571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC44A190000000001030307) 
Dec 05 09:36:47 np0005546420.localdomain sudo[228887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqispgqwnityozdsjggqajcynkltbhqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927406.728598-4085-128004100999860/AnsiballZ_copy.py
Dec 05 09:36:47 np0005546420.localdomain sudo[228887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:47 np0005546420.localdomain python3.9[228889]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927406.728598-4085-128004100999860/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:36:47 np0005546420.localdomain sudo[228887]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:48 np0005546420.localdomain sudo[228942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-psqpatalnqogfodloeuzoxbnfunbjaxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927406.728598-4085-128004100999860/AnsiballZ_systemd.py
Dec 05 09:36:48 np0005546420.localdomain sudo[228942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:48 np0005546420.localdomain python3.9[228944]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:36:48 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:36:48 np0005546420.localdomain systemd-sysv-generator[228975]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:36:48 np0005546420.localdomain systemd-rc-local-generator[228969]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:36:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:36:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58641 DF PROTO=TCP SPT=45098 DPT=9882 SEQ=2971989691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC451D90000000001030307) 
Dec 05 09:36:48 np0005546420.localdomain sudo[228942]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:49 np0005546420.localdomain sudo[229033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dscyvnqbhflqvrbwzddeiuhxesudlaob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927406.728598-4085-128004100999860/AnsiballZ_systemd.py
Dec 05 09:36:49 np0005546420.localdomain sudo[229033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:50 np0005546420.localdomain python3.9[229035]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:36:50 np0005546420.localdomain systemd-sysv-generator[229068]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:36:50 np0005546420.localdomain systemd-rc-local-generator[229064]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: Starting nova_compute container...
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:36:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 09:36:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 05 09:36:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 09:36:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 09:36:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 09:36:50 np0005546420.localdomain podman[229076]: 2025-12-05 09:36:50.75527903 +0000 UTC m=+0.146572491 container init 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 09:36:50 np0005546420.localdomain podman[229076]: 2025-12-05 09:36:50.766169701 +0000 UTC m=+0.157463202 container start 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 09:36:50 np0005546420.localdomain podman[229076]: nova_compute
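
[annotation] The two podman events above (container init, container start) carry the full edpm_ansible container definition in their config_data label. As a sketch, not the deployment tool's actual code, that definition maps onto a podman run invocation roughly like this (volume list truncated; see the label above for the full set):

    # Turn the config_data label into the equivalent `podman run` argv.
    config_data = {
        "image": "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "privileged": True, "user": "nova", "restart": "always",
        "command": "kolla_start", "net": "host", "pid": "host",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [  # truncated; the log above lists all sixteen mounts
            "/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro",
            "/var/lib/nova:/var/lib/nova:shared",
        ],
    }

    argv = ["podman", "run", "--name", "nova_compute",
            "--net", config_data["net"], "--pid", config_data["pid"],
            "--restart", config_data["restart"], "--user", config_data["user"]]
    if config_data["privileged"]:
        argv.append("--privileged")
    for key, value in config_data["environment"].items():
        argv += ["-e", f"{key}={value}"]
    for volume in config_data["volumes"]:
        argv += ["-v", volume]
    argv += [config_data["image"], config_data["command"]]
    print(" ".join(argv))

net=host and pid=host explain why the service's network and process views in later log lines are the host's own.
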
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: + sudo -E kolla_set_configs
Dec 05 09:36:50 np0005546420.localdomain systemd[1]: Started nova_compute container.
Dec 05 09:36:50 np0005546420.localdomain sudo[229033]: pam_unix(sudo:session): session closed for user root
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Validating config file
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying service configuration files
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Deleting /etc/ceph
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Creating directory /etc/ceph
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/ceph
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Writing out command to execute
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
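
[annotation] The INFO:__main__ run above is kolla_set_configs executing /var/lib/kolla/config_files/config.json: with the COPY_ALWAYS strategy it re-copies every source file into place on each container start and then fixes ownership and permissions, which is why the same "Setting permission" paths appear twice. Note the last copy replaces /usr/sbin/iscsiadm with a run-on-host wrapper so iSCSI commands execute in the host's namespaces. An illustrative shape of such a config.json, with paths taken from the log but owner/perm values representative rather than copied from this host:

    # Illustrative config.json content for this container (sketch only).
    config_json = {
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/run-on-host",
             "dest": "/usr/sbin/iscsiadm",
             "owner": "root", "perm": "0755"},
        ],
    }
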
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: ++ cat /run_command
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: + CMD=nova-compute
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: + ARGS=
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: + sudo kolla_copy_cacerts
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: + [[ ! -n '' ]]
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: + . kolla_extend_start
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: Running command: 'nova-compute'
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: + echo 'Running command: '\''nova-compute'\'''
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: + umask 0022
Dec 05 09:36:50 np0005546420.localdomain nova_compute[229091]: + exec nova-compute
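
[annotation] The shell trace above (++ cat /run_command through + exec nova-compute) is the tail of kolla_start: it reads the command kolla_set_configs wrote out, sets the umask, and execs it so the service process replaces the startup shell. A minimal sketch of that pattern:

    # Read the command written during "Writing out command to execute"
    # above, then replace the current process with it (no fork).
    import os
    import shlex

    with open("/run_command") as f:
        cmd = shlex.split(f.read().strip())  # here: ["nova-compute"]
    print(f"Running command: {cmd!r}")
    os.umask(0o022)
    os.execvp(cmd[0], cmd)
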
Dec 05 09:36:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27550 DF PROTO=TCP SPT=52138 DPT=9105 SEQ=1076865942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC45E190000000001030307) 
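
[annotation] The kernel line above is a host firewall rule logging and dropping a TCP SYN on br-ex from 192.168.122.10 to 192.168.122.107 port 9105; the "DROPPING:" prefix is whatever log prefix the rule was configured with. A small sketch for pulling the key=value fields out of such lines:

    # Parse the key=value pairs from a firewall log line (line abridged
    # here; the full text is in the log above).
    import re

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "SRC=192.168.122.10 DST=192.168.122.107 PROTO=TCP "
            "SPT=52138 DPT=9105")
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    print(fields["SRC"], "->", fields["DST"],
          "proto", fields["PROTO"], "dport", fields["DPT"])
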
Dec 05 09:36:52 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:52.666 229095 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 09:36:52 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:52.666 229095 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 09:36:52 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:52.666 229095 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 09:36:52 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:52.666 229095 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
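
[annotation] The four os_vif lines above show plugin discovery: the linux_bridge, noop, and ovs VIF plugins are loaded as Python entry points. A sketch of how such discovery typically looks with stevedore, assuming the entry-point namespace is "os_vif" (an assumption; the log does not name it):

    # List the VIF plugins registered under the assumed namespace.
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    print(sorted(mgr.names()))  # expected here: ['linux_bridge', 'noop', 'ovs']
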
Dec 05 09:36:52 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:52.833 229095 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:36:52 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:52.861 229095 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:36:52 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:52.861 229095 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
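
[annotation] The grep that "failed" above is a capability probe, not an error: the volume-attach code greps the iscsiadm binary for the node.session.scan option to decide whether manual iSCSI scans are supported, and exit status 1 simply means the string is absent (plausible here, since /usr/sbin/iscsiadm was replaced by the run-on-host wrapper earlier in this log). The probe reduced to a sketch:

    # Exit status 0 -> iscsiadm knows node.session.scan (manual scan
    # supported); 1 -> it does not. No retry is needed either way.
    import subprocess

    res = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        capture_output=True)
    supports_manual_scan = (res.returncode == 0)
    print("manual scan supported:", supports_manual_scan)
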
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.290 229095 INFO nova.virt.driver [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.413 229095 INFO nova.compute.provider_config [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.421 229095 WARNING nova.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.
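
[annotation] The WARNING above is nova's service-version gate: the minimum compute service version recorded in the cell (57) is below the oldest version this nova release supports (61). That startup continues as a warning rather than aborting suggests, though the log does not say so outright, that the cells workaround config copied in earlier (99-nova-compute-cells-workarounds.conf) relaxes this check. The gate reduced to its core, as a paraphrase rather than nova's exact code:

    # Values taken from the warning message above.
    OLDEST_SUPPORTED_SERVICE_VERSION = 61
    minimum_in_cell = 57

    if minimum_in_cell < OLDEST_SUPPORTED_SERVICE_VERSION:
        # nova raises nova.exception.TooOldComputeService here; on this
        # host it is logged as a WARNING and startup proceeds.
        print("TooOldComputeService: cell minimum", minimum_in_cell,
              "< oldest supported", OLDEST_SUPPORTED_SERVICE_VERSION)
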
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.421 229095 DEBUG oslo_concurrency.lockutils [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.422 229095 DEBUG oslo_concurrency.lockutils [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.422 229095 DEBUG oslo_concurrency.lockutils [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.422 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.422 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.422 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.423 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.423 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.423 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.423 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.423 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.423 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.423 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.424 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.424 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.424 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.424 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.424 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.424 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.424 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.424 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.425 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.425 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] console_host                   = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.425 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.425 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.425 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.425 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.425 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.426 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.426 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.426 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.426 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.426 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.426 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.426 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.427 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.427 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.427 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.427 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.427 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.427 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.427 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] host                           = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.428 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.428 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.428 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.428 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.428 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.428 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.429 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.429 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.429 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.429 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.429 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.429 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.429 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.430 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.430 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.430 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.430 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.430 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.430 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.430 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.430 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.431 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.431 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.431 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.431 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.431 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.431 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.431 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.431 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.432 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.432 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.432 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.432 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.432 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.432 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.433 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.433 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.433 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.433 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.433 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.433 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.433 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] my_block_storage_ip            = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.434 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] my_ip                          = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.434 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.434 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.434 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.434 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.434 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.434 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.435 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.435 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.435 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.435 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.435 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.435 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.435 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.435 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.436 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.436 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.436 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.436 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.436 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.436 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.436 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.437 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.437 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.437 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.437 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.437 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.437 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.437 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.437 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.438 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.438 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.438 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.438 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.438 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.438 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.438 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.438 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.439 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.439 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.439 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.439 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.439 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.439 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.439 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.440 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.440 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.440 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.440 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.440 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.440 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.440 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.440 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.441 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.441 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.441 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.441 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.441 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.441 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.441 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.441 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.442 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.442 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.442 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.442 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.442 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.442 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
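The records above are the tail of nova-compute's [DEFAULT] option dump. Each line is emitted by oslo.config's ConfigOpts.log_opt_values() (the cfg.py frames at the end of every record), which walks the registered options and logs one DEBUG line per value, printing **** for any option registered with secret=True (transport_url above). A minimal sketch of reproducing such a dump, with illustrative option names standing in for nova's real registrations:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    # Illustrative registrations; nova registers hundreds of opts at startup.
    CONF.register_opts([
        cfg.BoolOpt('use_cow_images', default=True),
        cfg.StrOpt('transport_url', secret=True),  # secret=True is why the log shows ****
    ])
    CONF([])  # parse command line / --config-file arguments (none here)
    CONF.log_opt_values(LOG, logging.DEBUG)  # one "name = value" DEBUG record per opt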
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.442 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.443 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.443 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.443 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.443 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.443 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.443 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
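From here on the records carry group-qualified names (oslo_concurrency.lock_path, oslo_messaging_metrics.metrics_buffer_size, and so on): oslo.config scopes options into named groups, and each group maps both to a [section] in nova.conf and to an attribute on the CONF object. A sketch of the registration and lookup pattern, using the oslo_concurrency group from the records above as the example:

    from oslo_config import cfg

    CONF = cfg.CONF
    group = cfg.OptGroup('oslo_concurrency')
    CONF.register_group(group)
    CONF.register_opts([
        cfg.BoolOpt('disable_process_locking', default=False),
        cfg.StrOpt('lock_path'),  # no default; dumps as None until configured
    ], group=group)
    CONF([])
    print(CONF.oslo_concurrency.disable_process_locking)  # -> False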
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.443 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.444 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.444 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.444 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.444 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.444 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.444 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.444 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.445 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.445 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.445 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.445 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.445 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.445 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.445 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.446 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.446 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.446 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.446 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.446 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.446 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.446 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.446 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
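The [api] values above mix coded defaults with deployment overrides; oslo.config resolves each option from the command line first, then each --config-file in order, then the registered default. A self-contained sketch of a file override, using a hypothetical throwaway config file and one of the [api] options from the dump:

    import os
    import tempfile
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()  # private instance to keep the sketch isolated
    api = cfg.OptGroup('api')
    CONF.register_group(api)
    CONF.register_opts([cfg.IntOpt('max_limit', default=1000)], group=api)

    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write('[api]\nmax_limit = 500\n')
        path = f.name
    CONF(['--config-file', path])
    assert CONF.api.max_limit == 500  # file value overrides the coded default
    os.unlink(path)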
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.447 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.447 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.447 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.447 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.447 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.447 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.447 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.448 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.448 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.448 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.448 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.448 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.448 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.448 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.448 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.449 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.449 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.449 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.449 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.449 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.449 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.449 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.450 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.450 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.450 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.450 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.450 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.450 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.450 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.451 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.451 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.451 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
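The [cache] group above is oslo.cache's standard option set; the service builds a dogpile.cache region from it (backend oslo_cache.dict and enabled=True in these records, with the memcache_* and tls_* knobs only consulted by the matching backends). A sketch of wiring a region from these options; the two set_override calls are assumptions that mirror the logged values, which a real deployment would supply via nova.conf instead:

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.CONF
    cache.configure(CONF)  # registers the [cache] opts logged above
    CONF([])
    CONF.set_override('enabled', True, group='cache')
    CONF.set_override('backend', 'oslo_cache.dict', group='cache')

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)  # honors backend, expiration_time, ...
    region.set('answer', 42)
    assert region.get('answer') == 42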
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.451 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.451 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.451 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.451 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.451 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.452 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.452 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.452 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.452 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.452 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.452 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.452 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.453 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.453 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.453 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.453 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.453 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.453 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.453 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.453 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.454 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.454 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.454 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.454 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.454 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.454 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.454 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
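compute.cpu_dedicated_set and compute.cpu_shared_set (both unset here) take CPU-range strings such as "0-3,8-11"; the deprecated vcpu_pin_set earlier in this dump used the same syntax. A toy parser for the range syntax, as an illustration only (nova ships its own parser, which also understands "^n" exclusions, not handled here):

    def parse_cpu_set(spec: str) -> set[int]:
        """Parse a CPU-set string like "0-3,8" into {0, 1, 2, 3, 8}."""
        cpus: set[int] = set()
        for chunk in spec.split(','):
            chunk = chunk.strip()
            if '-' in chunk:
                lo, hi = chunk.split('-', 1)
                cpus.update(range(int(lo), int(hi) + 1))
            else:
                cpus.add(int(chunk))
        return cpus

    assert parse_cpu_set('0-3,8') == {0, 1, 2, 3, 8}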
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.455 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.455 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.455 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.455 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.455 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.455 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.455 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.456 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.456 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.456 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.456 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.456 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.456 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.456 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.456 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.457 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.457 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.457 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.457 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.457 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.457 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.457 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.458 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.458 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.458 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.458 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.458 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.458 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.458 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.459 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.459 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.459 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.459 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.459 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.459 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.459 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.459 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.460 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.460 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.460 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.460 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.460 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.460 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.460 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
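The database.db_* options above describe a bounded retry loop around failed DB operations: up to db_max_retries attempts, starting at db_retry_interval seconds and, because db_inc_retry_interval is True, growing the wait up to db_max_retry_interval. A sketch of the implied wait schedule, assuming exponential growth (the option help only promises an increasing interval; the doubling here is illustrative, not oslo.db's code):

    def retry_waits(max_retries=20, interval=1, inc=True, max_interval=10):
        """Yield the wait (seconds) before each retry, per the [database] opts above."""
        wait = interval
        for _ in range(max_retries):
            yield wait
            if inc:
                wait = min(wait * 2, max_interval)

    print(list(retry_waits())[:6])  # [1, 2, 4, 8, 10, 10]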
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.461 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.461 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.461 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.461 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.461 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.461 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.461 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.461 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.462 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.462 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.462 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.462 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.462 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.462 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.462 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.463 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.463 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.463 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.463 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.463 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.463 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.463 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.463 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.464 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.464 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.464 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.464 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.464 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.464 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.464 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.465 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.465 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.465 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.465 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.465 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.465 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.465 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.465 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.466 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.466 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.466 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.466 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.466 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.466 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.466 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.467 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.467 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.467 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.467 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.467 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.467 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.467 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.467 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.468 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
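The cinder, cyborg, and glance blocks above repeat the same cafile/certfile/connect_retries/valid_interfaces/... surface because those are keystoneauth1's standard session and adapter option sets, registered once per service group. A sketch of that pattern with keystoneauth1.loading; the group name mirrors the [glance] section above, and the printed default is None because nova's own registration (not shown) is what supplies "image":

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    ks_loading.register_session_conf_options(CONF, 'glance')  # cafile, certfile, timeout, ...
    ks_loading.register_adapter_conf_options(CONF, 'glance')  # service_type, valid_interfaces, ...
    CONF([])
    print(CONF.glance.service_type)  # -> None in this sketch; "image" in the dump above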
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.468 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.468 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.468 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.468 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.468 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.468 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.469 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.469 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.469 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.469 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.469 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.469 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.469 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.469 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.470 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.470 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.470 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.470 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.470 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
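[editor's note] The hyperv.* options above are all at their defaults; nova registers them unconditionally, but they are inert on this node because the libvirt driver is in use (libvirt.virt_type = kvm further down). The retry knobs still read naturally as a time budget; a back-of-the-envelope sketch, assuming a plain fixed-interval retry loop (the driver's real loop may differ):

```python
# Worst-case waits implied by the hyperv retry options above, assuming
# a simple fixed-interval retry loop.
disk_query_budget = 10 * 5      # mounted_disk_query_retry_count * ..._interval
volume_attach_budget = 10 * 5   # volume_attach_retry_count * ..._interval

print(f"mounted-disk query: up to ~{disk_query_budget}s")     # ~50s
print(f"volume attach:      up to ~{volume_attach_budget}s")  # ~50s
```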
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.470 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.471 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.471 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.471 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.471 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.471 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.471 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.471 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
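[editor's note] The image_cache intervals above are all in seconds; converted to human units they describe a cache-manager pass every 40 minutes that prunes base images unused for a day and resized images unused for an hour. Plain arithmetic on the logged values:

```python
# The image_cache timings above, converted to human-readable units.
from datetime import timedelta

print(timedelta(seconds=2400))    # manager_interval                      -> 0:40:00
print(timedelta(seconds=86400))   # remove_unused_original_minimum_age... -> 1 day
print(timedelta(seconds=3600))    # remove_unused_resized_minimum_age...  -> 1:00:00
```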
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.472 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.472 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.472 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.472 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.472 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.472 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.472 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.472 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.473 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.473 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.473 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.473 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.473 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.473 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.473 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.474 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.474 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.474 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.474 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.474 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.474 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.474 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.474 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.475 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.475 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.475 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
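[editor's note] Most of the ironic.* options above (service_type, valid_interfaces, connect_retries, timeout, and so on) are standard keystoneauth1 adapter options; None simply means "fall back to the library default". A hedged sketch of how such values map onto a keystoneauth1 Adapter; nova actually builds this through keystoneauth's config-loading helpers, not literally like this:

```python
# Illustrative mapping of the ironic.* adapter options onto keystoneauth1.
from keystoneauth1 import adapter, session

sess = session.Session()                 # no auth plugin: illustration only
ironic = adapter.Adapter(
    session=sess,
    service_type='baremetal',            # ironic.service_type
    interface=['internal', 'public'],    # ironic.valid_interfaces (preference order)
    region_name=None,                    # ironic.region_name (unset)
    connect_retries=None,                # None -> keystoneauth's own default
)
```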
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.475 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.475 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.475 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.475 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.476 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.476 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.476 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.476 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.476 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.476 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.476 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.476 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.477 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.477 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.477 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.477 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.477 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.477 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.477 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.478 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.478 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.478 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.478 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.478 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.478 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.478 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.479 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.479 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
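[editor's note] key_manager.fixed_key is printed as **** because oslo.config masks any option registered with secret=True when dumping values, so the real value (if one is set) never reaches the log. A toy demonstration of the masking:

```python
# Why key_manager.fixed_key appears as "****": options registered with
# secret=True are masked by log_opt_values (toy option set, not nova's).
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [cfg.StrOpt('fixed_key', secret=True, default='super-secret')],
    group='key_manager',
)
CONF(args=[])
CONF.log_opt_values(LOG, logging.DEBUG)   # prints: key_manager.fixed_key = ****
```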
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.479 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.479 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.479 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.479 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.479 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.480 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.480 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.480 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.480 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.480 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.480 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.480 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.480 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.481 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.481 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.481 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
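[editor's note] The vault.* block describes castellan's Vault backend defaults: a KV secrets engine mounted at `secret`, KV protocol version 2, server at http://127.0.0.1:8200, TLS off. For a feel of what those values address, an illustrative read with the hvac client; this is not what castellan does internally, and the secret path is hypothetical:

```python
# Illustrative KV v2 read against the Vault endpoint described above.
import hvac

client = hvac.Client(url='http://127.0.0.1:8200')   # vault.vault_url
secret = client.secrets.kv.v2.read_secret_version(
    path='nova/example-key',                        # hypothetical secret name
    mount_point='secret',                           # vault.kv_mountpoint
)
print(secret['data']['data'])                       # KV v2 nests payload under data.data
```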
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.481 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.481 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.481 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.481 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.482 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.482 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.482 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.482 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.482 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.482 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.482 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.482 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.483 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.483 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.483 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.483 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.483 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.483 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.483 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.484 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.484 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.484 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.484 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.484 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.484 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.484 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.485 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.485 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.485 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.485 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.485 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.485 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.485 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.486 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.486 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.486 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.486 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.486 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.486 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.486 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.486 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.487 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.487 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.487 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.487 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.487 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.487 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.487 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.488 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.488 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.488 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.488 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.488 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.488 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.488 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.489 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.489 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.489 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.489 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
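[editor's note] Taken together, live_migration_downtime, _downtime_steps, and _downtime_delay describe a ramp: libvirt is allowed progressively more guest pause time, up to 500 ms, over 10 steps, with the delay between steps derived from the 75 s value (nova also scales the delay by the amount of guest data to transfer; details vary by release). A simplified linear sketch of that ramp, not nova's exact formula:

```python
# Simplified shape of the downtime ramp configured above: allowed downtime
# grows linearly to 500 ms over 10 steps. Nova's real step/delay formula
# also factors in guest size; this shows only the shape.
downtime_ms = 500   # libvirt.live_migration_downtime
steps = 10          # libvirt.live_migration_downtime_steps
delay_s = 75        # libvirt.live_migration_downtime_delay (base value)

for step in range(1, steps + 1):
    print(f"step {step:2d} at ~t={step * delay_s:4d}s: "
          f"allowed downtime {downtime_ms * step // steps} ms")
```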
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.489 229095 WARNING oslo_config.cfg [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: and ``live_migration_inbound_addr`` respectively.
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: ).  Its value may be silently ignored in the future.
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.489 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
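[editor's note] The WARNING above is oslo.config's standard notice for an option registered with deprecated_for_removal=True; it fires because this deployment still sets libvirt.live_migration_uri instead of the replacement pair live_migration_scheme / live_migration_inbound_addr. A toy reproduction of the mechanism, with the option name reused for illustration only:

```python
# Toy reproduction of the deprecation warning. oslo.config logs the
# "Deprecated: Option ..." message once a deprecated-for-removal option
# is actually assigned a value and read.
import logging
import tempfile

from oslo_config import cfg

logging.basicConfig(level=logging.WARNING)

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [
        cfg.StrOpt(
            'live_migration_uri',
            deprecated_for_removal=True,
            deprecated_reason='superseded by live_migration_scheme and '
                              'live_migration_inbound_addr',
        ),
    ],
    group='libvirt',
)

with tempfile.NamedTemporaryFile('w', suffix='.conf') as conf_file:
    conf_file.write('[libvirt]\nlive_migration_uri = qemu+ssh://nova@%s/system\n')
    conf_file.flush()
    CONF(args=[], default_config_files=[conf_file.name])
    print(CONF.libvirt.live_migration_uri)   # reading the value surfaces the warning
```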
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.490 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.490 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.490 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.490 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.490 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.490 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.490 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.491 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.491 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.491 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.491 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.491 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.491 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.491 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.492 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.492 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.492 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.492 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.rbd_secret_uuid        = 79feddb1-4bfc-557f-83b9-0d57c9f66c1b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.492 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
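[editor's note] With libvirt.images_type = rbd, instance disks live in the Ceph pool `vms`, accessed as client `openstack` using the cluster conf at /etc/ceph/ceph.conf; rbd_secret_uuid names the libvirt secret that holds that user's key. A quick sanity-check sketch with Ceph's Python bindings, assuming the rados/rbd modules shipped with Ceph are installed on the node:

```python
# Sanity-check of the RBD backend described above: connect with the same
# conf and user that nova uses, then list images in the "vms" pool.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',   # libvirt.images_rbd_ceph_conf
                      rados_id='openstack')             # libvirt.rbd_user
cluster.connect()
ioctx = cluster.open_ioctx('vms')                       # libvirt.images_rbd_pool
try:
    print(rbd.RBD().list(ioctx))                        # instance disks appear here
finally:
    ioctx.close()
    cluster.shutdown()
```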
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.492 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.492 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.493 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.493 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.493 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.493 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.493 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.493 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.493 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.493 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.494 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.494 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.494 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.494 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.494 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.494 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.495 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.495 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.495 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.495 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.495 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.495 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.496 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.496 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.496 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.496 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.496 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.496 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.496 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.497 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.497 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.497 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.497 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.497 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.497 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.498 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.498 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.498 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.498 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.498 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.498 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.498 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.499 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.499 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.499 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.499 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.499 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.499 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.499 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.499 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.500 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.500 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.500 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.500 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.500 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.500 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.500 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.501 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.501 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.501 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.501 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.501 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.501 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.501 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.502 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.502 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.502 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.502 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.502 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.502 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.502 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.502 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.503 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.503 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.503 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.503 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.503 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.503 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.503 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.504 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.504 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.504 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.504 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.504 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.504 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.504 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.505 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.505 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.505 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.505 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.505 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.505 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.505 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.505 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.506 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.506 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.506 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.506 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.506 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.506 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.507 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.507 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.507 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.507 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.507 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.507 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.507 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.508 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.508 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.508 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.508 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.508 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.508 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.508 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.508 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.509 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.509 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.509 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.509 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.509 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.509 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.510 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.510 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.510 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.510 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.510 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.510 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.510 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.511 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.511 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.511 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.511 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.511 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.511 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.511 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.512 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.512 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.512 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.512 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.512 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.512 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.512 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.512 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.513 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.513 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.513 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.513 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.513 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.513 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.513 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.514 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.514 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.514 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.514 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.514 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.514 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.514 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.515 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.515 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.515 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.515 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.515 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.515 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.515 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.516 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.516 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.516 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.516 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.516 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.516 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.516 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.516 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.517 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.517 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.517 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.517 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.517 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.517 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.518 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.518 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.518 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.518 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.518 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.518 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.518 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.518 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.519 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.519 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.519 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.519 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.519 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.519 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.519 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.520 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.520 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.520 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.520 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.520 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.520 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.520 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.521 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.521 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.521 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.521 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.521 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.521 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.521 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.522 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.522 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.522 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.522 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.522 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.522 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.522 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.522 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.523 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.523 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.523 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.523 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.523 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.523 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.523 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.524 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.524 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.524 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.524 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.524 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.524 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.525 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.525 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.525 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.525 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.525 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.525 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.525 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.526 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
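[editor's note] The vnc.* block above captures the console-access settings this node actually runs with: VNC enabled, the QEMU server listening on all addresses (::0), the proxy client address pointing at 192.168.122.107, and console URLs rewritten against the public noVNC route. Reconstructed as a nova.conf excerpt (option names and values verbatim from the dump; that they were set under this section header is the standard oslo.config group-to-section mapping, assumed here):

    [vnc]
    enabled = true
    novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    server_listen = ::0
    server_proxyclient_address = 192.168.122.107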
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.526 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.526 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.526 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.526 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.526 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.526 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.527 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.527 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.527 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.527 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.527 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.527 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.527 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.528 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.528 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.528 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.528 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.528 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.528 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.528 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.529 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
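[editor's note] Several workarounds.* toggles differ from what stock Nova of this vintage ships as defaults (an assumption based on upstream Antelope-era defaults): disable_compute_service_check_for_ffu skips the minimum-service-version startup check during fast-forward upgrades, enable_qemu_monitor_announce_self re-announces a migrated guest's MACs via the QEMU monitor (3 announcements, 1 s apart, per the two count/interval options beneath it), skip_cpu_compare_on_dest bypasses the destination-host CPU compatibility check before live migration, and reserve_disk_resource_for_image_cache accounts image-cache usage against the node's DISK_GB. As a reconstructed excerpt:

    [workarounds]
    disable_compute_service_check_for_ffu = true
    enable_qemu_monitor_announce_self = true
    reserve_disk_resource_for_image_cache = true
    skip_cpu_compare_on_dest = true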
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.529 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.529 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.529 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.529 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.529 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.529 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.530 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.530 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.530 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.530 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.530 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.530 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.530 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.531 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.531 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.531 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.531 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.531 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.531 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.531 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.532 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.532 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.532 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.532 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.532 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
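[editor's note] oslo_policy.enforce_new_defaults and enforce_scope are both True, so this service enforces the new scoped-RBAC policy defaults rather than the legacy rules, with operator overrides read from policy.yaml and any files under policy.d. The corresponding excerpt (a reconstruction from the dumped values):

    [oslo_policy]
    enforce_new_defaults = true
    enforce_scope = true
    policy_file = policy.yaml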
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.532 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.532 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.533 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.533 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.533 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.533 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.533 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.533 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.533 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.534 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.534 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.534 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.534 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.534 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.534 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.534 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.535 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.535 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.535 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.535 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.535 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.535 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.535 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.536 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.536 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.536 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.536 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.536 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.536 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.536 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.536 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.537 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.537 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.537 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.537 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
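[editor's note] The RabbitMQ transport above is set up for durable quorum queues (amqp_durable_queues and rabbit_quorum_queue both True) with SSL off; heartbeat_timeout_threshold = 60 together with heartbeat_rate = 2 means heartbeats are checked/sent roughly every 30 seconds. Reconstructed excerpt:

    [oslo_messaging_rabbit]
    amqp_durable_queues = true
    rabbit_quorum_queue = true
    heartbeat_timeout_threshold = 60
    heartbeat_rate = 2
    ssl = false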
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.537 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.537 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.538 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.538 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
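[editor's note] oslo_messaging_notifications.driver = ['noop'] routes every emitted notification to the no-op driver, so nothing is actually published to the 'notifications' topic; the transport_url appears as **** because the dump masks all options declared secret. Excerpt:

    [oslo_messaging_notifications]
    driver = noop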
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.538 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.538 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.538 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.538 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.538 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.539 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.539 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.539 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.539 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.539 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.539 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.539 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.539 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.540 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.540 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.540 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.540 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.540 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.540 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.540 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.541 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.541 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.541 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.541 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.541 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.541 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.541 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.542 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.542 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.542 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.542 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.542 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.542 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.542 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.543 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.543 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.543 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.543 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
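[editor's note] The oslo_limit.* block carries the keystone credentials Nova would use for unified limits: password auth against http://keystone-internal.openstack.svc:5000 as user nova in the Default domain, with system scope 'all' (endpoint_id is unset). The password is masked as **** in the dump and is deliberately left out below rather than guessed. Reconstructed excerpt:

    [oslo_limit]
    auth_type = password
    auth_url = http://keystone-internal.openstack.svc:5000
    username = nova
    user_domain_name = Default
    system_scope = all
    # password is logged as **** and is not recoverable from the dump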
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.543 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.543 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.543 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.544 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.544 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.544 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.544 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.544 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.544 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.544 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.544 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.545 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.545 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.545 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.545 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.545 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.545 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.545 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.546 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.546 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.546 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.546 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.546 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.546 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.546 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.546 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.547 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.547 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.547 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
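[editor's note] os-vif's OVS plugin is pointed at the local Open vSwitch database over TCP (tcp:127.0.0.1:6640) using the native Python OVSDB client rather than shelling out to ovs-vsctl, with a 120 s operation timeout. Excerpt:

    [os_vif_ovs]
    ovsdb_connection = tcp:127.0.0.1:6640
    ovsdb_interface = native
    ovs_vsctl_timeout = 120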
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.547 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.547 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.547 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.547 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.548 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.548 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.548 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.548 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.548 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.548 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.548 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.548 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.549 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.549 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.549 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.549 229095 DEBUG oslo_service.service [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.550 229095 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.561 229095 INFO nova.virt.node [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Determined node identity 2850b2c4-8d07-40ab-9d82-672172ca70fc from /var/lib/nova/compute_id
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.562 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.563 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.563 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.563 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 05 09:36:53 np0005546420.localdomain systemd[1]: Started libvirt QEMU daemon.
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.652 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f7777e1aac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.656 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f7777e1aac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.658 229095 INFO nova.virt.libvirt.driver [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Connection event '1' reason 'None'
Dec 05 09:36:53 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:53.684 229095 DEBUG nova.virt.libvirt.volume.mount [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.545 229095 INFO nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Libvirt host capabilities <capabilities>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <host>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <uuid>38a014e5-f211-4fa1-8868-c362af7c3bc6</uuid>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <cpu>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <arch>x86_64</arch>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model>EPYC-Rome-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <vendor>AMD</vendor>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <microcode version='16777317'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <signature family='23' model='49' stepping='0'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='x2apic'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='tsc-deadline'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='osxsave'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='hypervisor'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='tsc_adjust'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='spec-ctrl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='stibp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='arch-capabilities'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='cmp_legacy'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='topoext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='virt-ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='lbrv'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='tsc-scale'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='vmcb-clean'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='pause-filter'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='pfthreshold'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='svme-addr-chk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='rdctl-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='skip-l1dfl-vmentry'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='mds-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature name='pschange-mc-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <pages unit='KiB' size='4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <pages unit='KiB' size='2048'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <pages unit='KiB' size='1048576'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </cpu>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <power_management>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <suspend_mem/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <suspend_disk/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <suspend_hybrid/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </power_management>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <iommu support='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <migration_features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <live/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <uri_transports>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <uri_transport>tcp</uri_transport>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <uri_transport>rdma</uri_transport>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </uri_transports>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </migration_features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <topology>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <cells num='1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <cell id='0'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:           <memory unit='KiB'>16116612</memory>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:           <pages unit='KiB' size='4'>4029153</pages>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:           <pages unit='KiB' size='2048'>0</pages>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:           <distances>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:             <sibling id='0' value='10'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:           </distances>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:           <cpus num='8'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:           </cpus>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         </cell>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </cells>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </topology>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <cache>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </cache>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <secmodel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model>selinux</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <doi>0</doi>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </secmodel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <secmodel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model>dac</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <doi>0</doi>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </secmodel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </host>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <guest>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <os_type>hvm</os_type>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <arch name='i686'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <wordsize>32</wordsize>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <domain type='qemu'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <domain type='kvm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </arch>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <pae/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <nonpae/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <acpi default='on' toggle='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <apic default='on' toggle='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <cpuselection/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <deviceboot/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <disksnapshot default='on' toggle='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <externalSnapshot/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </guest>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <guest>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <os_type>hvm</os_type>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <arch name='x86_64'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <wordsize>64</wordsize>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <domain type='qemu'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <domain type='kvm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </arch>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <acpi default='on' toggle='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <apic default='on' toggle='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <cpuselection/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <deviceboot/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <disksnapshot default='on' toggle='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <externalSnapshot/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </guest>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: </capabilities>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.555 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.582 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: <domainCapabilities>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <domain>kvm</domain>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <arch>i686</arch>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <vcpu max='1024'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <iothreads supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <os supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <enum name='firmware'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <loader supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>rom</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pflash</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='readonly'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>yes</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>no</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='secure'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>no</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </loader>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </os>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <cpu>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>on</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>off</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='maximum' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='maximumMigratable'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>on</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>off</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='host-model' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <vendor>AMD</vendor>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='x2apic'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='stibp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='succor'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='lbrv'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='custom' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Dhyana-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Genoa'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='auto-ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='auto-ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-128'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-256'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-512'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='KnightsMill'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4fmaps'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4vnniw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512er'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512pf'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='KnightsMill-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4fmaps'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4vnniw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512er'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512pf'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tbm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tbm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SierraForest'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ne-convert'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cmpccxadd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SierraForest-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ne-convert'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cmpccxadd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='athlon'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='athlon-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='core2duo'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='core2duo-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='coreduo'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='coreduo-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='n270'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='n270-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='phenom'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='phenom-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </cpu>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <memoryBacking supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <enum name='sourceType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>file</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>anonymous</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>memfd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </memoryBacking>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <devices>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <disk supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='diskDevice'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>disk</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>cdrom</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>floppy</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>lun</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='bus'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>fdc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>scsi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>sata</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-non-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </disk>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <graphics supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vnc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>egl-headless</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dbus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </graphics>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <video supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='modelType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vga</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>cirrus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>none</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>bochs</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>ramfb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </video>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <hostdev supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='mode'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>subsystem</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='startupPolicy'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>default</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>mandatory</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>requisite</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>optional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='subsysType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pci</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>scsi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='capsType'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='pciBackend'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </hostdev>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <rng supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-non-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>random</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>egd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>builtin</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </rng>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <filesystem supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='driverType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>path</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>handle</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtiofs</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </filesystem>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <tpm supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tpm-tis</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tpm-crb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>emulator</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>external</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendVersion'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>2.0</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </tpm>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <redirdev supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='bus'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </redirdev>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <channel supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pty</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>unix</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </channel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <crypto supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>qemu</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>builtin</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </crypto>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <interface supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>default</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>passt</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </interface>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <panic supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>isa</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>hyperv</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </panic>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <console supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>null</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pty</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dev</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>file</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pipe</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>stdio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>udp</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tcp</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>unix</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>qemu-vdagent</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dbus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </console>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </devices>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <gic supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <vmcoreinfo supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <genid supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <backingStoreInput supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <backup supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <async-teardown supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <ps2 supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <sev supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <sgx supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <hyperv supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='features'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>relaxed</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vapic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>spinlocks</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vpindex</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>runtime</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>synic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>stimer</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>reset</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vendor_id</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>frequencies</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>reenlightenment</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tlbflush</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>ipi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>avic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>emsr_bitmap</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>xmm_input</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <defaults>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <spinlocks>4095</spinlocks>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <stimer_direct>on</stimer_direct>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </defaults>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </hyperv>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <launchSecurity supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='sectype'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tdx</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </launchSecurity>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: </domainCapabilities>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
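[editor's note] The lines above close one libvirt <domainCapabilities> document; nova's _get_domain_capabilities (host.py:1037, referenced in the log line above) fetches and caches one such document per (emulator, arch, machine type, virt type) combination, and the next dump below is the same query for arch=i686 and machine_type=pc. For reference, a minimal sketch of retrieving that document directly with the libvirt Python bindings — an illustration, not nova's code; the emulator path and arch/machine/virttype values simply mirror the i686 dump that follows:

    # Sketch: fetch the domainCapabilities XML nova logs at debug level.
    # Assumes the libvirt Python bindings are installed and libvirtd is local.
    import libvirt

    conn = libvirt.open('qemu:///system')  # local QEMU/KVM driver
    # getDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'i686', 'pc', 'kvm', 0)
    print(xml)  # same <domainCapabilities> document as in this log
    conn.close()

The equivalent one-liner from a shell is `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine pc --virttype kvm`.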
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.592 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: <domainCapabilities>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <domain>kvm</domain>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <arch>i686</arch>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <vcpu max='240'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <iothreads supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <os supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <enum name='firmware'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <loader supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>rom</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pflash</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='readonly'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>yes</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>no</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='secure'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>no</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </loader>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </os>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <cpu>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>on</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>off</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='maximum' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='maximumMigratable'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>on</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>off</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='host-model' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <vendor>AMD</vendor>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='x2apic'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='stibp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='succor'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='lbrv'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='custom' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Dhyana-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Genoa'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='auto-ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='auto-ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-128'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-256'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-512'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='KnightsMill'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4fmaps'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4vnniw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512er'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512pf'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='KnightsMill-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4fmaps'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4vnniw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512er'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512pf'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tbm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tbm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SierraForest'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ne-convert'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cmpccxadd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SierraForest-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ne-convert'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cmpccxadd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='athlon'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='athlon-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='core2duo'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='core2duo-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='coreduo'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='coreduo-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='n270'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='n270-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='phenom'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='phenom-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </cpu>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <memoryBacking supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <enum name='sourceType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>file</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>anonymous</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>memfd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </memoryBacking>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <devices>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <disk supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='diskDevice'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>disk</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>cdrom</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>floppy</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>lun</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='bus'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>ide</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>fdc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>scsi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>sata</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-non-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </disk>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <graphics supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vnc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>egl-headless</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dbus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </graphics>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <video supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='modelType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vga</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>cirrus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>none</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>bochs</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>ramfb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </video>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <hostdev supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='mode'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>subsystem</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='startupPolicy'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>default</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>mandatory</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>requisite</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>optional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='subsysType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pci</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>scsi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='capsType'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='pciBackend'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </hostdev>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <rng supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-non-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>random</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>egd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>builtin</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </rng>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <filesystem supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='driverType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>path</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>handle</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtiofs</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </filesystem>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <tpm supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tpm-tis</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tpm-crb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>emulator</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>external</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendVersion'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>2.0</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </tpm>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <redirdev supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='bus'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </redirdev>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <channel supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pty</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>unix</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </channel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <crypto supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>qemu</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>builtin</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </crypto>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <interface supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>default</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>passt</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </interface>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <panic supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>isa</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>hyperv</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </panic>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <console supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>null</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pty</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dev</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>file</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pipe</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>stdio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>udp</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tcp</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>unix</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>qemu-vdagent</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dbus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </console>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </devices>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <gic supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <vmcoreinfo supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <genid supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <backingStoreInput supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <backup supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <async-teardown supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <ps2 supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <sev supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <sgx supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <hyperv supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='features'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>relaxed</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vapic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>spinlocks</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vpindex</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>runtime</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>synic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>stimer</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>reset</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vendor_id</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>frequencies</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>reenlightenment</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tlbflush</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>ipi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>avic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>emsr_bitmap</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>xmm_input</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <defaults>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <spinlocks>4095</spinlocks>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <stimer_direct>on</stimer_direct>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </defaults>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </hyperv>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <launchSecurity supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='sectype'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tdx</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </launchSecurity>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: </domainCapabilities>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.635 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.641 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: <domainCapabilities>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <domain>kvm</domain>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <arch>x86_64</arch>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <vcpu max='1024'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <iothreads supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <os supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <enum name='firmware'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>efi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <loader supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>rom</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pflash</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='readonly'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>yes</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>no</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='secure'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>yes</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>no</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </loader>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </os>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <cpu>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>on</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>off</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='maximum' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='maximumMigratable'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>on</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>off</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='host-model' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <vendor>AMD</vendor>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='x2apic'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='stibp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='succor'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='lbrv'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='custom' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Dhyana-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Genoa'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='auto-ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='auto-ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-128'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-256'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-512'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='KnightsMill'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4fmaps'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4vnniw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512er'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512pf'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='KnightsMill-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4fmaps'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4vnniw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512er'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512pf'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tbm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tbm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SierraForest'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ne-convert'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cmpccxadd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SierraForest-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ne-convert'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cmpccxadd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='athlon'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='athlon-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='core2duo'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='core2duo-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='coreduo'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='coreduo-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='n270'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='n270-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='phenom'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='phenom-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </cpu>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <memoryBacking supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <enum name='sourceType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>file</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>anonymous</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>memfd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </memoryBacking>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <devices>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <disk supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='diskDevice'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>disk</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>cdrom</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>floppy</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>lun</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='bus'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>fdc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>scsi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>sata</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-non-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </disk>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <graphics supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vnc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>egl-headless</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dbus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </graphics>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <video supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='modelType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vga</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>cirrus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>none</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>bochs</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>ramfb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </video>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <hostdev supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='mode'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>subsystem</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='startupPolicy'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>default</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>mandatory</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>requisite</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>optional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='subsysType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pci</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>scsi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='capsType'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='pciBackend'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </hostdev>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <rng supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-non-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>random</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>egd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>builtin</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </rng>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <filesystem supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='driverType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>path</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>handle</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtiofs</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </filesystem>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <tpm supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tpm-tis</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tpm-crb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>emulator</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>external</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendVersion'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>2.0</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </tpm>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <redirdev supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='bus'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </redirdev>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <channel supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pty</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>unix</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </channel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <crypto supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>qemu</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>builtin</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </crypto>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <interface supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>default</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>passt</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </interface>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <panic supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>isa</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>hyperv</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </panic>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <console supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>null</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pty</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dev</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>file</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pipe</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>stdio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>udp</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tcp</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>unix</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>qemu-vdagent</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dbus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </console>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </devices>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <gic supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <vmcoreinfo supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <genid supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <backingStoreInput supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <backup supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <async-teardown supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <ps2 supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <sev supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <sgx supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <hyperv supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='features'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>relaxed</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vapic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>spinlocks</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vpindex</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>runtime</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>synic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>stimer</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>reset</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vendor_id</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>frequencies</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>reenlightenment</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tlbflush</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>ipi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>avic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>emsr_bitmap</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>xmm_input</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <defaults>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <spinlocks>4095</spinlocks>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <stimer_direct>on</stimer_direct>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </defaults>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </hyperv>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <launchSecurity supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='sectype'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tdx</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </launchSecurity>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: </domainCapabilities>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
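[The domainCapabilities XML logged above is what nova-compute's _get_domain_capabilities retrieves from libvirt once per (emulator, arch, machine type, virt type) tuple. A minimal sketch of fetching and filtering the same data outside of nova, assuming the libvirt-python bindings are installed and a local 'qemu:///system' connection is reachable; the URI and the printed labels are illustrative, not taken from the log:

    #!/usr/bin/env python3
    # Sketch: query libvirt for the same domainCapabilities XML that
    # nova.virt.libvirt.host logs above, then list the custom-mode CPU
    # models the host can run (usable='yes') and the feature blockers
    # reported for the rest. Assumes libvirt-python; 'qemu:///system'
    # is an assumed URI for a local system connection.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    caps = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator binary, as in <path> above
        'x86_64',                 # arch
        'pc',                     # machine type (the second dump above)
        'kvm')                    # virt type
    root = ET.fromstring(caps)

    # <model usable='yes' ...>Name</model> entries under the custom mode:
    for model in root.findall(".//mode[@name='custom']/model"):
        if model.get('usable') == 'yes':
            print('usable:', model.text)

    # <blockers model='...'> lists the features the host lacks for a model:
    for blk in root.findall(".//mode[@name='custom']/blockers"):
        feats = [f.get('name') for f in blk.findall('feature')]
        print('blocked:', blk.get('model'), '->', ', '.join(feats))

    conn.close()

The same XML can be inspected from a shell with virsh, e.g. virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine pc --virttype kvm, which is a quick way to reproduce these dumps when debugging CPU-model selection on a compute node.]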
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.698 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: <domainCapabilities>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <domain>kvm</domain>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <arch>x86_64</arch>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <vcpu max='240'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <iothreads supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <os supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <enum name='firmware'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <loader supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>rom</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pflash</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='readonly'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>yes</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>no</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='secure'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>no</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </loader>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </os>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <cpu>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>on</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>off</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='maximum' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='maximumMigratable'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>on</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>off</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='host-model' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <vendor>AMD</vendor>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='x2apic'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='stibp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='succor'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='lbrv'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
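The <mode> blocks above are the CPU section of the libvirt domain-capabilities document that nova-compute fetches while building its view of the host: host-passthrough and maximum are supported with migratable on/off, and host-model resolves to an EPYC-Rome baseline that disables xsaves. As a minimal sketch (not Nova's actual code path) of retrieving the same XML outside of Nova, assuming libvirt-python is installed and a local qemu:///system libvirtd is reachable:

    import libvirt
    import xml.etree.ElementTree as ET

    # Assumed connection URI; equivalent to `virsh domcapabilities` on the CLI.
    conn = libvirt.open('qemu:///system')

    # Passing None lets libvirt pick the default emulator, arch, machine and virt type.
    caps_xml = conn.getDomainCapabilities(None, None, None, None, 0)
    root = ET.fromstring(caps_xml)

    # Mirror the <mode> elements seen in this log.
    for mode in root.findall('./cpu/mode'):
        print(mode.get('name'), mode.get('supported'))

    conn.close()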
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <mode name='custom' supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Broadwell-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
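Every Intel model up to this point is reported usable='no', and the accompanying <blockers> elements show why: this AMD EPYC-Rome host lacks the AVX-512 group, TSX (hle/rtm), pcid/invpcid, pku and erms that the Broadwell and Cascadelake definitions require. Continuing the sketch above ('root' carried over; element and attribute names taken from this log), the custom-mode list folds into a usable/blocked summary:

    # Continue from the previous snippet: 'root' is the parsed capabilities XML.
    custom = root.find("./cpu/mode[@name='custom']")

    # Map each blocked model to the host-missing features in its <blockers> element.
    blocked = {b.get('model'): [f.get('name') for f in b.findall('feature')]
               for b in custom.findall('blockers')}
    usable = sorted(m.text for m in custom.findall('model') if m.get('usable') == 'yes')

    print('usable models:', usable)
    print('Cascadelake-Server-v1 blockers:', blocked['Cascadelake-Server-v1'])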
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Cooperlake-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Denverton-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Dhyana-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Genoa'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='auto-ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='auto-ibrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amd-psfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='no-nested-data-bp'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='null-sel-clr-base'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='stibp-always-on'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='EPYC-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
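Within the EPYC family the pattern is narrower: EPYC-v1/v2 and EPYC-Rome-v4 are usable, while v3/v4 of EPYC and v1 through v3 of EPYC-Rome are each blocked by xsaves alone, which lines up with the policy='disable' xsaves entry in the host-model block (Milan and Genoa additionally need features this host does not expose). A short follow-up on the same 'blocked' mapping makes that visible (again a sketch, reusing the variables defined above):

    from collections import Counter

    # How often each missing feature appears across all blocked models.
    counts = Counter(f for feats in blocked.values() for f in feats)
    print(counts.most_common(5))

    # Expected on this host: ['xsaves'] for every EPYC-Rome version below v4.
    print(blocked.get('EPYC-Rome-v3'))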
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-128'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-256'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx10-512'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='prefetchiti'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Haswell-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='IvyBridge-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='KnightsMill'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4fmaps'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4vnniw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512er'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512pf'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='KnightsMill-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4fmaps'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-4vnniw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512er'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512pf'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tbm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fma4'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tbm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xop'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='amx-tile'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-bf16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-fp16'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bitalg'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vbmi2'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrc'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fzrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='la57'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='taa-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='tsx-ldtrk'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xfd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SierraForest'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ne-convert'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cmpccxadd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='SierraForest-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ifma'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-ne-convert'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx-vnni-int8'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='bus-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cmpccxadd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fbsdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='fsrs'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ibrs-all'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mcdt-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pbrsb-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='psdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='serialize'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vaes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='vpclmulqdq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='hle'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='rtm'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512bw'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512cd'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512dq'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512f'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='avx512vl'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='invpcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pcid'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='pku'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='mpx'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v2'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v3'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='core-capability'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='split-lock-detect'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='Snowridge-v4'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='cldemote'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='erms'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='gfni'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdir64b'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='movdiri'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='xsaves'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='athlon'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='athlon-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='core2duo'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='core2duo-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='coreduo'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='coreduo-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='n270'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='n270-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='ss'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='phenom'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <blockers model='phenom-v1'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnow'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <feature name='3dnowext'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </blockers>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </mode>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </cpu>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <memoryBacking supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <enum name='sourceType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>file</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>anonymous</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <value>memfd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </memoryBacking>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <devices>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <disk supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='diskDevice'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>disk</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>cdrom</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>floppy</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>lun</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='bus'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>ide</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>fdc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>scsi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>sata</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-non-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </disk>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <graphics supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vnc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>egl-headless</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dbus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </graphics>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <video supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='modelType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vga</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>cirrus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>none</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>bochs</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>ramfb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </video>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <hostdev supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='mode'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>subsystem</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='startupPolicy'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>default</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>mandatory</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>requisite</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>optional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='subsysType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pci</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>scsi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='capsType'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='pciBackend'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </hostdev>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <rng supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtio-non-transitional</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>random</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>egd</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>builtin</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </rng>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <filesystem supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='driverType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>path</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>handle</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>virtiofs</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </filesystem>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <tpm supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tpm-tis</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tpm-crb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>emulator</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>external</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendVersion'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>2.0</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </tpm>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <redirdev supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='bus'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>usb</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </redirdev>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <channel supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pty</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>unix</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </channel>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <crypto supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>qemu</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendModel'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>builtin</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </crypto>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <interface supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='backendType'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>default</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>passt</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </interface>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <panic supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='model'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>isa</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>hyperv</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </panic>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <console supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='type'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>null</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vc</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pty</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dev</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>file</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>pipe</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>stdio</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>udp</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tcp</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>unix</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>qemu-vdagent</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>dbus</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </console>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </devices>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   <features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <gic supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <vmcoreinfo supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <genid supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <backingStoreInput supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <backup supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <async-teardown supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <ps2 supported='yes'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <sev supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <sgx supported='no'/>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <hyperv supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='features'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>relaxed</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vapic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>spinlocks</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vpindex</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>runtime</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>synic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>stimer</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>reset</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>vendor_id</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>frequencies</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>reenlightenment</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tlbflush</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>ipi</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>avic</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>emsr_bitmap</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>xmm_input</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <defaults>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <spinlocks>4095</spinlocks>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <stimer_direct>on</stimer_direct>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </defaults>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </hyperv>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     <launchSecurity supported='yes'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       <enum name='sectype'>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:         <value>tdx</value>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:       </enum>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:     </launchSecurity>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:   </features>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: </domainCapabilities>
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
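The dump that ends above is libvirt's domainCapabilities document, which nova fetches to learn which CPU models the host can expose and which features block the rest. A minimal sketch, assuming the XML has been saved to domcaps.xml (with libvirt-python the same text comes from conn.getDomainCapabilities()) and that the model list sits under the usual <mode name='custom'> element, of pulling out the usable models and the per-model blockers:

    # Illustrative sketch, not nova's code: parse a saved domainCapabilities
    # dump for usable CPU models and the features blocking the others.
    import xml.etree.ElementTree as ET

    root = ET.parse("domcaps.xml").getroot()
    mode = root.find("./cpu/mode[@name='custom']")  # assumed mode name

    usable, blocked = [], {}
    if mode is not None:
        for model in mode.findall("model"):
            if model.get("usable") == "yes":
                usable.append(model.text)
        for blockers in mode.findall("blockers"):
            feats = [f.get("name") for f in blockers.findall("feature")]
            blocked[blockers.get("model")] = feats

    print("usable models:", usable)                       # e.g. Westmere, qemu64
    print("Skylake-Server blocked by:", blocked.get("Skylake-Server"))

Run against the dump above this would report the Westmere variants as usable and list avx512*/erms/hle/invpcid/pcid/pku/rtm as the Skylake-Server blockers, matching the log verbatim.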
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.745 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.745 229095 INFO nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Secure Boot support detected
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.748 229095 INFO nova.virt.libvirt.driver [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.762 229095 DEBUG nova.virt.libvirt.driver [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
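The vTPM check above succeeds only when emulated TPM support is actually usable. A minimal sketch of such a probe, assuming the relevant signals are the swtpm emulator binary on PATH plus the <tpm> element advertised in the domainCapabilities XML earlier (models tpm-tis/tpm-crb, backendModel 'emulator'); nova's real check also consults its own configuration:

    # Illustrative sketch: combine the domcaps <tpm> advertisement with
    # a check that the swtpm emulator is installed.
    import shutil
    import xml.etree.ElementTree as ET

    def vtpm_possible(domcaps_xml: str) -> bool:
        root = ET.fromstring(domcaps_xml)
        tpm = root.find("./devices/tpm")
        if tpm is None or tpm.get("supported") != "yes":
            return False
        backends = {v.text for v in
                    tpm.findall("./enum[@name='backendModel']/value")}
        return "emulator" in backends and shutil.which("swtpm") is not None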
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.784 229095 INFO nova.virt.node [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Determined node identity 2850b2c4-8d07-40ab-9d82-672172ca70fc from /var/lib/nova/compute_id
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.802 229095 DEBUG nova.compute.manager [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Verified node 2850b2c4-8d07-40ab-9d82-672172ca70fc matches my host np0005546420.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
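The two messages above read the stable node identity back from disk and verify it against the hostname. A minimal sketch of that read, assuming /var/lib/nova/compute_id holds a single UUID string as logged (2850b2c4-...):

    # Illustrative sketch: load and validate the persisted compute node id.
    import uuid
    from pathlib import Path

    def read_compute_id(path="/var/lib/nova/compute_id") -> uuid.UUID:
        text = Path(path).read_text().strip()
        return uuid.UUID(text)  # raises ValueError if the file is corrupt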
Dec 05 09:36:54 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:54.830 229095 INFO nova.compute.manager [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 05 09:36:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47080 DF PROTO=TCP SPT=45782 DPT=9102 SEQ=3084634905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC469D90000000001030307) 
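The kernel 'DROPPING:' line above is emitted by a netfilter LOG rule on br-ex. When triaging such drops, the interesting tuple is interface, source/destination, protocol, and ports; a small regex sketch for extracting them from the key=value layout visible in the line (note the anchors avoid the MACSRC/MACDST/MACPROTO look-alikes):

    # Illustrative sketch: parse a netfilter LOG line like the one above.
    import re

    DROP_RE = re.compile(
        r"DROPPING: IN=(?P<iface>\S*)"
        r".*\sSRC=(?P<src>[\d.]+)\sDST=(?P<dst>[\d.]+)"
        r".*\sPROTO=(?P<proto>\S+)\sSPT=(?P<spt>\d+)\sDPT=(?P<dpt>\d+)"
    )

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 "
            "SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 "
            "PREC=0x00 TTL=62 ID=47080 DF PROTO=TCP SPT=45782 DPT=9102")
    m = DROP_RE.search(line)
    if m:
        print(m.groupdict())
        # {'iface': 'br-ex', 'src': '192.168.122.10',
        #  'dst': '192.168.122.107', 'proto': 'TCP',
        #  'spt': '45782', 'dpt': '9102'}

Here the dropped packet is a TCP SYN to port 9102, which matches the SYN flag and DPT in the logged line.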
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.175 229095 INFO nova.service [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Updating service version for nova-compute on np0005546420.localdomain from 57 to 66
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.216 229095 DEBUG oslo_concurrency.lockutils [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.217 229095 DEBUG oslo_concurrency.lockutils [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.217 229095 DEBUG oslo_concurrency.lockutils [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.218 229095 DEBUG nova.compute.resource_tracker [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.218 229095 DEBUG oslo_concurrency.processutils [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:36:55 np0005546420.localdomain python3.9[229288]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.679 229095 DEBUG oslo_concurrency.processutils [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
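The subprocess above is how this Ceph-backed node learns its storage headroom. A minimal sketch of the same probe, assuming (as recent Ceph releases do) that the JSON carries cluster-wide byte counters under 'stats':

    # Illustrative sketch: run the same 'ceph df' command as the log and
    # derive free capacity from the JSON it returns.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]          # assumed schema key
    free_gb = stats["total_avail_bytes"] / 1024**3
    print(f"free: {free_gb:.2f} GiB")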
Dec 05 09:36:55 np0005546420.localdomain systemd[1]: Started libvirt nodedev daemon.
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.997 229095 WARNING nova.virt.libvirt.driver [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.998 229095 DEBUG nova.compute.resource_tracker [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=13603MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
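The pci_devices payload embedded in the resource view above is plain JSON; the full line carries 11 entries. Two representative entries are reproduced below, and a Counter gives the quick per-vendor tally (8086 = Intel chipset functions, 1af4 = virtio devices) that helps when skimming that line:

    # Illustrative sketch: summarize the pci_devices JSON by vendor.
    import json
    from collections import Counter

    payload = '''[
      {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0",
       "product_id": "1050", "vendor_id": "1af4", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0",
       "product_id": "7000", "vendor_id": "8086", "dev_type": "type-PCI"}
    ]'''
    print(Counter(d["vendor_id"] for d in json.loads(payload)))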
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.998 229095 DEBUG oslo_concurrency.lockutils [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:36:55 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:55.999 229095 DEBUG oslo_concurrency.lockutils [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.141 229095 DEBUG nova.compute.resource_tracker [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.141 229095 DEBUG nova.compute.resource_tracker [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.195 229095 DEBUG nova.scheduler.client.report [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.215 229095 DEBUG nova.scheduler.client.report [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.216 229095 DEBUG nova.compute.provider_tree [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
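Placement turns the inventory dict logged above into effective capacity as (total - reserved) * allocation_ratio. Worked through with the exact numbers from the log:

    # Illustrative sketch: capacity placement derives from the logged
    # inventory, using (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0

So the 8 physical vCPUs schedule as 128 thanks to the 16.0 allocation ratio, while memory and disk are effectively unovercommitted.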
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.231 229095 DEBUG nova.scheduler.client.report [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.259 229095 DEBUG nova.scheduler.client.report [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: HW_CPU_X86_AVX,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_SSE42,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AVX2,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.276 229095 DEBUG oslo_concurrency.processutils [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:36:56 np0005546420.localdomain python3.9[229688]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.755 229095 DEBUG oslo_concurrency.processutils [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.762 229095 DEBUG nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.763 229095 INFO nova.virt.libvirt.host [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] kernel doesn't support AMD SEV
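The two messages above come from a sysfs probe: the kvm_amd module parameter file contains 'N', so SEV is reported unsupported. A minimal sketch of that check (treating '1' or 'Y' as enabled is an assumption covering both older and newer kernels):

    # Illustrative sketch: read the kvm_amd SEV module parameter, as the
    # DEBUG/INFO pair above does.
    from pathlib import Path

    def kernel_supports_amd_sev() -> bool:
        p = Path("/sys/module/kvm_amd/parameters/sev")
        try:
            return p.read_text().strip().lower() in ("1", "y")
        except FileNotFoundError:
            return False  # kvm_amd not loaded at all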
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.765 229095 DEBUG nova.compute.provider_tree [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.765 229095 DEBUG nova.virt.libvirt.driver [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.787 229095 DEBUG nova.scheduler.client.report [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
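The inventory record above is what placement uses for admission control; usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check of the numbers in the log line:

    # Effective capacity implied by the inventory reported above:
    # usable = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g} schedulable")
    # -> VCPU: 128, MEMORY_MB: 15226, DISK_GB: 41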
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.856 229095 DEBUG nova.compute.provider_tree [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Updating resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.978 229095 DEBUG nova.compute.resource_tracker [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.979 229095 DEBUG oslo_concurrency.lockutils [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:36:56 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:56.980 229095 DEBUG nova.service [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 05 09:36:57 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:57.024 229095 DEBUG nova.service [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 05 09:36:57 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:36:57.024 229095 DEBUG nova.servicegroup.drivers.db [None req-24773441-f6c0-4c54-9cf5-f255113d7ec5 - - - - - -] DB_Driver: join new ServiceGroup member np0005546420.localdomain to the compute group, service = <Service: host=np0005546420.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 05 09:36:57 np0005546420.localdomain python3.9[229818]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:36:59 np0005546420.localdomain sudo[229926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxainkoiemthgxywpgzubjuwkuhrckpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927418.7764359-4265-101507905912478/AnsiballZ_podman_container.py
Dec 05 09:36:59 np0005546420.localdomain sudo[229926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:36:59 np0005546420.localdomain python3.9[229928]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 05 09:36:59 np0005546420.localdomain sudo[229926]: pam_unix(sudo:session): session closed for user root
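The podman_container task above, with state=absent and force_delete=True, reduces to a forced removal of the named container. Roughly the CLI equivalent, as a sketch rather than the module's actual code path:

    # Rough equivalent of the state=absent / force_delete=True task above;
    # the real Ansible module wraps the podman CLI with extra checks.
    import subprocess

    def remove_container(name: str) -> None:
        # `podman rm --force` stops a running container first;
        # `--ignore` turns a missing container into a no-op.
        subprocess.run(["podman", "rm", "--force", "--ignore", name],
                       check=True)

    remove_container("nova_nvme_cleaner")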
Dec 05 09:36:59 np0005546420.localdomain systemd-journald[48245]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 122.2 (407 of 333 items), suggesting rotation.
Dec 05 09:36:59 np0005546420.localdomain systemd-journald[48245]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 05 09:36:59 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:37:00 np0005546420.localdomain sudo[230060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfpkjjceccoinjwjihbnznzispjuuvgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927419.9140897-4289-71274102200842/AnsiballZ_systemd.py
Dec 05 09:37:00 np0005546420.localdomain sudo[230060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:00 np0005546420.localdomain python3.9[230062]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:37:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51539 DF PROTO=TCP SPT=33900 DPT=9102 SEQ=1622881935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC47F9A0000000001030307) 
Dec 05 09:37:00 np0005546420.localdomain systemd[1]: Stopping nova_compute container...
Dec 05 09:37:01 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:37:01.143 229095 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Dec 05 09:37:01 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:37:01.146 229095 DEBUG oslo_concurrency.lockutils [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 09:37:01 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:37:01.146 229095 DEBUG oslo_concurrency.lockutils [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 09:37:01 np0005546420.localdomain nova_compute[229091]: 2025-12-05 09:37:01.146 229095 DEBUG oslo_concurrency.lockutils [None req-8660b577-a799-4b83-a264-90cb12b09884 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 09:37:01 np0005546420.localdomain virtqemud[229316]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 05 09:37:01 np0005546420.localdomain virtqemud[229316]: hostname: np0005546420.localdomain
Dec 05 09:37:01 np0005546420.localdomain virtqemud[229316]: End of file while reading data: Input/output error
Dec 05 09:37:01 np0005546420.localdomain systemd[1]: libpod-2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86.scope: Deactivated successfully.
Dec 05 09:37:01 np0005546420.localdomain systemd[1]: libpod-2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86.scope: Consumed 4.189s CPU time.
Dec 05 09:37:01 np0005546420.localdomain podman[230066]: 2025-12-05 09:37:01.684489884 +0000 UTC m=+1.123703199 container died 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.schema-version=1.0)
Dec 05 09:37:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86-userdata-shm.mount: Deactivated successfully.
Dec 05 09:37:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517-merged.mount: Deactivated successfully.
Dec 05 09:37:01 np0005546420.localdomain podman[230066]: 2025-12-05 09:37:01.750031782 +0000 UTC m=+1.189245057 container cleanup 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:37:01 np0005546420.localdomain podman[230066]: nova_compute
Dec 05 09:37:01 np0005546420.localdomain podman[230104]: error opening file `/run/crun/2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86/status`: No such file or directory
Dec 05 09:37:01 np0005546420.localdomain podman[230092]: 2025-12-05 09:37:01.827732336 +0000 UTC m=+0.048892988 container cleanup 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.build-date=20251125)
Dec 05 09:37:01 np0005546420.localdomain podman[230092]: nova_compute
Dec 05 09:37:01 np0005546420.localdomain systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 05 09:37:01 np0005546420.localdomain systemd[1]: Stopped nova_compute container.
Dec 05 09:37:01 np0005546420.localdomain systemd[1]: Starting nova_compute container...
Dec 05 09:37:01 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:37:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 09:37:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 05 09:37:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 09:37:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 09:37:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 09:37:01 np0005546420.localdomain podman[230106]: 2025-12-05 09:37:01.956089831 +0000 UTC m=+0.101184386 container init 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3)
Dec 05 09:37:01 np0005546420.localdomain podman[230106]: 2025-12-05 09:37:01.966431306 +0000 UTC m=+0.111525861 container start 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 09:37:01 np0005546420.localdomain podman[230106]: nova_compute
Dec 05 09:37:01 np0005546420.localdomain nova_compute[230120]: + sudo -E kolla_set_configs
Dec 05 09:37:01 np0005546420.localdomain systemd[1]: Started nova_compute container.
Dec 05 09:37:02 np0005546420.localdomain sudo[230060]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Validating config file
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying service configuration files
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Deleting /etc/ceph
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Creating directory /etc/ceph
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/ceph
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Writing out command to execute
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: ++ cat /run_command
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: + CMD=nova-compute
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: + ARGS=
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: + sudo kolla_copy_cacerts
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: + [[ ! -n '' ]]
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: + . kolla_extend_start
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: + echo 'Running command: '\''nova-compute'\'''
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: Running command: 'nova-compute'
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: + umask 0022
Dec 05 09:37:02 np0005546420.localdomain nova_compute[230120]: + exec nova-compute
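The whole kolla_set_configs / kolla_start sequence above is driven by /var/lib/kolla/config_files/config.json inside the container. A minimal sketch of what such a file contains for this container; the source/dest pairs mirror the copy operations in the log, while the owner/perm values are assumptions:

    # Sketch of the config.json consumed by kolla_set_configs above;
    # paths are from the log, owner/perm are assumed, list is truncated.
    import json

    config = {
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/nova-blank.conf",
             "dest": "/etc/nova/nova.conf", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/ssh-privatekey",
             "dest": "/var/lib/nova/.ssh/ssh-privatekey",
             "owner": "nova", "perm": "0600"},
        ],
    }
    print(json.dumps(config, indent=2))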
Dec 05 09:37:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4736 DF PROTO=TCP SPT=54782 DPT=9882 SEQ=2642350026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC487DA0000000001030307) 
Dec 05 09:37:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:37:02 np0005546420.localdomain podman[230149]: 2025-12-05 09:37:02.781331712 +0000 UTC m=+0.105115643 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 09:37:02 np0005546420.localdomain podman[230149]: 2025-12-05 09:37:02.817605718 +0000 UTC m=+0.141389569 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 05 09:37:02 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:37:03 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:03.851 230124 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 09:37:03 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:03.852 230124 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 09:37:03 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:03.852 230124 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 09:37:03 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:03.852 230124 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.006 230124 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.032 230124 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.032 230124 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
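The failed grep above is the iSCSI connector probing whether iscsiadm supports manual scans by looking for the node.session.scan option string in the binary; since kolla just replaced /usr/sbin/iscsiadm with the run-on-host wrapper (and /sbin is /usr/sbin on EL9), the string is absent and the probe exits 1. The check itself is simple:

    # Sketch of the manual-scan probe: grep the iscsiadm binary for the
    # node.session.scan string; exit status 1 just means "not found".
    import subprocess

    def supports_manual_scan(path: str = "/sbin/iscsiadm") -> bool:
        res = subprocess.run(["grep", "-F", "node.session.scan", path],
                             stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL)
        return res.returncode == 0

    print(supports_manual_scan())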
Dec 05 09:37:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:37:04.082 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:37:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:37:04.083 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:37:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:37:04.083 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:37:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27552 DF PROTO=TCP SPT=52138 DPT=9105 SEQ=1076865942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC48DD90000000001030307) 
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.446 230124 INFO nova.virt.driver [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.560 230124 INFO nova.compute.provider_config [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.567 230124 WARNING nova.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.
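The warning above comes from nova's startup guard, which compares the oldest compute service version recorded in the cell (57 here) against the oldest version this release still supports (61). In outline, using the numbers from the log line:

    # Outline of the startup guard behind the TooOldComputeService warning;
    # the version numbers are taken from the log line above.
    OLDEST_SUPPORTED_SERVICE_VERSION = 61  # what this nova release requires
    minimum_in_cell = 57                   # oldest nova-compute in the cell DB

    if minimum_in_cell < OLDEST_SUPPORTED_SERVICE_VERSION:
        print("WARNING: TooOldComputeService: cell minimum "
              f"{minimum_in_cell} is below the oldest supported level "
              f"{OLDEST_SUPPORTED_SERVICE_VERSION}")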
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.568 230124 DEBUG oslo_concurrency.lockutils [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.568 230124 DEBUG oslo_concurrency.lockutils [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.568 230124 DEBUG oslo_concurrency.lockutils [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.569 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.569 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.569 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.569 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.569 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.569 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.570 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.570 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.570 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.570 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.570 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.570 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.570 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.570 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.571 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.571 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.571 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.571 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.571 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.571 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] console_host                   = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.571 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.572 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.572 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.572 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.572 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.572 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.572 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.572 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.573 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.573 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.573 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.573 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.573 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.573 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.573 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.574 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.574 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.574 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.574 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] host                           = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.574 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.574 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.574 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.575 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.575 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.575 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.575 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.575 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.575 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.575 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.576 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.576 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.576 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.576 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.576 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.576 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.576 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.577 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.577 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.577 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.577 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.577 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.577 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.577 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.577 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.578 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.578 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.578 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.578 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.578 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.578 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.578 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.579 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.579 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.579 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.579 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.579 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.579 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.579 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.580 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.580 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.580 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] my_block_storage_ip            = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.580 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] my_ip                          = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.580 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.580 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.580 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.580 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.581 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.581 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.581 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.581 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.581 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.581 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.581 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.582 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.582 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.582 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.582 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.582 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.582 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.582 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.582 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.583 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.583 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.583 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.583 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.583 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.583 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.583 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.584 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.584 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.584 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.584 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.584 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.584 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.584 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.584 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.585 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.585 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.585 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.585 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.585 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.585 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.585 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.586 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.586 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.586 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.586 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.586 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.586 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.586 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.586 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.587 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.587 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.587 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.587 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.587 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.587 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.587 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.588 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.588 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.588 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.588 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.588 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.588 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.588 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.589 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.589 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.589 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
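The lines above are oslo.config's log_opt_values() dump of the unsectioned [DEFAULT] options, emitted once at service startup when debug logging is enabled; each line carries "option = value" plus the logging call site (cfg.py:2602), and secret-bearing options such as transport_url are masked as ****. A minimal sketch for recovering the option/value pairs from a captured journal; the regex and function name are hypothetical helpers, not part of Nova:

    # Minimal sketch (assumption: journalctl-style lines exactly as above;
    # OPT_RE and parse_opts are hypothetical, not Nova code).
    import re

    # Matches "<option> = <value> log_opt_values .../cfg.py:<line>"
    OPT_RE = re.compile(
        r"\]\s+(?P<name>[\w.]+)\s*=\s*(?P<value>.*?)\s+log_opt_values\s+\S+cfg\.py:\d+$"
    )

    def parse_opts(lines):
        """Yield (option, value) pairs from log_opt_values DEBUG lines."""
        for line in lines:
            m = OPT_RE.search(line)
            if m:
                yield m.group("name"), m.group("value")

    # Usage:
    # with open("nova-compute.log") as f:
    #     opts = dict(parse_opts(f))
    # opts.get("transport_url")  # '****' - secrets stay masked in the dump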
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.589 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.589 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
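oslo_concurrency.lock_path = /var/lib/nova/tmp is where inter-process file locks are created, and disable_process_locking = False keeps external locking active. A minimal sketch of taking such an external lock with oslo.concurrency, assuming the library is installed; the lock name is hypothetical, not one Nova uses:

    # Sketch assuming oslo.concurrency; 'demo-lock' is a hypothetical name.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("demo-lock", external=True,
                            lock_path="/var/lib/nova/tmp")
    def critical_section():
        # Only one process on this host runs this at a time; the lock
        # is backed by a file under lock_path.
        pass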
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.589 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.589 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.589 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.590 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.590 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.590 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.590 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.590 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.590 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.590 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.591 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.591 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.591 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.591 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.591 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.591 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.591 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.592 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.592 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.592 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.592 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.592 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.592 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.592 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.593 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.593 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.593 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.593 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
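The api.* group shows vendordata wired to the StaticJSON provider while api.vendordata_jsonfile_path is still None, so instances would receive an empty vendordata document from the metadata service. A sketch of the shape such a file takes if the path were set; the path and payload keys are illustrative only:

    # Hypothetical example: a static vendordata file for the StaticJSON
    # provider. Path and payload are illustrative; in this log
    # vendordata_jsonfile_path is None, so nothing is actually served.
    import json

    payload = {"msg": "example vendor data", "env": "lab"}
    with open("/etc/nova/vendor_data.json", "w") as f:
        json.dump(payload, f)

    # Guests would then read it through the metadata service, e.g.
    # GET http://169.254.169.254/openstack/latest/vendor_data.json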
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.593 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.593 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.593 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.594 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.594 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.594 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.594 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.594 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.594 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.594 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.594 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.595 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.595 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.595 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.595 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.595 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.595 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.595 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.596 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.596 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.596 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.596 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.596 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.596 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.596 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.597 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.597 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.597 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.597 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.597 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.597 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.597 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
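The cache.* group is effectively defaults: cache.enabled = True but cache.backend = oslo_cache.dict, an in-process dictionary backend, so cache.memcache_servers = ['localhost:11211'] and the memcache_*/tls_* tuning values are inert until a memcached backend is configured. A minimal sketch, under the assumption that oslo.cache is available, of how these [cache] options become a usable region; the cached key is hypothetical:

    # Sketch assuming oslo.cache; mirrors the [cache] values in this dump.
    from oslo_cache import core as cache
    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    cache.configure(conf)  # registers the [cache] options seen above
    conf.set_override("enabled", True, group="cache")
    conf.set_override("backend", "oslo_cache.dict", group="cache")

    region = cache.create_region()
    cache.configure_cache_region(conf, region)

    region.set("demo-key", "demo-value")   # hypothetical key
    assert region.get("demo-key") == "demo-value"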
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.597 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.598 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.598 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.598 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.598 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.598 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.598 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.598 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.599 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.599 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.599 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.599 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.599 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.599 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.599 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
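cinder.catalog_info = volumev3:cinderv3:internalURL packs three colon-separated fields, service_type:service_name:endpoint_type, which Nova matches against the Keystone service catalog to pick the Cinder endpoint. A tiny illustrative split:

    # Illustrative only: unpack Nova's cinder.catalog_info triple.
    catalog_info = "volumev3:cinderv3:internalURL"
    service_type, service_name, endpoint_type = catalog_info.split(":")
    # -> 'volumev3', 'cinderv3', 'internalURL'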
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.599 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.600 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.600 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.600 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.600 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.600 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.600 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.600 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.601 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.601 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.601 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.601 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
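In the compute.* group, max_disk_devices_to_attach = -1 means there is no per-instance cap on attached disk devices (a negative value is documented as unlimited). A short sketch of that semantics; the helper function is hypothetical, not Nova's implementation:

    # Sketch of the attach-limit semantics implied by
    # compute.max_disk_devices_to_attach = -1 (negative means unlimited).
    def can_attach(current_devices: int, limit: int = -1) -> bool:
        """Return True if one more disk device may be attached."""
        return limit < 0 or current_devices < limit

    assert can_attach(26, -1)      # unlimited
    assert not can_attach(3, 3)    # at the cap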
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.601 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.601 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.601 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.602 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.602 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.602 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.602 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.602 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.602 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.602 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.602 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.603 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.603 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.603 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.603 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.603 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.603 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.603 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.604 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.604 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.604 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.604 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.604 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.604 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.604 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.604 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.605 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.605 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.605 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.605 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.605 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.605 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.605 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.606 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.606 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.606 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.606 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.606 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.606 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.606 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.606 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.607 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.607 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.607 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.607 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.607 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.607 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.607 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.608 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.608 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.608 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.608 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.608 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.608 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.609 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.609 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.609 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.609 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.609 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.609 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.609 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.610 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.610 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.610 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.610 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.610 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.610 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.610 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.610 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.611 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.611 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.611 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.611 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.611 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.611 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.611 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.612 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.612 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.612 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.612 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.612 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.612 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.612 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.613 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.613 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.613 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.613 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.613 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.613 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.613 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.613 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.614 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.614 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.614 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.614 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.614 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.614 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.615 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.615 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.615 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.615 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.615 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.615 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.615 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.615 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.616 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.616 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.616 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.616 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.616 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.616 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.616 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.617 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.617 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.617 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.617 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.617 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.617 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.618 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.618 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.618 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.618 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.618 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.618 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.618 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.619 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.619 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.619 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.619 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.619 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.619 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.619 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.619 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.620 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.620 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.620 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.620 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.620 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.620 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.620 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.621 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.621 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.621 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.621 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.621 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.621 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.621 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.621 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.622 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.622 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.622 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.622 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.622 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.622 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.622 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.623 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.623 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.623 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.623 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.623 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.623 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.623 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.623 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.624 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.624 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.624 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.624 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.624 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.624 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.624 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.625 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.625 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.625 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.625 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.625 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.625 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.625 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.625 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.626 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.626 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.626 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.626 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.626 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.626 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.626 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.627 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.627 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.627 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.627 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.627 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.627 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.627 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.627 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.628 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.628 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.628 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.628 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.628 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.628 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.628 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.629 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.629 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.629 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.629 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.629 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.629 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.629 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.629 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.630 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.630 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.630 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.630 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.630 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.630 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.631 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.631 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.631 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.631 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.631 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.631 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.631 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.632 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.632 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.632 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.632 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.632 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.632 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.633 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.633 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.633 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.633 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.633 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.633 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.633 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.634 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.634 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.634 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.634 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.634 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.634 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.634 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.635 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.635 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.635 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.635 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.635 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.635 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.635 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.636 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.636 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.636 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.636 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.636 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.636 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.637 230124 WARNING oslo_config.cfg [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: and ``live_migration_inbound_addr`` respectively.
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: ).  Its value may be silently ignored in the future.
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.637 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
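[editor's note] The warning above names the two options that supersede ``live_migration_uri``. For reference, a minimal sketch of the equivalent ``[libvirt]`` settings in nova.conf, assuming the SSH transport implied by the logged ``qemu+ssh://`` URI; the inbound address below is a placeholder, not a value taken from this host:

    [libvirt]
    # Assumed: matches the qemu+ssh scheme of the deprecated URI logged above.
    live_migration_scheme = ssh
    # Placeholder: the per-host migration address that replaces the %s in the old URI.
    live_migration_inbound_addr = <migration-network-address>

Note that the ``keyfile=`` query parameter embedded in the logged URI has no counterpart in these two options; this sketch covers only the scheme and target address.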
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.637 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.637 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.637 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.637 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.637 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.638 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.638 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.638 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.638 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.638 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.638 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.639 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.639 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.639 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.639 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.639 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.639 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.639 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.rbd_secret_uuid        = 79feddb1-4bfc-557f-83b9-0d57c9f66c1b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.640 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.640 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.640 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.640 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.640 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.640 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.640 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.641 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.641 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.641 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.641 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.641 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.641 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.642 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.642 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.642 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.642 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.642 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.642 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.642 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.643 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.643 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.643 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.643 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.643 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.643 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.643 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.644 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.644 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.644 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.644 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.644 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.644 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.644 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.645 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.645 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.645 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.645 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.645 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.645 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.645 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.646 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.646 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.646 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.646 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.646 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.646 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.646 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.647 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.647 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.647 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.647 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.647 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.647 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.647 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.648 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.648 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.648 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.648 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.648 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.648 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.649 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.649 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.649 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.649 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.649 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.649 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.650 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.650 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.650 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.650 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.650 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.650 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.650 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.650 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.651 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.651 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.651 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.651 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.651 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.651 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.651 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.652 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.652 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.652 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.652 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.652 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.652 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.652 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.653 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.653 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.653 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.653 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.653 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.653 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.653 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.654 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.654 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.654 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.654 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.654 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.654 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.654 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.655 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.655 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.655 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.655 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.655 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.655 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.655 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.656 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.656 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.656 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.656 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.656 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.656 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.656 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.657 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.657 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.657 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.657 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.657 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.657 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.658 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.658 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.658 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.658 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.658 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.658 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.658 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.659 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.659 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.659 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.659 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.659 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.659 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.659 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.660 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.660 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.660 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.660 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.660 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.660 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.660 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.661 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.661 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.661 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.661 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.661 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.661 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.661 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.662 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.662 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.662 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.662 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.662 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.662 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.662 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.663 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.663 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.663 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.663 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.663 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.663 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.664 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.664 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.664 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.664 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.664 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.664 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.664 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.665 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.665 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.665 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.665 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.665 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.665 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.665 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.666 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.666 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.666 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.666 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.666 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.666 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.667 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.667 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.667 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.667 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.667 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.667 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.667 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.667 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.668 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.668 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.668 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.668 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.668 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.668 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.668 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.669 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.669 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.669 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.669 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.669 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.669 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.669 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.670 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.670 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.671 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.672 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.672 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.672 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.673 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.673 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.673 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.674 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.674 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.674 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.675 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.675 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.675 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.676 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.676 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.676 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.676 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.677 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.677 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.677 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.678 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.678 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.679 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.679 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.679 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.680 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.680 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.680 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.681 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.681 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.681 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.682 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.682 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.682 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.682 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.683 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.683 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.683 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.684 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.684 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.684 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.685 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.685 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.685 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.685 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.686 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.686 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.686 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.687 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.687 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.687 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.687 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.688 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.688 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.688 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.689 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.689 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.689 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.690 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.690 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.690 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.690 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.691 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.691 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.691 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.692 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.692 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.692 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.693 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.693 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.693 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.694 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.694 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.694 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.695 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.695 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.695 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.696 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.696 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.696 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.697 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.697 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.697 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.697 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.698 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.698 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.698 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.699 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.699 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.699 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.699 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.700 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.700 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.700 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.701 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.701 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.701 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.701 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.702 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.702 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.703 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.703 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.703 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.704 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.704 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.704 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.705 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.705 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.705 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.706 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.706 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.706 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.707 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.707 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.707 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.708 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.708 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.708 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.709 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.709 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.709 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.709 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.710 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.710 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.710 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.711 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.711 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.711 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.712 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.712 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.712 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.713 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.713 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.713 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.714 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.714 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.714 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.715 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.715 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.715 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.716 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.716 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.716 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.716 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.717 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.717 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.717 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.718 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.718 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.718 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.718 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.719 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.719 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.719 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.720 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.720 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.720 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.720 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.721 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.721 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.721 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.721 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.721 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.722 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.722 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.722 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.722 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.722 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.722 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.723 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.723 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.723 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.723 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.724 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.724 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.724 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.724 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.724 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.725 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.725 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.725 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.725 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.725 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.726 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.726 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.726 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.726 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.726 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.726 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.727 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.727 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.727 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.727 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.727 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.728 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.728 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.728 230124 DEBUG oslo_service.service [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
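The row of asterisks above closes the option dump that oslo.config emits when a service starts with debug logging enabled: every preceding "group.option = value" line comes from ConfigOpts.log_opt_values(), which walks each registered option group and logs the resolved value, masking anything declared secret (password, transport_url) as ****. A minimal, self-contained sketch of how such a dump is produced; the oslo_limit options here are re-registered by hand purely for illustration, only the log_opt_values() call mirrors what nova-compute actually does at startup:

    # Sketch of the oslo.config DEBUG dump seen above; the option
    # definitions are illustrative, not nova's real registrations.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.StrOpt('username', default='nova'),
         cfg.StrOpt('password', secret=True)],  # secret values print as ****
        group='oslo_limit')

    CONF([])                                 # parse (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)  # one "group.option = value" line each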
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.731 230124 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.745 230124 INFO nova.virt.node [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Determined node identity 2850b2c4-8d07-40ab-9d82-672172ca70fc from /var/lib/nova/compute_id
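The node identity logged above is the UUID persisted in /var/lib/nova/compute_id; nova.virt.node reads it back on every start so the compute node keeps a stable identity across restarts. A minimal sketch of reading it back (illustrative only; the canonical logic lives in nova.virt.node):

    # Read the persisted compute node identity, as nova.virt.node does
    # at startup; purely illustrative.
    from pathlib import Path
    import uuid

    node_id = uuid.UUID(Path('/var/lib/nova/compute_id').read_text().strip())
    print(node_id)  # 2850b2c4-8d07-40ab-9d82-672172ca70fc on this host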
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.746 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.747 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.747 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.747 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.761 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f823fb170d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.764 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f823fb170d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.765 230124 INFO nova.virt.libvirt.driver [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Connection event '1' reason 'None'
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.772 230124 INFO nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Libvirt host capabilities <capabilities>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <host>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <uuid>38a014e5-f211-4fa1-8868-c362af7c3bc6</uuid>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <cpu>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <arch>x86_64</arch>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model>EPYC-Rome-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <vendor>AMD</vendor>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <microcode version='16777317'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <signature family='23' model='49' stepping='0'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='x2apic'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='tsc-deadline'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='osxsave'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='hypervisor'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='tsc_adjust'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='spec-ctrl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='stibp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='arch-capabilities'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='cmp_legacy'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='topoext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='virt-ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='lbrv'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='tsc-scale'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='vmcb-clean'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='pause-filter'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='pfthreshold'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='svme-addr-chk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='rdctl-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='skip-l1dfl-vmentry'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='mds-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature name='pschange-mc-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <pages unit='KiB' size='4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <pages unit='KiB' size='2048'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <pages unit='KiB' size='1048576'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </cpu>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <power_management>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <suspend_mem/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <suspend_disk/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <suspend_hybrid/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </power_management>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <iommu support='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <migration_features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <live/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <uri_transports>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <uri_transport>tcp</uri_transport>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <uri_transport>rdma</uri_transport>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </uri_transports>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </migration_features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <topology>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <cells num='1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <cell id='0'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:           <memory unit='KiB'>16116612</memory>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:           <pages unit='KiB' size='4'>4029153</pages>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:           <pages unit='KiB' size='2048'>0</pages>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:           <distances>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:             <sibling id='0' value='10'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:           </distances>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:           <cpus num='8'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:           </cpus>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         </cell>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </cells>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </topology>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <cache>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </cache>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <secmodel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model>selinux</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <doi>0</doi>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </secmodel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <secmodel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model>dac</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <doi>0</doi>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </secmodel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </host>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <guest>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <os_type>hvm</os_type>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <arch name='i686'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <wordsize>32</wordsize>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <domain type='qemu'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <domain type='kvm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </arch>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <pae/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <nonpae/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <acpi default='on' toggle='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <apic default='on' toggle='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <cpuselection/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <deviceboot/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <disksnapshot default='on' toggle='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <externalSnapshot/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </guest>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <guest>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <os_type>hvm</os_type>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <arch name='x86_64'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <wordsize>64</wordsize>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <domain type='qemu'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <domain type='kvm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </arch>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <acpi default='on' toggle='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <apic default='on' toggle='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <cpuselection/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <deviceboot/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <disksnapshot default='on' toggle='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <externalSnapshot/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </guest>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: </capabilities>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 
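The XML block above is the host capabilities document the libvirt driver fetches immediately after connecting to qemu:///system; it is logged once at INFO, after which the driver queries per-architecture, per-machine-type domain capabilities (the _get_machine_types DEBUG line that follows). A minimal sketch of the two libvirt calls involved, using the libvirt Python bindings (error handling omitted):

    # Sketch of the libvirt queries behind the surrounding log lines,
    # using the libvirt Python bindings.
    import libvirt

    conn = libvirt.open('qemu:///system')  # same URI as in the log
    caps_xml = conn.getCapabilities()      # the <capabilities> document above

    # Per arch/machine-type capabilities, matching the DEBUG lines below:
    domcaps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',           # emulator path from the XML
        'i686', 'pc', 'kvm')               # arch, machine type, virt type
    conn.close()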
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.779 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.782 230124 DEBUG nova.virt.libvirt.volume.mount [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.784 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: <domainCapabilities>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <domain>kvm</domain>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <arch>i686</arch>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <vcpu max='240'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <iothreads supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <os supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <enum name='firmware'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <loader supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>rom</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pflash</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='readonly'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>yes</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>no</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='secure'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>no</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </loader>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </os>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <cpu>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>on</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>off</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='maximum' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='maximumMigratable'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>on</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>off</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='host-model' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <vendor>AMD</vendor>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='x2apic'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='stibp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='succor'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='lbrv'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='custom' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Dhyana-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Genoa'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='auto-ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='auto-ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-128'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-256'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-512'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='KnightsMill'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4fmaps'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4vnniw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512er'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512pf'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='KnightsMill-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4fmaps'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4vnniw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512er'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512pf'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tbm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tbm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SierraForest'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ne-convert'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cmpccxadd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SierraForest-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ne-convert'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cmpccxadd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='athlon'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='athlon-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='core2duo'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='core2duo-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='coreduo'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='coreduo-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='n270'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='n270-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='phenom'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='phenom-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </cpu>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <memoryBacking supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <enum name='sourceType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>file</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>anonymous</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>memfd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </memoryBacking>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <devices>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <disk supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='diskDevice'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>disk</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>cdrom</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>floppy</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>lun</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='bus'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>ide</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>fdc</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>scsi</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>sata</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-non-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </disk>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <graphics supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vnc</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>egl-headless</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>dbus</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </graphics>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <video supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='modelType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vga</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>cirrus</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>none</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>bochs</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>ramfb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </video>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <hostdev supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='mode'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>subsystem</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='startupPolicy'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>default</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>mandatory</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>requisite</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>optional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='subsysType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pci</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>scsi</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='capsType'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='pciBackend'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </hostdev>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <rng supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-non-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>random</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>egd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>builtin</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </rng>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <filesystem supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='driverType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>path</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>handle</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtiofs</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </filesystem>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <tpm supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tpm-tis</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tpm-crb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>emulator</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>external</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendVersion'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>2.0</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </tpm>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <redirdev supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='bus'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </redirdev>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <channel supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pty</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>unix</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </channel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <crypto supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>qemu</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>builtin</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </crypto>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <interface supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>default</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>passt</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </interface>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <panic supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>isa</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>hyperv</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </panic>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <console supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>null</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vc</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pty</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>dev</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>file</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pipe</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>stdio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>udp</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tcp</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>unix</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>qemu-vdagent</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>dbus</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </console>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </devices>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <gic supported='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <vmcoreinfo supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <genid supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <backingStoreInput supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <backup supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <async-teardown supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <ps2 supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <sev supported='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <sgx supported='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <hyperv supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='features'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>relaxed</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vapic</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>spinlocks</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vpindex</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>runtime</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>synic</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>stimer</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>reset</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vendor_id</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>frequencies</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>reenlightenment</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tlbflush</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>ipi</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>avic</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>emsr_bitmap</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>xmm_input</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <defaults>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <spinlocks>4095</spinlocks>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <stimer_direct>on</stimer_direct>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </defaults>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </hyperv>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <launchSecurity supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='sectype'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tdx</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </launchSecurity>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: </domainCapabilities>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.791 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: <domainCapabilities>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <domain>kvm</domain>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <arch>i686</arch>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <vcpu max='1024'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <iothreads supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <os supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <enum name='firmware'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <loader supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>rom</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pflash</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='readonly'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>yes</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>no</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='secure'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>no</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </loader>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </os>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <cpu>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>on</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>off</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='maximum' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='maximumMigratable'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>on</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>off</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='host-model' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <vendor>AMD</vendor>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='x2apic'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='stibp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='succor'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='lbrv'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='custom' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Dhyana-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Genoa'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='auto-ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='auto-ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-128'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-256'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-512'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='KnightsMill'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4fmaps'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4vnniw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512er'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512pf'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='KnightsMill-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4fmaps'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4vnniw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512er'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512pf'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tbm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tbm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SierraForest'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ne-convert'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cmpccxadd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SierraForest-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ne-convert'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cmpccxadd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='athlon'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='athlon-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='core2duo'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='core2duo-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='coreduo'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='coreduo-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='n270'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='n270-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='phenom'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='phenom-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </cpu>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <memoryBacking supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <enum name='sourceType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>file</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>anonymous</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>memfd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </memoryBacking>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <devices>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <disk supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='diskDevice'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>disk</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>cdrom</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>floppy</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>lun</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='bus'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>fdc</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>scsi</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>sata</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-non-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </disk>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <graphics supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vnc</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>egl-headless</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>dbus</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </graphics>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <video supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='modelType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vga</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>cirrus</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>none</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>bochs</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>ramfb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </video>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <hostdev supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='mode'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>subsystem</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='startupPolicy'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>default</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>mandatory</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>requisite</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>optional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='subsysType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pci</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>scsi</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='capsType'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='pciBackend'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </hostdev>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <rng supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-non-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>random</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>egd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>builtin</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </rng>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <filesystem supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='driverType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>path</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>handle</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtiofs</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </filesystem>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <tpm supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tpm-tis</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tpm-crb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>emulator</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>external</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendVersion'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>2.0</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </tpm>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <redirdev supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='bus'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </redirdev>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <channel supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pty</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>unix</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </channel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <crypto supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>qemu</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>builtin</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </crypto>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <interface supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>default</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>passt</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </interface>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <panic supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>isa</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>hyperv</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </panic>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <console supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>null</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vc</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pty</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>dev</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>file</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pipe</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>stdio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>udp</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tcp</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>unix</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>qemu-vdagent</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>dbus</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </console>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </devices>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <gic supported='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <vmcoreinfo supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <genid supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <backingStoreInput supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <backup supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <async-teardown supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <ps2 supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <sev supported='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <sgx supported='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <hyperv supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='features'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>relaxed</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vapic</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>spinlocks</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vpindex</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>runtime</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>synic</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>stimer</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>reset</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vendor_id</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>frequencies</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>reenlightenment</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tlbflush</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>ipi</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>avic</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>emsr_bitmap</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>xmm_input</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <defaults>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <spinlocks>4095</spinlocks>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <stimer_direct>on</stimer_direct>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </defaults>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </hyperv>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <launchSecurity supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='sectype'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tdx</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </launchSecurity>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: </domainCapabilities>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
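The <domainCapabilities> document above is libvirt's reply to the domain-capabilities query that Nova's _get_domain_capabilities() (host.py:1037, per the trace line above) issues once per machine type; the <model usable=...> entries and their <blockers> children are what Nova later consults when validating guest CPU models. A minimal sketch of reproducing the same query outside Nova with libvirt-python follows. It is an illustration only: the qemu:///system URI, the 'q35' machine type, and the usable-model summary at the end are assumptions for the sketch, not values taken from this log.

    import libvirt
    import xml.etree.ElementTree as ET

    # Assumption: libvirt-python is installed and the local libvirtd/virtqemud
    # socket is reachable; qemu:///system is the conventional URI on a compute
    # node but is not confirmed by this log.
    conn = libvirt.open('qemu:///system')

    # getDomainCapabilities(emulatorbin, arch, machine, virttype, flags) is the
    # libvirt API behind the dump above. Path, arch, and virt type match the
    # <path>, <arch>, and <domain> elements printed in the log; 'q35' is one of
    # the two machine types ({'pc', 'q35'}) the log says Nova iterates over.
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm', 0)

    # Summarize the custom-mode CPU models, mirroring the <model usable=...>
    # entries in the dump: usable models are accepted as-is, unusable ones are
    # listed together with the blocking feature names from <blockers>.
    root = ET.fromstring(caps_xml)
    custom = root.find(".//cpu/mode[@name='custom']")
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in custom.findall('blockers')
    }
    for model in custom.findall('model'):
        name = model.text
        if model.get('usable') == 'yes':
            print(f"{name}: usable")
        else:
            print(f"{name}: blocked by {', '.join(blockers.get(name, []))}")

    conn.close()

Run against the host that produced this log, the summary would report e.g. Westmere as usable and Snowridge as blocked by cldemote, core-capability, erms, gfni, movdir64b, movdiri, mpx, and split-lock-detect, matching the entries above.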
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.837 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.845 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: <domainCapabilities>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <domain>kvm</domain>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <arch>x86_64</arch>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <vcpu max='240'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <iothreads supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <os supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <enum name='firmware'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <loader supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>rom</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pflash</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='readonly'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>yes</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>no</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='secure'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>no</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </loader>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </os>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <cpu>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>on</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>off</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='maximum' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='maximumMigratable'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>on</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>off</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='host-model' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <vendor>AMD</vendor>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='x2apic'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='stibp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='succor'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='lbrv'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='custom' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Dhyana-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Genoa'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='auto-ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='auto-ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-128'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-256'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-512'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='KnightsMill'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4fmaps'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4vnniw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512er'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512pf'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='KnightsMill-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4fmaps'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4vnniw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512er'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512pf'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tbm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tbm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SierraForest'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ne-convert'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cmpccxadd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='SierraForest-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ne-convert'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cmpccxadd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='athlon'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='athlon-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='core2duo'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='core2duo-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='coreduo'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='coreduo-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='n270'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='n270-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='phenom'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='phenom-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </cpu>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <memoryBacking supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <enum name='sourceType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>file</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>anonymous</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>memfd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </memoryBacking>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <devices>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <disk supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='diskDevice'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>disk</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>cdrom</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>floppy</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>lun</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='bus'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>ide</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>fdc</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>scsi</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>sata</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-non-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </disk>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <graphics supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vnc</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>egl-headless</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>dbus</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </graphics>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <video supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='modelType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vga</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>cirrus</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>none</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>bochs</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>ramfb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </video>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <hostdev supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='mode'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>subsystem</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='startupPolicy'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>default</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>mandatory</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>requisite</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>optional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='subsysType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pci</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>scsi</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='capsType'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='pciBackend'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </hostdev>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <rng supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtio-non-transitional</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>random</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>egd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>builtin</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </rng>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <filesystem supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='driverType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>path</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>handle</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>virtiofs</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </filesystem>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <tpm supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tpm-tis</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tpm-crb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>emulator</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>external</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendVersion'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>2.0</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </tpm>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <redirdev supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='bus'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </redirdev>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <channel supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pty</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>unix</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </channel>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <crypto supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>qemu</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>builtin</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </crypto>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <interface supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='backendType'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>default</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>passt</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </interface>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <panic supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>isa</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>hyperv</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </panic>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <console supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>null</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vc</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pty</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>dev</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>file</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pipe</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>stdio</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>udp</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tcp</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>unix</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>qemu-vdagent</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>dbus</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </console>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </devices>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <gic supported='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <vmcoreinfo supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <genid supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <backingStoreInput supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <backup supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <async-teardown supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <ps2 supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <sev supported='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <sgx supported='no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <hyperv supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='features'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>relaxed</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vapic</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>spinlocks</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vpindex</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>runtime</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>synic</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>stimer</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>reset</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>vendor_id</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>frequencies</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>reenlightenment</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tlbflush</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>ipi</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>avic</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>emsr_bitmap</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>xmm_input</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <defaults>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <spinlocks>4095</spinlocks>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <stimer_direct>on</stimer_direct>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </defaults>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </hyperv>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <launchSecurity supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='sectype'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>tdx</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </launchSecurity>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </features>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: </domainCapabilities>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.902 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]: <domainCapabilities>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <domain>kvm</domain>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <arch>x86_64</arch>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <vcpu max='1024'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <iothreads supported='yes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <os supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <enum name='firmware'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>efi</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <loader supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>rom</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>pflash</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='readonly'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>yes</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>no</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='secure'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>yes</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>no</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </loader>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   </os>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:   <cpu>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>on</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>off</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='maximum' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <enum name='maximumMigratable'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>on</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <value>off</value>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='host-model' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <vendor>AMD</vendor>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='x2apic'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='stibp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='succor'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='lbrv'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:     <mode name='custom' supported='yes'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Broadwell-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Cooperlake-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Denverton-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Dhyana-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Genoa'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='auto-ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='auto-ibrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amd-psfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='no-nested-data-bp'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='null-sel-clr-base'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='stibp-always-on'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='EPYC-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-128'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-256'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx10-512'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='prefetchiti'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Haswell-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:04 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='IvyBridge-v2'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='KnightsMill'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4fmaps'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4vnniw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512er'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512pf'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='KnightsMill-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4fmaps'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-4vnniw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512er'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512pf'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G4'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G5'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='tbm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fma4'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='tbm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xop'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-bf16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-int8'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='amx-tile'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-bf16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-fp16'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bitalg'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512ifma'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vbmi2'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrc'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fzrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='la57'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='taa-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='tsx-ldtrk'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xfd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='SierraForest'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ifma'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ne-convert'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni-int8'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='cmpccxadd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='SierraForest-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ifma'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-ne-convert'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx-vnni-int8'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='bus-lock-detect'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='cmpccxadd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fbsdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='fsrs'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ibrs-all'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='mcdt-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pbrsb-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='psdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='serialize'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vaes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='vpclmulqdq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='hle'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='rtm'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512bw'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512cd'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512dq'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512f'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='avx512vl'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='invpcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pcid'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='pku'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='mpx'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v2'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v3'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='core-capability'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='split-lock-detect'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='Snowridge-v4'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='cldemote'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='erms'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='gfni'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdir64b'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='movdiri'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='xsaves'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='athlon'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='athlon-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='core2duo'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='core2duo-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='coreduo'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='coreduo-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='n270'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='n270-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='ss'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='phenom'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <blockers model='phenom-v1'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnow'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <feature name='3dnowext'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </blockers>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </mode>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:   </cpu>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:   <memoryBacking supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <enum name='sourceType'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <value>file</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <value>anonymous</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <value>memfd</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:   </memoryBacking>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:   <devices>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <disk supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='diskDevice'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>disk</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>cdrom</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>floppy</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>lun</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='bus'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>fdc</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>scsi</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>sata</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>virtio-transitional</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>virtio-non-transitional</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </disk>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <graphics supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>vnc</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>egl-headless</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>dbus</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </graphics>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <video supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='modelType'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>vga</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>cirrus</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>none</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>bochs</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>ramfb</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </video>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <hostdev supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='mode'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>subsystem</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='startupPolicy'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>default</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>mandatory</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>requisite</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>optional</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='subsysType'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>pci</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>scsi</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='capsType'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='pciBackend'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </hostdev>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <rng supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>virtio</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>virtio-transitional</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>virtio-non-transitional</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>random</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>egd</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>builtin</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </rng>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <filesystem supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='driverType'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>path</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>handle</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>virtiofs</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </filesystem>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <tpm supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>tpm-tis</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>tpm-crb</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>emulator</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>external</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='backendVersion'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>2.0</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </tpm>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <redirdev supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='bus'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>usb</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </redirdev>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <channel supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>pty</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>unix</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </channel>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <crypto supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='model'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>qemu</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='backendModel'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>builtin</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </crypto>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <interface supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='backendType'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>default</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>passt</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </interface>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <panic supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='model'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>isa</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>hyperv</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </panic>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <console supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='type'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>null</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>vc</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>pty</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>dev</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>file</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>pipe</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>stdio</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>udp</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>tcp</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>unix</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>qemu-vdagent</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>dbus</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </console>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:   </devices>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:   <features>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <gic supported='no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <vmcoreinfo supported='yes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <genid supported='yes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <backingStoreInput supported='yes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <backup supported='yes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <async-teardown supported='yes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <ps2 supported='yes'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <sev supported='no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <sgx supported='no'/>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <hyperv supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='features'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>relaxed</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>vapic</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>spinlocks</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>vpindex</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>runtime</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>synic</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>stimer</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>reset</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>vendor_id</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>frequencies</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>reenlightenment</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>tlbflush</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>ipi</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>avic</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>emsr_bitmap</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>xmm_input</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <defaults>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <spinlocks>4095</spinlocks>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <stimer_direct>on</stimer_direct>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </defaults>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </hyperv>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     <launchSecurity supported='yes'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       <enum name='sectype'>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:         <value>tdx</value>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:       </enum>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:     </launchSecurity>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:   </features>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: </domainCapabilities>
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
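The block ending above is libvirt's domainCapabilities XML, which nova-compute fetches once and caches (the trailing `_get_domain_capabilities` names the caller in host.py). A minimal parsing sketch, assuming the XML above is available as the string `caps_xml`; the helper name is illustrative, not nova's actual code:

    # Check the <tpm> section of a libvirt domainCapabilities document
    # for an emulated TPM 2.0 backend, as advertised in the log above.
    import xml.etree.ElementTree as ET

    def supports_emulated_tpm_2(caps_xml: str) -> bool:
        root = ET.fromstring(caps_xml)
        tpm = root.find("./devices/tpm")
        if tpm is None or tpm.get("supported") != "yes":
            return False
        backends = {v.text for v in tpm.findall("enum[@name='backendModel']/value")}
        versions = {v.text for v in tpm.findall("enum[@name='backendVersion']/value")}
        return "emulator" in backends and "2.0" in versions

This is exactly the information the "Enabling emulated TPM support" line further down relies on.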
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.969 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.969 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.970 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.970 230124 INFO nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Secure Boot support detected
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.973 230124 INFO nova.virt.libvirt.driver [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.974 230124 INFO nova.virt.libvirt.driver [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
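The INFO line above (logged twice by two startup paths) records a real decision: post-copy and auto-converge are alternative ways to force a live migration to converge, and when live_migration_permit_post_copy is True and the host supports post-copy, nova prefers post-copy. A hedged sketch of that mutual exclusion, using libvirt's real flag names but an illustrative function:

    # Post-copy wins over auto-converge when both would be permitted.
    def migration_flags(permit_post_copy: bool, host_has_post_copy: bool,
                        permit_auto_converge: bool) -> set:
        flags = set()
        if permit_post_copy and host_has_post_copy:
            flags.add("VIR_MIGRATE_POSTCOPY")
        elif permit_auto_converge:
            flags.add("VIR_MIGRATE_AUTO_CONVERGE")
        return flags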
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:04.986 230124 DEBUG nova.virt.libvirt.driver [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.003 230124 INFO nova.virt.node [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Determined node identity 2850b2c4-8d07-40ab-9d82-672172ca70fc from /var/lib/nova/compute_id
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.020 230124 DEBUG nova.compute.manager [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Verified node 2850b2c4-8d07-40ab-9d82-672172ca70fc matches my host np0005546420.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.044 230124 INFO nova.compute.manager [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.116 230124 DEBUG oslo_concurrency.lockutils [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.116 230124 DEBUG oslo_concurrency.lockutils [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.117 230124 DEBUG oslo_concurrency.lockutils [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.117 230124 DEBUG nova.compute.resource_tracker [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.118 230124 DEBUG oslo_concurrency.processutils [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.575 230124 DEBUG oslo_concurrency.processutils [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
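The `ceph df --format=json` subprocess above is how the resource tracker sizes RBD-backed storage; the ~0.5 s runtime logged for the command is the round trip to the Ceph monitors, and it presumably feeds the free_disk figure reported below. A minimal sketch of the same call, assuming the standard `stats.total_avail_bytes` field of `ceph df` JSON output (the helper name is illustrative):

    import json
    import subprocess

    def ceph_total_avail_gb(conf="/etc/ceph/ceph.conf", user="openstack") -> float:
        # Same invocation as the logged command, parsed for free space.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        return json.loads(out)["stats"]["total_avail_bytes"] / (1024 ** 3)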
Dec 05 09:37:05 np0005546420.localdomain sudo[230305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-viqdldtlqkamiknupgxyucmgpkibvkhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927425.3656306-4316-17639963255030/AnsiballZ_podman_container.py
Dec 05 09:37:05 np0005546420.localdomain sudo[230305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.764 230124 WARNING nova.virt.libvirt.driver [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.765 230124 DEBUG nova.compute.resource_tracker [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=13595MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.765 230124 DEBUG oslo_concurrency.lockutils [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.766 230124 DEBUG oslo_concurrency.lockutils [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.893 230124 DEBUG nova.compute.resource_tracker [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.893 230124 DEBUG nova.compute.resource_tracker [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:37:05 np0005546420.localdomain python3.9[230307]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.945 230124 DEBUG nova.scheduler.client.report [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.962 230124 DEBUG nova.scheduler.client.report [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.962 230124 DEBUG nova.compute.provider_tree [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
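The inventory dict above is what placement ends up scheduling against; effective capacity per resource class is (total - reserved) * allocation_ratio, so the logged figures work out as:

    # Capacity arithmetic for the inventory logged above.
    def capacity(total: int, reserved: int, allocation_ratio: float) -> float:
        return (total - reserved) * allocation_ratio

    assert capacity(8, 0, 16.0) == 128.0         # VCPU: 128 schedulable vCPUs
    assert capacity(15738, 512, 1.0) == 15226.0  # MEMORY_MB
    assert capacity(41, 0, 1.0) == 41.0          # DISK_GB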
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.974 230124 DEBUG nova.scheduler.client.report [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 09:37:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:05.992 230124 DEBUG nova.scheduler.client.report [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.004 230124 DEBUG oslo_concurrency.processutils [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:37:06 np0005546420.localdomain systemd[1]: Started libpod-conmon-aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86.scope.
Dec 05 09:37:06 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:37:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 05 09:37:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 09:37:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 05 09:37:06 np0005546420.localdomain podman[230332]: 2025-12-05 09:37:06.265165259 +0000 UTC m=+0.201026862 container init aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible)
Dec 05 09:37:06 np0005546420.localdomain podman[230332]: 2025-12-05 09:37:06.282468669 +0000 UTC m=+0.218330272 container start aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Dec 05 09:37:06 np0005546420.localdomain python3.9[230307]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
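The Ansible invocation above (almost every parameter None) reduces to starting the pre-created nova_compute_init container. A hedged reconstruction of the equivalent podman argument list from the config_data logged above; the real command additionally wraps the script in bash and pipes its output through `logger -t nova_compute_init`, omitted here for brevity:

    # Illustrative one-shot equivalent of the nova_compute_init container.
    init_cmd = [
        "podman", "run", "--name", "nova_compute_init",
        "--user", "root", "--net", "none",
        "--security-opt", "label=disable",
        "--env", "NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id",
        "-v", "/dev/log:/dev/log",
        "-v", "/var/lib/nova:/var/lib/nova:shared",
        "-v", "/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z",
        "-v", "/var/lib/openstack/config/nova/nova_statedir_ownership.py"
             ":/sbin/nova_statedir_ownership.py:z",
        "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "python3", "/sbin/nova_statedir_ownership.py",
    ]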
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Applying nova statedir ownership
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673
Dec 05 09:37:06 np0005546420.localdomain nova_compute_init[230370]: INFO:nova_statedir:Nova statedir ownership complete
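The INFO lines above are the whole job of nova_statedir_ownership.py: walk /var/lib/nova, skip the paths listed in NOVA_STATEDIR_OWNERSHIP_SKIP, re-chown anything not already 42436:42436 (the nova uid/gid inside the container), and set the container SELinux label. An illustrative re-implementation of the chown pass only, not the shipped script; the SELinux relabel step is omitted:

    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = {"/var/lib/nova/compute_id"}  # from NOVA_STATEDIR_OWNERSHIP_SKIP

    def fix_ownership(root: str = "/var/lib/nova") -> None:
        for dirpath, _dirs, files in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in files]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # Mirrors the "Changing ownership of ..." INFO lines.
                    os.lchown(path, TARGET_UID, TARGET_GID)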
Dec 05 09:37:06 np0005546420.localdomain systemd[1]: libpod-aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86.scope: Deactivated successfully.
Dec 05 09:37:06 np0005546420.localdomain podman[230371]: 2025-12-05 09:37:06.36140186 +0000 UTC m=+0.061126569 container died aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, io.buildah.version=1.41.3)
Dec 05 09:37:06 np0005546420.localdomain podman[230382]: 2025-12-05 09:37:06.44845462 +0000 UTC m=+0.081773836 container cleanup aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 09:37:06 np0005546420.localdomain systemd[1]: libpod-conmon-aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86.scope: Deactivated successfully.
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.516 230124 DEBUG oslo_concurrency.processutils [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:37:06 np0005546420.localdomain sudo[230305]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.524 230124 DEBUG nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.524 230124 INFO nova.virt.libvirt.host [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] kernel doesn't support AMD SEV
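The SEV probe above reads the kvm_amd sysfs module parameter and, finding N, logs that the kernel doesn't support AMD SEV (the real helper is `_kernel_supports_amd_sev` in the host.py path shown). A hedged sketch of such a check, not nova's exact code:

    def kernel_supports_sev(path: str = "/sys/module/kvm_amd/parameters/sev") -> bool:
        # The parameter reads "1"/"Y" when the module enabled SEV, "0"/"N"
        # otherwise; a missing file means kvm_amd is not loaded at all.
        try:
            with open(path) as f:
                return f.read().strip() in ("1", "Y", "y")
        except FileNotFoundError:
            return False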
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.526 230124 DEBUG nova.compute.provider_tree [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.527 230124 DEBUG nova.virt.libvirt.driver [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.550 230124 DEBUG nova.scheduler.client.report [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.576 230124 DEBUG nova.compute.resource_tracker [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.577 230124 DEBUG oslo_concurrency.lockutils [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.577 230124 DEBUG nova.service [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.606 230124 DEBUG nova.service [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 05 09:37:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:06.606 230124 DEBUG nova.servicegroup.drivers.db [None req-87c31bf2-1c2f-4724-b408-5cbefaca3a3e - - - - - -] DB_Driver: join new ServiceGroup member np0005546420.localdomain to the compute group, service = <Service: host=np0005546420.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 05 09:37:07 np0005546420.localdomain systemd[1]: tmp-crun.qupFtb.mount: Deactivated successfully.
Dec 05 09:37:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f-merged.mount: Deactivated successfully.
Dec 05 09:37:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86-userdata-shm.mount: Deactivated successfully.
Dec 05 09:37:07 np0005546420.localdomain sshd[208327]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:37:07 np0005546420.localdomain systemd[1]: session-53.scope: Deactivated successfully.
Dec 05 09:37:07 np0005546420.localdomain systemd[1]: session-53.scope: Consumed 2min 23.332s CPU time.
Dec 05 09:37:07 np0005546420.localdomain systemd-logind[762]: Session 53 logged out. Waiting for processes to exit.
Dec 05 09:37:07 np0005546420.localdomain systemd-logind[762]: Removed session 53.
Dec 05 09:37:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38167 DF PROTO=TCP SPT=46368 DPT=9101 SEQ=2732094441 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC499D90000000001030307) 
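The kernel DROPPING records scattered through this log are netfilter LOG-target output from the firewall on br-ex: repeated SYNs from 192.168.122.10 to ports 9100/9101/9105 (typical metrics-exporter ports) are being refused. A small parsing sketch for these KEY=VALUE lines; empty fields such as OUT= simply do not match the regex:

    import re

    def parse_drop(line: str) -> dict:
        # Collect KEY=VALUE pairs from a netfilter LOG line.
        return dict(re.findall(r"(\w+)=(\S+)", line))

    fields = parse_drop("DROPPING: IN=br-ex OUT= SRC=192.168.122.10 "
                        "DST=192.168.122.107 PROTO=TCP SPT=46368 DPT=9101")
    assert fields["DPT"] == "9101"  # the refused destination port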
Dec 05 09:37:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:37:10 np0005546420.localdomain podman[230429]: 2025-12-05 09:37:10.553670489 +0000 UTC m=+0.126285618 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 09:37:10 np0005546420.localdomain podman[230429]: 2025-12-05 09:37:10.595864544 +0000 UTC m=+0.168479633 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true)
Dec 05 09:37:10 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
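The three entries above are one complete podman health-check cycle: a systemd timer starts a transient unit, podman execs the container's configured test (/openstack/healthcheck), records health_status=healthy on the exec_died event, and the transient unit deactivates. The manual equivalent, assuming podman is on PATH:

    import subprocess

    CID = "d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0"
    # Exit status 0 means the container's configured health check passed.
    result = subprocess.run(["podman", "healthcheck", "run", CID], check=False)
    print("healthy" if result.returncode == 0 else "unhealthy")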
Dec 05 09:37:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24298 DF PROTO=TCP SPT=36306 DPT=9100 SEQ=2621539284 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC4A7590000000001030307) 
Dec 05 09:37:12 np0005546420.localdomain sshd[230455]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:37:12 np0005546420.localdomain sshd[230455]: Accepted publickey for zuul from 192.168.122.30 port 51292 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:37:12 np0005546420.localdomain systemd-logind[762]: New session 55 of user zuul.
Dec 05 09:37:12 np0005546420.localdomain systemd[1]: Started Session 55 of User zuul.
Dec 05 09:37:12 np0005546420.localdomain sshd[230455]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:37:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24299 DF PROTO=TCP SPT=36306 DPT=9100 SEQ=2621539284 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC4AF590000000001030307) 
Dec 05 09:37:13 np0005546420.localdomain python3.9[230566]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:37:15 np0005546420.localdomain sudo[230678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gatfrhbxjakrwginbawprmigxvxdvpww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927434.5970185-69-20391625567675/AnsiballZ_systemd_service.py
Dec 05 09:37:15 np0005546420.localdomain sudo[230678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:15 np0005546420.localdomain python3.9[230680]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:37:15 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:37:15 np0005546420.localdomain systemd-rc-local-generator[230709]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:37:15 np0005546420.localdomain systemd-sysv-generator[230712]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:37:15 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:15 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:15 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:15 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:15 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:37:15 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:15 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:15 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:15 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
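The "Failed to parse service type, ignoring: notify-reload" warnings above mean the libvirt units declare Type=notify-reload, a service type introduced in systemd 253 that the systemd on this EL9 host does not yet understand; it falls back to the default type, so the warnings are cosmetic. A quick audit sketch for the affected units:

    import glob

    # List the libvirt units that declare the newer service type.
    for unit in glob.glob("/usr/lib/systemd/system/virt*.service"):
        with open(unit) as f:
            if "Type=notify-reload" in f.read():
                print(unit, "declares Type=notify-reload")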
Dec 05 09:37:15 np0005546420.localdomain sudo[230678]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:16 np0005546420.localdomain python3.9[230825]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:37:16 np0005546420.localdomain network[230842]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:37:16 np0005546420.localdomain network[230843]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:37:16 np0005546420.localdomain network[230844]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:37:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:37:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24300 DF PROTO=TCP SPT=36306 DPT=9100 SEQ=2621539284 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC4BF190000000001030307) 
Dec 05 09:37:16 np0005546420.localdomain podman[230850]: 2025-12-05 09:37:16.818021259 +0000 UTC m=+0.079603103 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 05 09:37:16 np0005546420.localdomain podman[230850]: 2025-12-05 09:37:16.851431993 +0000 UTC m=+0.113013847 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 09:37:17 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:37:17 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:37:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54481 DF PROTO=TCP SPT=46212 DPT=9105 SEQ=4083033353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC4C7310000000001030307) 
Dec 05 09:37:21 np0005546420.localdomain sudo[231095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kiukxutgnwzalmqskyydyokvyfuhxjaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927440.9321282-126-100102462188118/AnsiballZ_systemd_service.py
Dec 05 09:37:21 np0005546420.localdomain sudo[231095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:21 np0005546420.localdomain python3.9[231097]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:37:21 np0005546420.localdomain sudo[231095]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54483 DF PROTO=TCP SPT=46212 DPT=9105 SEQ=4083033353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC4D31A0000000001030307) 
Dec 05 09:37:23 np0005546420.localdomain sudo[231206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tlpnnybhvhpbzpvnfpvurxnanguwxwmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927442.1710718-156-82810369335280/AnsiballZ_file.py
Dec 05 09:37:23 np0005546420.localdomain sudo[231206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:23 np0005546420.localdomain python3.9[231208]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:23 np0005546420.localdomain sudo[231206]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:23 np0005546420.localdomain systemd-journald[48245]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation.
Dec 05 09:37:23 np0005546420.localdomain systemd-journald[48245]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
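The journald hint above is simple arithmetic: the field hash table has 254 of 333 slots filled, and journald suggests rotating once fill exceeds roughly 75 %, hence the immediate rotation and the imjournal reload that follows:

    # 254 of 333 slots crosses the ~75 % threshold journald warns about.
    fill = 254 / 333 * 100
    assert round(fill, 1) == 76.3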
Dec 05 09:37:23 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:37:23 np0005546420.localdomain sudo[231317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-peatnugolxtotoquqqrrrifjfpwzaslo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927443.728817-180-239426482713486/AnsiballZ_file.py
Dec 05 09:37:23 np0005546420.localdomain sudo[231317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:24 np0005546420.localdomain python3.9[231319]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:24 np0005546420.localdomain sudo[231317]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24301 DF PROTO=TCP SPT=36306 DPT=9100 SEQ=2621539284 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC4DFDA0000000001030307) 
Dec 05 09:37:25 np0005546420.localdomain sudo[231427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gignftwzbhsvatfrcwafuoodharqcscw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927444.4795187-207-82816363324144/AnsiballZ_command.py
Dec 05 09:37:25 np0005546420.localdomain sudo[231427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:25 np0005546420.localdomain python3.9[231429]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:37:25 np0005546420.localdomain sudo[231427]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:26 np0005546420.localdomain python3.9[231539]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:37:27 np0005546420.localdomain sudo[231647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fiqbcejrkxggjhxiuquwbkawdydqhbnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927446.893773-261-156425591959609/AnsiballZ_systemd_service.py
Dec 05 09:37:27 np0005546420.localdomain sudo[231647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:27 np0005546420.localdomain python3.9[231649]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:37:27 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:37:27 np0005546420.localdomain systemd-sysv-generator[231680]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:37:27 np0005546420.localdomain systemd-rc-local-generator[231676]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:37:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:37:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:27 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:27 np0005546420.localdomain sudo[231647]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:28 np0005546420.localdomain sudo[231793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmetoksfylglrdfojrurxdpydqqfzits ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927448.0282075-285-126027152253557/AnsiballZ_command.py
Dec 05 09:37:28 np0005546420.localdomain sudo[231793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:28 np0005546420.localdomain python3.9[231795]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:37:28 np0005546420.localdomain sudo[231793]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:29 np0005546420.localdomain sudo[231904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upitjrdppgdagjudoknqsvhuuptvehbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927448.793996-312-155320967697637/AnsiballZ_file.py
Dec 05 09:37:29 np0005546420.localdomain sudo[231904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:29 np0005546420.localdomain python3.9[231906]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:37:29 np0005546420.localdomain sudo[231904]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:30 np0005546420.localdomain python3.9[232014]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:37:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41046 DF PROTO=TCP SPT=40828 DPT=9102 SEQ=2835948311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC4F4DA0000000001030307) 
Dec 05 09:37:30 np0005546420.localdomain python3.9[232124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:31 np0005546420.localdomain python3.9[232210]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927450.372101-360-249976072503554/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=bd3939991d32a5833a924c2dbe99af764fc33e56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:37:32 np0005546420.localdomain sudo[232318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xgjoakflzujjheowzaqmdxiuokatpncq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927451.6863441-405-4818643798158/AnsiballZ_group.py
Dec 05 09:37:32 np0005546420.localdomain sudo[232318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:32 np0005546420.localdomain python3.9[232320]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 05 09:37:32 np0005546420.localdomain sudo[232318]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22083 DF PROTO=TCP SPT=50386 DPT=9882 SEQ=4123004106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC4FDD90000000001030307) 
Dec 05 09:37:33 np0005546420.localdomain sudo[232428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjatactkpyvkhbayifmticufhbadijcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927452.8511474-438-152878412820744/AnsiballZ_getent.py
Dec 05 09:37:33 np0005546420.localdomain sudo[232428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:37:33 np0005546420.localdomain podman[232430]: 2025-12-05 09:37:33.354814268 +0000 UTC m=+0.096759175 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 09:37:33 np0005546420.localdomain podman[232430]: 2025-12-05 09:37:33.368925084 +0000 UTC m=+0.110869961 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Dec 05 09:37:33 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:37:33 np0005546420.localdomain python3.9[232431]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 05 09:37:33 np0005546420.localdomain sudo[232428]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:34 np0005546420.localdomain sudo[232558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nviqibrjopfysvdaqxdvsgbkvbhufruz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927453.9747963-462-243488322415816/AnsiballZ_group.py
Dec 05 09:37:34 np0005546420.localdomain sudo[232558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54485 DF PROTO=TCP SPT=46212 DPT=9105 SEQ=4083033353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC503D90000000001030307) 
Dec 05 09:37:34 np0005546420.localdomain python3.9[232560]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 09:37:34 np0005546420.localdomain groupadd[232561]: group added to /etc/group: name=ceilometer, GID=42405
Dec 05 09:37:34 np0005546420.localdomain groupadd[232561]: group added to /etc/gshadow: name=ceilometer
Dec 05 09:37:34 np0005546420.localdomain groupadd[232561]: new group: name=ceilometer, GID=42405
Dec 05 09:37:34 np0005546420.localdomain sudo[232558]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:36 np0005546420.localdomain sudo[232674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lyfkbrxiihhoetfsxechrlyluxgxyklg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927455.9103456-486-74013454448301/AnsiballZ_user.py
Dec 05 09:37:36 np0005546420.localdomain sudo[232674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:36 np0005546420.localdomain python3.9[232676]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546420.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 09:37:36 np0005546420.localdomain useradd[232678]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Dec 05 09:37:36 np0005546420.localdomain useradd[232678]: add 'ceilometer' to group 'libvirt'
Dec 05 09:37:36 np0005546420.localdomain useradd[232678]: add 'ceilometer' to shadow group 'libvirt'
Dec 05 09:37:36 np0005546420.localdomain sudo[232674]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6193 DF PROTO=TCP SPT=59326 DPT=9101 SEQ=641479055 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC50DD90000000001030307) 
Dec 05 09:37:38 np0005546420.localdomain python3.9[232792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:38 np0005546420.localdomain python3.9[232878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764927457.9844444-564-131234609904569/.source.conf _original_basename=ceilometer.conf follow=False checksum=a081cd36784ea0c14c85c5a4c92e4b020883418d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:39 np0005546420.localdomain python3.9[232986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:40 np0005546420.localdomain python3.9[233072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764927459.0714288-564-75038163062879/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:40 np0005546420.localdomain python3.9[233180]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12151 DF PROTO=TCP SPT=51226 DPT=9100 SEQ=3135979442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC51C990000000001030307) 
Dec 05 09:37:41 np0005546420.localdomain python3.9[233266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764927460.164157-564-14671725698950/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:37:41 np0005546420.localdomain podman[233355]: 2025-12-05 09:37:41.51314648 +0000 UTC m=+0.084777614 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 09:37:41 np0005546420.localdomain podman[233355]: 2025-12-05 09:37:41.554911093 +0000 UTC m=+0.126542257 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 09:37:41 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:37:41 np0005546420.localdomain python3.9[233385]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:37:42 np0005546420.localdomain python3.9[233506]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:37:42 np0005546420.localdomain sudo[233579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:37:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12152 DF PROTO=TCP SPT=51226 DPT=9100 SEQ=3135979442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC524990000000001030307) 
Dec 05 09:37:42 np0005546420.localdomain sudo[233579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:37:42 np0005546420.localdomain sudo[233579]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:42 np0005546420.localdomain sudo[233633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:37:42 np0005546420.localdomain sudo[233633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:37:43 np0005546420.localdomain python3.9[233632]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:43 np0005546420.localdomain python3.9[233761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927462.532773-741-33370101746864/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:43 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:43.608 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:37:43 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:37:43.626 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:37:43 np0005546420.localdomain systemd[1]: tmp-crun.Xj2ist.mount: Deactivated successfully.
Dec 05 09:37:43 np0005546420.localdomain podman[233841]: 2025-12-05 09:37:43.751700194 +0000 UTC m=+0.095602818 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1763362218, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:37:43 np0005546420.localdomain podman[233841]: 2025-12-05 09:37:43.881400017 +0000 UTC m=+0.225302611 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, release=1763362218, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:37:44 np0005546420.localdomain python3.9[233944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:44 np0005546420.localdomain sudo[233633]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:44 np0005546420.localdomain sudo[234003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:37:44 np0005546420.localdomain sudo[234003]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:37:44 np0005546420.localdomain sudo[234003]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:44 np0005546420.localdomain sudo[234055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:37:44 np0005546420.localdomain sudo[234055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:37:44 np0005546420.localdomain python3.9[234054]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:45 np0005546420.localdomain sudo[234055]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:45 np0005546420.localdomain python3.9[234203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:45 np0005546420.localdomain sudo[234212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:37:45 np0005546420.localdomain sudo[234212]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:37:45 np0005546420.localdomain sudo[234212]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:46 np0005546420.localdomain python3.9[234315]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927464.707249-741-137673920061509/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12153 DF PROTO=TCP SPT=51226 DPT=9100 SEQ=3135979442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC534590000000001030307) 
Dec 05 09:37:46 np0005546420.localdomain python3.9[234423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:37:47 np0005546420.localdomain podman[234490]: 2025-12-05 09:37:47.52309157 +0000 UTC m=+0.094454664 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 09:37:47 np0005546420.localdomain podman[234490]: 2025-12-05 09:37:47.532268103 +0000 UTC m=+0.103631217 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:37:47 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:37:47 np0005546420.localdomain python3.9[234526]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927466.3989742-741-185020872242016/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:48 np0005546420.localdomain python3.9[234634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65463 DF PROTO=TCP SPT=54538 DPT=9105 SEQ=4006414195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC53C600000000001030307) 
Dec 05 09:37:48 np0005546420.localdomain python3.9[234720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927468.0613241-741-223171339796916/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:49 np0005546420.localdomain python3.9[234828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:50 np0005546420.localdomain python3.9[234914]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927469.1328673-741-200295775752414/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:50 np0005546420.localdomain python3.9[235022]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:51 np0005546420.localdomain python3.9[235108]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927470.2316973-741-270382469844275/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:51 np0005546420.localdomain python3.9[235216]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65465 DF PROTO=TCP SPT=54538 DPT=9105 SEQ=4006414195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC548590000000001030307) 
Dec 05 09:37:52 np0005546420.localdomain python3.9[235302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927471.3404715-741-88550915636725/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:52 np0005546420.localdomain python3.9[235410]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:53 np0005546420.localdomain python3.9[235496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927472.465905-741-152377444174436/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:54 np0005546420.localdomain python3.9[235604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:54 np0005546420.localdomain python3.9[235690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927473.5889482-741-255299207744375/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12154 DF PROTO=TCP SPT=51226 DPT=9100 SEQ=3135979442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC553DA0000000001030307) 
Dec 05 09:37:55 np0005546420.localdomain python3.9[235798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:55 np0005546420.localdomain python3.9[235884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927474.666951-741-44374214783615/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:37:56 np0005546420.localdomain sudo[235992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-spklfayjcvefwhwynujrppnkspoydsqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927476.0426753-1206-22747685284331/AnsiballZ_file.py
Dec 05 09:37:56 np0005546420.localdomain sudo[235992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:56 np0005546420.localdomain python3.9[235994]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:37:56 np0005546420.localdomain sudo[235992]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:57 np0005546420.localdomain sudo[236102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hcaigrdgbpgspolsqaegjjypdmyqajcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927477.258462-1230-110322211086563/AnsiballZ_systemd_service.py
Dec 05 09:37:57 np0005546420.localdomain sudo[236102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:57 np0005546420.localdomain python3.9[236104]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:37:57 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:37:58 np0005546420.localdomain systemd-rc-local-generator[236130]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:37:58 np0005546420.localdomain systemd-sysv-generator[236136]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:37:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:37:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:58 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:37:58 np0005546420.localdomain systemd[1]: Listening on Podman API Socket.
Dec 05 09:37:58 np0005546420.localdomain sudo[236102]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:59 np0005546420.localdomain sudo[236252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hextufrjzqcfahtyhgkqwrospbdrlcov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927478.6852596-1257-259838894057620/AnsiballZ_stat.py
Dec 05 09:37:59 np0005546420.localdomain sudo[236252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:37:59 np0005546420.localdomain python3.9[236254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:37:59 np0005546420.localdomain sudo[236252]: pam_unix(sudo:session): session closed for user root
Dec 05 09:37:59 np0005546420.localdomain sudo[236340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcifukqwzexmvrlpyfgxdjdorsovkkzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927478.6852596-1257-259838894057620/AnsiballZ_copy.py
Dec 05 09:37:59 np0005546420.localdomain sudo[236340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:00 np0005546420.localdomain python3.9[236342]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927478.6852596-1257-259838894057620/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:38:00 np0005546420.localdomain sudo[236340]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:00 np0005546420.localdomain sudo[236395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vijqqnawjvjynhdvxnuiifxbzpjlkyew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927478.6852596-1257-259838894057620/AnsiballZ_stat.py
Dec 05 09:38:00 np0005546420.localdomain sudo[236395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59952 DF PROTO=TCP SPT=57386 DPT=9102 SEQ=2827989612 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC56A190000000001030307) 
Dec 05 09:38:00 np0005546420.localdomain python3.9[236397]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:38:00 np0005546420.localdomain sudo[236395]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:00 np0005546420.localdomain sudo[236483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikfugbswutjsemliaobzndoalbkbwunr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927478.6852596-1257-259838894057620/AnsiballZ_copy.py
Dec 05 09:38:00 np0005546420.localdomain sudo[236483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:01 np0005546420.localdomain python3.9[236485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927478.6852596-1257-259838894057620/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:38:01 np0005546420.localdomain sudo[236483]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:01 np0005546420.localdomain sudo[236593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvuwipeufbxfmoxujdvrtxdxozfcjfmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927481.5555723-1341-173699669158166/AnsiballZ_container_config_data.py
Dec 05 09:38:01 np0005546420.localdomain sudo[236593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:02 np0005546420.localdomain python3.9[236595]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec 05 09:38:02 np0005546420.localdomain sudo[236593]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3123 DF PROTO=TCP SPT=44354 DPT=9882 SEQ=4001749183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC571D90000000001030307) 
Dec 05 09:38:02 np0005546420.localdomain sudo[236703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xepkrlbvcaxnboaiuiervindbycivsiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927482.3899636-1368-126424784039262/AnsiballZ_container_config_hash.py
Dec 05 09:38:02 np0005546420.localdomain sudo[236703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:03 np0005546420.localdomain python3.9[236705]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:38:03 np0005546420.localdomain sudo[236703]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:38:03 np0005546420.localdomain podman[236723]: 2025-12-05 09:38:03.524571515 +0000 UTC m=+0.095230647 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:38:03 np0005546420.localdomain podman[236723]: 2025-12-05 09:38:03.57126431 +0000 UTC m=+0.141923432 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:38:03 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:38:03 np0005546420.localdomain sudo[236831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xecxnravhdqdgxvjvhwbpfzkopmxpqjw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927483.4983273-1398-76761010508722/AnsiballZ_edpm_container_manage.py
Dec 05 09:38:03 np0005546420.localdomain sudo[236831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.043 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.043 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.044 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.044 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.059 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.059 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.060 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.060 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.060 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.061 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.061 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.061 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.061 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.081 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.082 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.082 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:38:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65467 DF PROTO=TCP SPT=54538 DPT=9105 SEQ=4006414195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC577DA0000000001030307) 
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.082 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.083 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:38:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:38:04.084 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:38:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:38:04.085 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:38:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:38:04.085 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:38:04 np0005546420.localdomain python3[236833]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:38:04 np0005546420.localdomain python3[236833]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "343ba269c9fe0a56d7572c8ca328dbce002017c4dd4986f43667971dd03085c2",
                                                                    "Digest": "sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:21:53.58682213Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 505175293,
                                                                    "VirtualSize": 505175293,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:a47016624274f5ebad76019f5a2e465c1737f96caa539b36f90ab8e33592f415",
                                                                              "sha256:38a03f5e96658211fb28e2f87c11ffad531281d1797368f48e6cd4af7ac97c0e"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:14:56.244673147Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:14:56.960273159Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage ceilometer",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:37.588899909Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:41.197123864Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:21:19.680010224Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:21:53.584924649Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-compute && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:21:56.278821402Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.543 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:38:04 np0005546420.localdomain podman[236902]: 2025-12-05 09:38:04.586191858 +0000 UTC m=+0.093744780 container remove fd93facb8c6cc6a46da7e5c7666841a4dbbb8287540c40b3c034e1d7c6ee7fbe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2a14d146ce921397a1b78b68c853c045'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container)
Dec 05 09:38:04 np0005546420.localdomain python3[236833]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute
Dec 05 09:38:04 np0005546420.localdomain podman[236919]: 
Dec 05 09:38:04 np0005546420.localdomain podman[236919]: 2025-12-05 09:38:04.697384919 +0000 UTC m=+0.088867111 container create 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=edpm)
Dec 05 09:38:04 np0005546420.localdomain podman[236919]: 2025-12-05 09:38:04.658526296 +0000 UTC m=+0.050008578 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Dec 05 09:38:04 np0005546420.localdomain python3[236833]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.720 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.721 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=13559MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.721 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.722 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.806 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.806 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:38:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:04.848 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:38:04 np0005546420.localdomain sudo[236831]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:05.322 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:38:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:05.330 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:38:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:05.347 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:38:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:05.349 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:38:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:38:05.349 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:38:05 np0005546420.localdomain sudo[237084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xooyevxyljoanrwupqupgxojyyhdbsdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927485.1147587-1422-73605944020480/AnsiballZ_stat.py
Dec 05 09:38:05 np0005546420.localdomain sudo[237084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:05 np0005546420.localdomain python3.9[237086]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:38:05 np0005546420.localdomain sudo[237084]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:06 np0005546420.localdomain sudo[237196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-plcdnacmkbvglqnqxfrtskowvqmpvacq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927485.8534474-1449-8709998619270/AnsiballZ_file.py
Dec 05 09:38:06 np0005546420.localdomain sudo[237196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:06 np0005546420.localdomain python3.9[237198]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:06 np0005546420.localdomain sudo[237196]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:06 np0005546420.localdomain sudo[237305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qsbooqmlyfvzlcqacldilmdynragawcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927486.4008822-1449-126098620837934/AnsiballZ_copy.py
Dec 05 09:38:06 np0005546420.localdomain sudo[237305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:06 np0005546420.localdomain python3.9[237307]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927486.4008822-1449-126098620837934/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:06 np0005546420.localdomain sudo[237305]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59226 DF PROTO=TCP SPT=48458 DPT=9101 SEQ=2390852261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC583D90000000001030307) 
Dec 05 09:38:07 np0005546420.localdomain sudo[237360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vehzbztwnipetvhrqmeocuqwgpmxqzyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927486.4008822-1449-126098620837934/AnsiballZ_systemd.py
Dec 05 09:38:07 np0005546420.localdomain sudo[237360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:07 np0005546420.localdomain python3.9[237362]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:38:07 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:38:07 np0005546420.localdomain systemd-rc-local-generator[237385]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:38:07 np0005546420.localdomain systemd-sysv-generator[237389]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:38:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:38:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:07 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:08 np0005546420.localdomain sudo[237360]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:08 np0005546420.localdomain sudo[237450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ilkgzstvpdguislthzvmfexlvecbbpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927486.4008822-1449-126098620837934/AnsiballZ_systemd.py
Dec 05 09:38:08 np0005546420.localdomain sudo[237450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:09 np0005546420.localdomain python3.9[237452]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:38:09 np0005546420.localdomain systemd-rc-local-generator[237481]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:38:09 np0005546420.localdomain systemd-sysv-generator[237486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: tmp-crun.hzFSwl.mount: Deactivated successfully.
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:38:09 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7185e6bfdb75bce1b23c4d5fa5e4fd047de898ab90435918ab1ee38a3c967048/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 05 09:38:09 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7185e6bfdb75bce1b23c4d5fa5e4fd047de898ab90435918ab1ee38a3c967048/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:38:09 np0005546420.localdomain podman[237493]: 2025-12-05 09:38:09.640003038 +0000 UTC m=+0.162277081 container init 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: + sudo -E kolla_set_configs
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: sudo: unable to send audit message: Operation not permitted
Dec 05 09:38:09 np0005546420.localdomain sudo[237515]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 09:38:09 np0005546420.localdomain sudo[237515]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 09:38:09 np0005546420.localdomain podman[237493]: 2025-12-05 09:38:09.677806798 +0000 UTC m=+0.200080781 container start 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 09:38:09 np0005546420.localdomain podman[237493]: ceilometer_agent_compute
Dec 05 09:38:09 np0005546420.localdomain sudo[237515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: Started ceilometer_agent_compute container.
Dec 05 09:38:09 np0005546420.localdomain sudo[237450]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Validating config file
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Copying service configuration files
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: INFO:__main__:Writing out command to execute
Dec 05 09:38:09 np0005546420.localdomain sudo[237515]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: ++ cat /run_command
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: + ARGS=
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: + sudo kolla_copy_cacerts
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: sudo: unable to send audit message: Operation not permitted
Dec 05 09:38:09 np0005546420.localdomain sudo[237531]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 09:38:09 np0005546420.localdomain sudo[237531]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 09:38:09 np0005546420.localdomain sudo[237531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 09:38:09 np0005546420.localdomain sudo[237531]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: + [[ ! -n '' ]]
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: + . kolla_extend_start
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: + umask 0022
Dec 05 09:38:09 np0005546420.localdomain ceilometer_agent_compute[237509]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 05 09:38:09 np0005546420.localdomain podman[237517]: 2025-12-05 09:38:09.790722971 +0000 UTC m=+0.104049800 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 05 09:38:09 np0005546420.localdomain podman[237517]: 2025-12-05 09:38:09.824557878 +0000 UTC m=+0.137884717 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 05 09:38:09 np0005546420.localdomain podman[237517]: unhealthy
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:38:09 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Failed with result 'exit-code'.
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.648 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.649 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.649 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.649 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.649 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.649 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.650 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.650 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.650 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.650 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.650 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.650 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.650 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.651 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.651 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.651 2 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.651 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.651 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.651 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.652 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.652 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.652 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.652 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.652 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.652 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.652 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.652 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.652 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.653 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.653 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.653 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.653 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.653 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.653 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.653 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.653 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.654 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.654 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.654 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.654 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.654 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.654 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.654 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.654 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.655 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.655 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.655 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.655 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.655 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.655 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.655 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.655 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.656 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.656 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.656 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.656 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.656 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.656 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.656 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.656 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.657 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.657 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.657 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.657 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.657 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.657 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.657 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.657 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.658 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.658 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.658 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.659 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.659 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.659 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.659 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.659 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.659 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.659 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.659 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.660 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.660 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.660 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.660 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.660 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.660 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.660 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.660 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.661 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.661 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.661 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.661 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.661 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.661 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.661 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.661 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.662 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.662 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.662 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.662 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.662 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.662 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.662 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.663 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.663 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.663 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.663 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.663 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.663 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.663 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.664 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.664 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.664 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.664 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.664 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.664 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.664 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.664 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.665 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.665 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.665 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.665 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.665 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.665 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.665 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.665 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.666 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.666 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.666 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.666 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.666 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.666 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.666 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.666 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.667 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.667 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.667 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.667 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.667 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.667 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.667 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.667 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.667 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.668 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.668 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.668 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.668 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.668 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.668 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.668 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.668 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.669 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.669 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.669 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.669 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.669 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.669 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.669 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.669 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.670 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.670 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
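The row of asterisks above closes the full option dump that oslo.config emits at service startup when log_options is enabled: ConfigOpts.log_opt_values() walks every registered option, logs one "name = value" line, and masks anything declared secret=True as '****' (as with publisher.telemetry_secret and the rgw_admin_credentials keys above). A minimal sketch of that mechanism, with illustrative option names rather than ceilometer's real registrations:

import logging

from oslo_config import cfg

LOG = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)

# Illustrative options only; ceilometer registers a much larger set.
cfg.CONF.register_opts(
    [
        cfg.IntOpt('batch_size', default=50),
        cfg.StrOpt('telemetry_secret', secret=True),  # logged as ****
    ],
    group='demo',
)
cfg.CONF([])  # parse an empty command line instead of sys.argv

# Emits the opening banner, one line per option, and the closing row of
# asterisks -- the cfg.py call sites (2589/2602/2609/2613) seen in this log.
cfg.CONF.log_opt_values(LOG, logging.DEBUG)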
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.689 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.691 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.692 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
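The three INFO lines above show the polling manager scanning polling.pollsters_definitions_dirs and finding nothing, so only the built-in compute pollsters will run on this node. For reference, a dynamic pollster definition dropped into /etc/ceilometer/pollsters.d would look roughly like the following; the shape follows the upstream Ceilometer dynamic-pollster documentation, and every value here is hypothetical:

# /etc/ceilometer/pollsters.d/example.yaml -- hypothetical; no such file
# exists on this host, hence "No dynamic pollsters found" above.
---
- name: "dynamic.example.requests"   # meter name (made up)
  sample_type: "gauge"
  unit: "request"
  value_attribute: "request_count"   # JSON field to read from the response
  url_path: "v1/stats"               # hypothetical API path
  endpoint_type: "example-service"   # keystone service type to resolve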
Dec 05 09:38:10 np0005546420.localdomain sudo[237652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gltvstcmfheeslrfgydrrdxvvioramlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927489.9160407-1521-107098566767352/AnsiballZ_systemd.py
Dec 05 09:38:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34324 DF PROTO=TCP SPT=58484 DPT=9100 SEQ=2000896188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC591D90000000001030307) 
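The kernel line above is netfilter LOG-target output: a TCP SYN from 192.168.122.10 to port 9100 (the conventional node_exporter port) arriving on br-ex was logged with the "DROPPING:" prefix, and given that prefix is presumably dropped by a companion rule. A rule pair of roughly this shape would produce such entries; this is a reconstruction from the message fields (IN=br-ex, PROTO=TCP, DPT=9100), not the host's actual ruleset:

# Plausible reconstruction only -- inferred from the logged packet fields.
iptables -A INPUT -i br-ex -p tcp --dport 9100 -j LOG --log-prefix "DROPPING: "
iptables -A INPUT -i br-ex -p tcp --dport 9100 -j DROP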
Dec 05 09:38:10 np0005546420.localdomain sudo[237652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
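The sudo pair around the kernel line is the signature of an Ansible become escalation from the zuul user: the BECOME-SUCCESS echo plus a wrapped AnsiballZ_systemd.py payload is how Ansible ships its systemd module to the host and runs it as root. A task of roughly this shape would leave these entries; the unit name is hypothetical, as the playbook itself is not visible in this log:

# Hypothetical task shape; only the resulting sudo lines appear above.
- name: Manage a service from the Zuul job
  become: true
  ansible.builtin.systemd:
    name: some-service        # hypothetical unit name
    state: restarted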
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.795 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
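Worker 12 then opens its own hypervisor connection to qemu:///system via ceilometer.compute.virt.libvirt.utils.new_libvirt_connection(), consistent with hypervisor_inspector = libvirt and the empty libvirt_uri in the dump below. An equivalent read-only connection with the libvirt Python bindings, as a minimal illustration rather than the agent's actual code:

# Minimal illustration with libvirt-python; the agent's real helper is
# new_libvirt_connection() in ceilometer/compute/virt/libvirt/utils.py.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')  # read-only suffices for metering
try:
    for dom in conn.listAllDomains():
        print(dom.UUIDString(), dom.name(), dom.isActive())
finally:
    conn.close()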
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.869 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.869 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.870 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.870 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.870 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.870 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.870 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.870 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.870 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.870 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.870 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.870 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.871 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.871 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.871 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.871 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.871 12 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.871 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.871 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.871 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.871 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.871 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.872 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.873 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.874 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.875 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.878 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.878 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.878 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.878 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.878 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.878 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.878 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.878 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.878 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.878 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.879 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.880 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.881 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.884 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.885 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.886 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.887 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.888 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
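The block of DEBUG lines ending at the banner above is oslo.config's standard option dump: cotyledon's oslo_config_glue calls ConfigOpts.log_opt_values(), which walks every registered option and logs one "name = value" line from cfg.py, masking secret options such as oslo_messaging_notifications.transport_url as ****. A minimal stand-alone sketch of the same mechanism (the group and option names below are illustrative, not ceilometer's actual registration code):

    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    # Register a couple of options mirroring the [service_credentials]
    # section seen in the dump above (names chosen for illustration).
    CONF.register_opts(
        [cfg.StrOpt('username'), cfg.StrOpt('region_name')],
        group='service_credentials')

    logging.basicConfig(level=logging.DEBUG)
    CONF([])  # parse an empty command line; defaults remain None
    # Emits the asterisk banner, one "name = value" line per option, and a
    # closing banner, just like the lines above.
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)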
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.889 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.892 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
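The dict logged by ceilometer.agent is the parsed contents of the polling definition file (copied to /etc/ceilometer/polling.yaml later in this log). A sketch of an equivalent YAML file, round-tripped with PyYAML for clarity; only the fields visible in the line above are assumed:

    import yaml

    # disk.* and network.* are wildcard meter patterns; interval is seconds.
    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    """

    cfg = yaml.safe_load(POLLING_YAML)
    assert cfg == {'sources': [{'name': 'pollsters', 'interval': 120,
                                'meters': ['power.state', 'cpu', 'memory.usage',
                                           'disk.*', 'network.*']}]}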
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.901 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.904 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.904 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.905 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.905 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.905 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.905 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.905 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.906 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.906 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.906 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.906 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.906 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.907 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.907 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.907 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.907 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.907 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.907 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.908 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.908 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.908 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.908 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.908 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.909 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:10 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:10.909 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
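Every "Skip pollster" line above means instance discovery returned nothing this polling cycle: the agent connected to qemu:///system (logged just before this run) and found no domains, so each instance-scoped meter from polling.yaml is skipped rather than sampled. A sketch of the underlying check, assuming the libvirt-python bindings are available; this shows only the enumeration idea, not ceilometer's actual discovery code:

    import libvirt

    # Read-only connection to the same URI the agent logs above.
    conn = libvirt.openReadOnly('qemu:///system')
    domains = conn.listAllDomains()
    if not domains:
        # With no instances, pollsters such as cpu or memory.usage have no
        # resources to sample, matching the skip messages in this cycle.
        print('no instances; skipping instance-scoped pollsters')
    conn.close()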
Dec 05 09:38:11 np0005546420.localdomain python3.9[237654]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: Stopping ceilometer_agent_compute container...
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:11.181 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:11.283 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:11.284 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:11.284 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237509]: 2025-12-05 09:38:11.294 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
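The four cotyledon lines above are its normal graceful-stop sequence: the master process (pid 2 in the container) catches SIGTERM, re-sends it to the service child (AgentManager, pid 12), waits for it to terminate, then reports the shutdown finished. A minimal sketch of the handler pattern involved (not cotyledon's actual implementation):

    import signal
    import sys

    def _graceful_exit(signum, frame):
        # cotyledon logs "Caught SIGTERM signal, graceful exiting" at this
        # point, then forwards the signal to each child before exiting.
        sys.exit(0)

    signal.signal(signal.SIGTERM, _graceful_exit)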
Dec 05 09:38:11 np0005546420.localdomain virtqemud[229316]: End of file while reading data: Input/output error
Dec 05 09:38:11 np0005546420.localdomain virtqemud[229316]: End of file while reading data: Input/output error
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: libpod-94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.scope: Deactivated successfully.
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: libpod-94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.scope: Consumed 1.428s CPU time.
Dec 05 09:38:11 np0005546420.localdomain podman[237661]: 2025-12-05 09:38:11.462594614 +0000 UTC m=+0.350680671 container died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.timer: Deactivated successfully.
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110-userdata-shm.mount: Deactivated successfully.
Dec 05 09:38:11 np0005546420.localdomain podman[237661]: 2025-12-05 09:38:11.541832445 +0000 UTC m=+0.429918472 container cleanup 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:38:11 np0005546420.localdomain podman[237661]: ceilometer_agent_compute
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7185e6bfdb75bce1b23c4d5fa5e4fd047de898ab90435918ab1ee38a3c967048-merged.mount: Deactivated successfully.
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:38:11 np0005546420.localdomain podman[237691]: 2025-12-05 09:38:11.657437731 +0000 UTC m=+0.080948185 container cleanup 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute)
Dec 05 09:38:11 np0005546420.localdomain podman[237691]: ceilometer_agent_compute
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: Stopped ceilometer_agent_compute container.
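The stop that just completed was triggered by the ansible systemd task logged at 09:38:11 (state=restarted): systemd stops the podman-backed unit, podman reports the container died and runs cleanup, and the start half follows immediately below. The equivalent manual action, sketched as a plain subprocess call (unit name taken from the log):

    import subprocess

    # What ansible-ansible.builtin.systemd with state=restarted amounts to.
    subprocess.run(
        ['systemctl', 'restart', 'edpm_ceilometer_agent_compute.service'],
        check=True)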
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Dec 05 09:38:11 np0005546420.localdomain podman[237703]: 2025-12-05 09:38:11.7501613 +0000 UTC m=+0.091348817 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:38:11 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7185e6bfdb75bce1b23c4d5fa5e4fd047de898ab90435918ab1ee38a3c967048/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 05 09:38:11 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7185e6bfdb75bce1b23c4d5fa5e4fd047de898ab90435918ab1ee38a3c967048/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 05 09:38:11 np0005546420.localdomain podman[237703]: 2025-12-05 09:38:11.833443417 +0000 UTC m=+0.174630924 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:38:11 np0005546420.localdomain podman[237712]: 2025-12-05 09:38:11.8551934 +0000 UTC m=+0.159612939 container init 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: + sudo -E kolla_set_configs
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:38:11 np0005546420.localdomain sudo[237749]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 09:38:11 np0005546420.localdomain sudo[237749]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 09:38:11 np0005546420.localdomain sudo[237749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 09:38:11 np0005546420.localdomain podman[237712]: 2025-12-05 09:38:11.884337521 +0000 UTC m=+0.188757050 container start 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 09:38:11 np0005546420.localdomain podman[237712]: ceilometer_agent_compute
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: sudo: unable to send audit message: Operation not permitted
Dec 05 09:38:11 np0005546420.localdomain systemd[1]: Started ceilometer_agent_compute container.
Dec 05 09:38:11 np0005546420.localdomain sudo[237652]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Validating config file
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Copying service configuration files
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: INFO:__main__:Writing out command to execute
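The INFO:__main__ lines above are kolla_set_configs applying the COPY_ALWAYS strategy: read /var/lib/kolla/config_files/config.json, delete and re-copy each listed file, set its permissions, then write the service command to /run_command (read back by the `cat /run_command` step below). A condensed, illustrative reimplementation; field names follow the usual kolla config.json convention rather than this container's exact file:

    import json
    import os
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    for item in cfg.get('config_files', []):
        dest = item['dest']
        if os.path.exists(dest):
            print(f'Deleting {dest}')               # "Deleting ..." lines
            os.remove(dest)
        print(f"Copying {item['source']} to {dest}")
        shutil.copy(item['source'], dest)
        print(f'Setting permission for {dest}')
        os.chmod(dest, int(item.get('perm', '0644'), 8))

    with open('/run_command', 'w') as f:             # "Writing out command"
        f.write(cfg['command'])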
Dec 05 09:38:11 np0005546420.localdomain sudo[237749]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: ++ cat /run_command
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: + ARGS=
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: + sudo kolla_copy_cacerts
Dec 05 09:38:11 np0005546420.localdomain sudo[237774]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: sudo: unable to send audit message: Operation not permitted
Dec 05 09:38:11 np0005546420.localdomain sudo[237774]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 09:38:11 np0005546420.localdomain sudo[237774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 09:38:11 np0005546420.localdomain sudo[237774]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: + [[ ! -n '' ]]
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: + . kolla_extend_start
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: + umask 0022
Dec 05 09:38:11 np0005546420.localdomain ceilometer_agent_compute[237741]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 05 09:38:11 np0005546420.localdomain podman[237751]: 2025-12-05 09:38:11.988030949 +0000 UTC m=+0.097978662 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 09:38:12 np0005546420.localdomain podman[237751]: 2025-12-05 09:38:12.016843321 +0000 UTC m=+0.126791024 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 09:38:12 np0005546420.localdomain podman[237751]: unhealthy
Dec 05 09:38:12 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:38:12 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Failed with result 'exit-code'.
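The 'unhealthy' verdict and the failed transient unit above come from the periodic `podman healthcheck run <container-id>` units systemd starts for this container: the probe ran while the freshly restarted agent was still in health_status=starting, exited non-zero, and systemd recorded status=1/FAILURE. A sketch of the same probe, with the container ID shortened to a placeholder prefix:

    import subprocess

    CID = '94fe534dba23'  # placeholder prefix of the container ID in the log
    result = subprocess.run(['podman', 'healthcheck', 'run', CID])
    # podman exits 0 when the check passes; on failure it exits non-zero,
    # printing "unhealthy" as seen in the log above.
    print('healthy' if result.returncode == 0 else 'unhealthy')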
Dec 05 09:38:12 np0005546420.localdomain sudo[237880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-juznhilwyduhjlvufnjufxgfhmthdiqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927492.1761985-1545-174475274837775/AnsiballZ_stat.py
Dec 05 09:38:12 np0005546420.localdomain sudo[237880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:12 np0005546420.localdomain python3.9[237882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:38:12 np0005546420.localdomain sudo[237880]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.738 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.738 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.739 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.739 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.739 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.739 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.739 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.739 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.739 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.739 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.739 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.739 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.740 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.741 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.742 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.743 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.744 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.745 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.746 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.747 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.748 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.749 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.750 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.751 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.752 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.752 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.752 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.752 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.752 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.752 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.752 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.771 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.772 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.773 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.786 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 05 09:38:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34325 DF PROTO=TCP SPT=58484 DPT=9100 SEQ=2000896188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC599DA0000000001030307) 
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.928 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.928 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.928 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.931 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
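[editor's note] The block ending at the asterisk separator above is oslo.config's log_opt_values() dump, emitted once at service start-up: every registered option is printed as "group.option = value", and options declared with secret=True (passwords, transport URLs, access keys) render as **** instead of their value. A minimal sketch of how such a dump is produced with the real oslo.config API; the option names below are invented for illustration:

    import logging
    from oslo_config import cfg

    opts = [
        cfg.IntOpt('batch_size', default=50, help='Samples per polling batch.'),
        cfg.StrOpt('telemetry_secret', secret=True,
                   help='secret=True is what renders as **** in the dump.'),
    ]
    conf = cfg.ConfigOpts()
    conf.register_opts(opts, group='polling')
    conf(args=[], project='demo')

    logging.basicConfig(level=logging.DEBUG)
    # Prints one "group.option = value" line per registered option,
    # exactly the shape of the dump above.
    conf.log_opt_values(logging.getLogger(__name__), logging.DEBUG)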
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.931 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.935 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
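[editor's note] The "Config file" dict logged above is the parsed contents of the agent's polling.yaml (polling.cfg_file = polling.yaml in the dump above; by convention the file sits under /etc/ceilometer/). A sketch of the equivalent YAML, round-tripped through PyYAML to show it matches the logged structure:

    import textwrap
    import yaml  # PyYAML

    POLLING_YAML = textwrap.dedent("""\
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    """)

    assert yaml.safe_load(POLLING_YAML) == {
        'sources': [{'name': 'pollsters', 'interval': 120,
                     'meters': ['power.state', 'cpu', 'memory.usage',
                                'disk.*', 'network.*']}]}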
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.943 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
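[editor's note] qemu:///system is the local libvirt system instance; the compute pollsters enumerate its domains each cycle. On this freshly provisioned node the domain list is empty, which is why every instance-scoped pollster below is skipped. A sketch using the real libvirt Python binding:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    domains = conn.listAllDomains()
    print([dom.name() for dom in domains])  # [] here -> nothing to meter
    conn.close()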
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:38:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:38:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
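[editor's note] The 25 skips above all come from the same guard in the polling manager: discovery returned an empty resource list, so the pollster is skipped for this cycle instead of publishing empty samples. An illustrative shape of that loop (a sketch, not ceilometer's actual code):

    def poll_and_notify(pollsters, discover, publish=print):
        for name, pollster in pollsters.items():
            resources = discover(pollster)   # e.g. the libvirt domain list
            if not resources:
                # Produces the "Skip pollster <name>, no resources found
                # this cycle" lines seen above.
                print(f'Skip pollster {name}, no resources found this cycle')
                continue
            for sample in pollster.get_samples(resources):
                publish(sample)              # hypothetical publisher hook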
Dec 05 09:38:13 np0005546420.localdomain sudo[237974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bulzvoeeqiqmlmgqrmdjjpxlnueuprea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927492.1761985-1545-174475274837775/AnsiballZ_copy.py
Dec 05 09:38:13 np0005546420.localdomain sudo[237974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:13 np0005546420.localdomain python3.9[237976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927492.1761985-1545-174475274837775/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:38:13 np0005546420.localdomain sudo[237974]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:13 np0005546420.localdomain sudo[238084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfympkekqnlpekuarfcbtvorxkvxmprg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927493.651117-1596-199910028430638/AnsiballZ_container_config_data.py
Dec 05 09:38:13 np0005546420.localdomain sudo[238084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:14 np0005546420.localdomain python3.9[238086]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec 05 09:38:14 np0005546420.localdomain sudo[238084]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:14 np0005546420.localdomain sudo[238194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jltcfmdvofrraoevteqjrkjpugwqpjsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927494.444473-1623-197946707934475/AnsiballZ_container_config_hash.py
Dec 05 09:38:14 np0005546420.localdomain sudo[238194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:14 np0005546420.localdomain python3.9[238196]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:38:15 np0005546420.localdomain sudo[238194]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:15 np0005546420.localdomain sudo[238304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iuiowfmkrzebzewkuczyuhxtkkqcoswq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927495.3663146-1653-210097478250132/AnsiballZ_edpm_container_manage.py
Dec 05 09:38:15 np0005546420.localdomain sudo[238304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:15 np0005546420.localdomain python3[238306]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:38:16 np0005546420.localdomain podman[238344]: 2025-12-05 09:38:16.191079714 +0000 UTC m=+0.067433688 container create cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm)
Dec 05 09:38:16 np0005546420.localdomain podman[238344]: 2025-12-05 09:38:16.159308765 +0000 UTC m=+0.035662769 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Dec 05 09:38:16 np0005546420.localdomain python3[238306]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
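[editor's note] The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage flattens the JSON config_data into a podman create invocation: net maps to --network, ports to --publish, volumes to --volume, environment to --env, and the 'command' list is appended after the image as the container's own arguments. A rough sketch of that mapping (an illustrative subset, not the module's real code):

    def podman_create_args(name, cfg):
        args = ['podman', 'create', '--name', name, '--log-driver', 'journald']
        if cfg.get('privileged'):
            args.append('--privileged=True')
        if cfg.get('net'):
            args += ['--network', cfg['net']]
        if cfg.get('user'):
            args += ['--user', cfg['user']]
        for port in cfg.get('ports', []):
            args += ['--publish', port]
        for key, val in cfg.get('environment', {}).items():
            args += ['--env', f'{key}={val}']
        for vol in cfg.get('volumes', []):
            args += ['--volume', vol]
        args.append(cfg['image'])
        args += cfg.get('command', [])   # becomes node_exporter's flags
        return args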
Dec 05 09:38:16 np0005546420.localdomain sudo[238304]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34326 DF PROTO=TCP SPT=58484 DPT=9100 SEQ=2000896188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC5A9990000000001030307) 
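[editor's note] The DROPPING: prefix is the log-prefix of a netfilter LOG rule on this host: an inbound SYN from 192.168.122.10 to TCP port 9100 (the node_exporter port just published above) is dropped on br-ex before the firewall has been opened for scraping. The KEY=VALUE payload is easy to dissect; a small parser sketch over an abbreviated copy of the line:

    import re

    LINE = ('DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b '
            'SRC=192.168.122.10 DST=192.168.122.107 PROTO=TCP '
            'SPT=58484 DPT=9100')

    def parse_netfilter(line):
        # Collect KEY=VALUE tokens; empty values (OUT=) and bare flags
        # (SYN, DF) simply don't match the pattern and are skipped.
        return dict(m.groups() for m in re.finditer(r'(\w+)=(\S+)', line))

    fields = parse_netfilter(LINE)
    print(fields['SRC'], '->', fields['DST'], 'dport', fields['DPT'])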
Dec 05 09:38:16 np0005546420.localdomain sudo[238489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwefusynzasckeanpkhwoimsvhwqsjwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927496.5715377-1677-206070615342581/AnsiballZ_stat.py
Dec 05 09:38:16 np0005546420.localdomain sudo[238489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:17 np0005546420.localdomain python3.9[238491]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:38:17 np0005546420.localdomain sudo[238489]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:17 np0005546420.localdomain sudo[238601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxbgvvyjifjqauxjgwjvoazpgvrdoeam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927497.3952234-1704-12953706379585/AnsiballZ_file.py
Dec 05 09:38:17 np0005546420.localdomain sudo[238601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:38:17 np0005546420.localdomain podman[238604]: 2025-12-05 09:38:17.781027456 +0000 UTC m=+0.089508745 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 05 09:38:17 np0005546420.localdomain podman[238604]: 2025-12-05 09:38:17.788884226 +0000 UTC m=+0.097365575 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:38:17 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
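[editor's note] The three lines above are one pass of podman's systemd-driven healthcheck cycle: a transient unit runs "podman healthcheck run <id>", the health command execs inside the container (the exec_died event is that probe process exiting), the result is recorded as health_status=healthy, and the transient unit deactivates. The same check can be run by hand; exit code 0 means healthy:

    import subprocess

    # Same command the transient systemd unit wraps.
    res = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
    print('healthy' if res.returncode == 0 else 'unhealthy')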
Dec 05 09:38:17 np0005546420.localdomain python3.9[238603]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:17 np0005546420.localdomain sudo[238601]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:18 np0005546420.localdomain sudo[238728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dbgbhukzbcysvjkchuikqiyhwqerhrpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927497.9605236-1704-17346110520339/AnsiballZ_copy.py
Dec 05 09:38:18 np0005546420.localdomain sudo[238728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:18 np0005546420.localdomain python3.9[238730]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927497.9605236-1704-17346110520339/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:18 np0005546420.localdomain sudo[238728]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24435 DF PROTO=TCP SPT=45920 DPT=9105 SEQ=2784548727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC5B1910000000001030307) 
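[editor's note] The DROPPING line above is netfilter LOG output: a TCP SYN from 192.168.122.10 to port 9105 arriving on br-ex was logged with that prefix and, judging by the prefix text, dropped. The actual ruleset is not shown anywhere in this log; a hypothetical iptables rule pair that would produce exactly this prefix looks like:

    # hypothetical rule pair; the real chain and match criteria on this host are not in the log
    iptables -A INPUT -i br-ex -p tcp --syn -j LOG --log-prefix "DROPPING: "
    iptables -A INPUT -i br-ex -p tcp --syn -j DROP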
Dec 05 09:38:19 np0005546420.localdomain sudo[238783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-klbqaplohitkcdzvjrogafzrbhnoolqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927497.9605236-1704-17346110520339/AnsiballZ_systemd.py
Dec 05 09:38:19 np0005546420.localdomain sudo[238783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:19 np0005546420.localdomain python3.9[238785]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:38:19 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:38:19 np0005546420.localdomain systemd-rc-local-generator[238811]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:38:19 np0005546420.localdomain systemd-sysv-generator[238815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:38:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:38:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
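[editor's note] The "Failed to parse service type, ignoring: notify-reload" warnings above are version skew, not unit-file bugs: Type=notify-reload was introduced in systemd 253, and the systemd shipped on EL9 predates it, so the unknown type is ignored and the unit falls back to its otherwise-effective type. The type systemd actually applied can be inspected directly:

    # Shows the Type= systemd applied after ignoring notify-reload
    systemctl show virtqemud.service -p Type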
Dec 05 09:38:19 np0005546420.localdomain sudo[238783]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:20 np0005546420.localdomain sudo[238873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mhpzbemrupwfaknnyuvswjdstliddtun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927497.9605236-1704-17346110520339/AnsiballZ_systemd.py
Dec 05 09:38:20 np0005546420.localdomain sudo[238873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:20 np0005546420.localdomain python3.9[238875]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:38:20 np0005546420.localdomain systemd-rc-local-generator[238901]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:38:20 np0005546420.localdomain systemd-sysv-generator[238905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:20 np0005546420.localdomain systemd[1]: Starting node_exporter container...
Dec 05 09:38:21 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:38:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:38:21 np0005546420.localdomain podman[238916]: 2025-12-05 09:38:21.114310472 +0000 UTC m=+0.156258792 container init cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.134Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.134Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.134Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.135Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.135Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.135Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.135Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.135Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=arp
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=bcache
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=bonding
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=cpu
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=edac
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=filefd
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=netclass
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=netdev
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=netstat
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=nfs
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=nvme
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=softnet
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=systemd
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=xfs
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.136Z caller=node_exporter.go:117 level=info collector=zfs
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.137Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 05 09:38:21 np0005546420.localdomain node_exporter[238930]: ts=2025-12-05T09:38:21.137Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
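[editor's note] With the listener up on :9100 and TLS disabled, the endpoint can be scraped over plain HTTP; per the unit-include flag logged above, the systemd collector only reports units matching (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. A quick local check, assuming curl is available on the host:

    # Plain-HTTP scrape; TLS is disabled per the tls_config line above
    curl -s http://localhost:9100/metrics | grep '^node_systemd_unit_state' | head -n 5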
Dec 05 09:38:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:38:21 np0005546420.localdomain podman[238916]: 2025-12-05 09:38:21.152416908 +0000 UTC m=+0.194365228 container start cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:38:21 np0005546420.localdomain podman[238916]: node_exporter
Dec 05 09:38:21 np0005546420.localdomain systemd[1]: Started node_exporter container.
Dec 05 09:38:21 np0005546420.localdomain sudo[238873]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:21 np0005546420.localdomain podman[238939]: 2025-12-05 09:38:21.244515984 +0000 UTC m=+0.086958444 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:38:21 np0005546420.localdomain podman[238939]: 2025-12-05 09:38:21.258520095 +0000 UTC m=+0.100962595 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:38:21 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:38:21 np0005546420.localdomain sudo[239069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqjtzdruplddtmzdsinddcranxzahyfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927501.3937743-1776-78677764461251/AnsiballZ_systemd.py
Dec 05 09:38:21 np0005546420.localdomain sudo[239069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24437 DF PROTO=TCP SPT=45920 DPT=9105 SEQ=2784548727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC5BD990000000001030307) 
Dec 05 09:38:21 np0005546420.localdomain python3.9[239071]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: Stopping node_exporter container...
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: libpod-cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.scope: Deactivated successfully.
Dec 05 09:38:23 np0005546420.localdomain podman[239075]: 2025-12-05 09:38:23.139561871 +0000 UTC m=+0.079559446 container died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.timer: Deactivated successfully.
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a-userdata-shm.mount: Deactivated successfully.
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cc32bd3c0cf5c06fba878f3bd745bd0f11694b4079b45742f0e3f66176dc31bc-merged.mount: Deactivated successfully.
Dec 05 09:38:23 np0005546420.localdomain podman[239075]: 2025-12-05 09:38:23.181540905 +0000 UTC m=+0.121538470 container cleanup cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:38:23 np0005546420.localdomain podman[239075]: node_exporter
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 09:38:23 np0005546420.localdomain podman[239101]: 2025-12-05 09:38:23.285458692 +0000 UTC m=+0.070516614 container cleanup cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:38:23 np0005546420.localdomain podman[239101]: node_exporter
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: Stopped node_exporter container.
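[editor's note] Note the ordering above: the stop requested by the Ansible restart causes the unit's main process to exit with status 2, which systemd records as a failure (INVALIDARGUMENT is simply systemd's name for exit code 2) before the unit is started again below. That this was a commanded restart rather than a crash can be confirmed from the unit's own journal window:

    # Unit-scoped journal around the event; "Stopping node_exporter container..." precedes the failure
    journalctl -u edpm_node_exporter.service --since 09:38:20 --until 09:38:25
    systemctl status edpm_node_exporter.service --no-pager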
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: Starting node_exporter container...
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:38:23 np0005546420.localdomain podman[239114]: 2025-12-05 09:38:23.445879469 +0000 UTC m=+0.129330859 container init cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.463Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.463Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.463Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.463Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.463Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.463Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.464Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.464Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.464Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=arp
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=bcache
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=bonding
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=cpu
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=edac
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=filefd
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=netclass
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=netdev
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=netstat
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=nfs
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=nvme
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=softnet
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=systemd
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=xfs
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.465Z caller=node_exporter.go:117 level=info collector=zfs
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.466Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 05 09:38:23 np0005546420.localdomain node_exporter[239128]: ts=2025-12-05T09:38:23.466Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:38:23 np0005546420.localdomain podman[239114]: 2025-12-05 09:38:23.480107297 +0000 UTC m=+0.163558697 container start cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:38:23 np0005546420.localdomain podman[239114]: node_exporter
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: Started node_exporter container.
Dec 05 09:38:23 np0005546420.localdomain sudo[239069]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:23 np0005546420.localdomain podman[239137]: 2025-12-05 09:38:23.575044461 +0000 UTC m=+0.089309838 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:38:23 np0005546420.localdomain podman[239137]: 2025-12-05 09:38:23.584141667 +0000 UTC m=+0.098407014 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:38:23 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:38:24 np0005546420.localdomain sudo[239266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgjtkjhwfmmqxpyhfwolsgipgchpfnkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927503.6988416-1800-182876318039906/AnsiballZ_stat.py
Dec 05 09:38:24 np0005546420.localdomain sudo[239266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:24 np0005546420.localdomain python3.9[239268]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:38:24 np0005546420.localdomain sudo[239266]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:24 np0005546420.localdomain sudo[239354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izfzoqnedatixjtzkljwtewebzzlrqwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927503.6988416-1800-182876318039906/AnsiballZ_copy.py
Dec 05 09:38:24 np0005546420.localdomain sudo[239354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:24 np0005546420.localdomain python3.9[239356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927503.6988416-1800-182876318039906/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:38:24 np0005546420.localdomain sudo[239354]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34327 DF PROTO=TCP SPT=58484 DPT=9100 SEQ=2000896188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC5C9D90000000001030307) 
Dec 05 09:38:25 np0005546420.localdomain sudo[239464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-govhcmnkbxefsqtarewrumtbhdmmlgfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927505.220019-1851-7745057283264/AnsiballZ_container_config_data.py
Dec 05 09:38:25 np0005546420.localdomain sudo[239464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:25 np0005546420.localdomain python3.9[239466]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec 05 09:38:25 np0005546420.localdomain sudo[239464]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:26 np0005546420.localdomain sudo[239574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zykqiafgrisnzsnrtvfytxxgpiibfmco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927505.9763408-1878-209748890834529/AnsiballZ_container_config_hash.py
Dec 05 09:38:26 np0005546420.localdomain sudo[239574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:26 np0005546420.localdomain python3.9[239576]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:38:26 np0005546420.localdomain sudo[239574]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:27 np0005546420.localdomain sudo[239684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-retixvphxgobirffzbmmncgxbdzwglel ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927506.819359-1908-207365987608253/AnsiballZ_edpm_container_manage.py
Dec 05 09:38:27 np0005546420.localdomain sudo[239684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:27 np0005546420.localdomain python3[239686]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
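[editor's note] edpm_container_manage reads podman_exporter.json from the config_dir given above and translates it into the podman create call logged a few lines below. Judging from the config_data label that podman then attaches to the container, the file would look roughly as follows; the exact field names are inferred from that label, not read from the file itself (image digest abbreviated):

    cat /var/lib/openstack/config/telemetry/podman_exporter.json
    # expected shape, reconstructed from the config_data label:
    # {
    #   "image": "quay.io/navidys/prometheus-podman-exporter@sha256:d339...",
    #   "restart": "always", "recreate": true, "user": "root", "privileged": true,
    #   "ports": ["9882:9882"], "net": "host",
    #   "environment": {"OS_ENDPOINT_TYPE": "internal",
    #                   "CONTAINER_HOST": "unix:///run/podman/podman.sock"},
    #   "healthcheck": {"test": "/openstack/healthcheck podman_exporter",
    #                   "mount": "/var/lib/openstack/healthchecks/podman_exporter"},
    #   "volumes": ["/run/podman/podman.sock:/run/podman/podman.sock:rw,z",
    #               "/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z"]
    # }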
Dec 05 09:38:29 np0005546420.localdomain podman[239699]: 2025-12-05 09:38:27.508012781 +0000 UTC m=+0.050240946 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd

Dec 05 09:38:29 np0005546420.localdomain podman[239768]: 2025-12-05 09:38:29.408376757 +0000 UTC m=+0.082817768 container create db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:38:29 np0005546420.localdomain podman[239768]: 2025-12-05 09:38:29.366777553 +0000 UTC m=+0.041218644 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Dec 05 09:38:29 np0005546420.localdomain python3[239686]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
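[editor's note] The PODMAN-CONTAINER-DEBUG line above is the module echoing the exact podman create invocation it executed, config_data label included. podman stores that invocation on the container, so it can be compared against the intended config later:

    # The create command podman recorded for this container
    podman container inspect podman_exporter --format '{{.Config.CreateCommand}}'

    # All containers carrying the same config_id label
    podman ps -a --filter label=config_id=edpm --format '{{.Names}}'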
Dec 05 09:38:29 np0005546420.localdomain sudo[239684]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:30 np0005546420.localdomain sudo[239910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjnxajpxqystxxlptbjncttbxxddtdjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927509.8903258-1934-9224694601334/AnsiballZ_stat.py
Dec 05 09:38:30 np0005546420.localdomain sudo[239910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:30 np0005546420.localdomain python3.9[239912]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:38:30 np0005546420.localdomain sudo[239910]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39829 DF PROTO=TCP SPT=48376 DPT=9102 SEQ=2953427585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC5DF190000000001030307) 
Dec 05 09:38:31 np0005546420.localdomain sudo[240022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lctgzovpjftqqxhaqkmqqvlscdjftzas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927510.7443416-1959-153661108153545/AnsiballZ_file.py
Dec 05 09:38:31 np0005546420.localdomain sudo[240022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:31 np0005546420.localdomain python3.9[240024]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:31 np0005546420.localdomain sudo[240022]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:31 np0005546420.localdomain sudo[240131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iotewztkggpywavtrjuvcnuiretouvud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927511.360004-1959-73072786846006/AnsiballZ_copy.py
Dec 05 09:38:31 np0005546420.localdomain sudo[240131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:32 np0005546420.localdomain python3.9[240133]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927511.360004-1959-73072786846006/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:38:32 np0005546420.localdomain sudo[240131]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:32 np0005546420.localdomain sudo[240186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ukykudjclptaznklljbtgqnmnmoiylcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927511.360004-1959-73072786846006/AnsiballZ_systemd.py
Dec 05 09:38:32 np0005546420.localdomain sudo[240186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:32 np0005546420.localdomain python3.9[240188]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:38:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62694 DF PROTO=TCP SPT=55736 DPT=9882 SEQ=513262227 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC5E7D90000000001030307) 
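[editor's note] This is now the fourth destination port being dropped on br-ex in this excerpt: 9105, then 9100 (node_exporter, started above), 9102 (not identified in this excerpt), and here 9882 (podman_exporter, the unit being installed in these very lines). If these SYNs are Prometheus scrapes that are meant to succeed, the host firewall must admit the exporter ports; a hypothetical iptables form, with chain and interface to be adjusted to the real ruleset:

    # hypothetical allow rules; the actual firewall configuration is not shown in this log
    iptables -I INPUT -i br-ex -p tcp --dport 9100 -j ACCEPT
    iptables -I INPUT -i br-ex -p tcp --dport 9882 -j ACCEPT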
Dec 05 09:38:32 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:38:32 np0005546420.localdomain systemd-rc-local-generator[240212]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:38:32 np0005546420.localdomain systemd-sysv-generator[240219]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:38:32 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:32 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:32 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:32 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:32 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:38:33 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:33 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:33 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:33 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:33 np0005546420.localdomain sudo[240186]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:33 np0005546420.localdomain sudo[240277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqabppkjybluqtdukvydkjtwjtsgvwti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927511.360004-1959-73072786846006/AnsiballZ_systemd.py
Dec 05 09:38:33 np0005546420.localdomain sudo[240277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:33 np0005546420.localdomain python3.9[240279]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:38:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:38:33 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:38:33 np0005546420.localdomain podman[240281]: 2025-12-05 09:38:33.898871521 +0000 UTC m=+0.083281250 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 09:38:33 np0005546420.localdomain podman[240281]: 2025-12-05 09:38:33.938647094 +0000 UTC m=+0.123056823 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd)
Dec 05 09:38:33 np0005546420.localdomain systemd-rc-local-generator[240325]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:38:33 np0005546420.localdomain systemd-sysv-generator[240330]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: Starting podman_exporter container...
Dec 05 09:38:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24439 DF PROTO=TCP SPT=45920 DPT=9105 SEQ=2784548727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC5EDD90000000001030307) 
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:38:34 np0005546420.localdomain podman[240337]: 2025-12-05 09:38:34.348862827 +0000 UTC m=+0.148450293 container init db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:38:34 np0005546420.localdomain podman_exporter[240350]: ts=2025-12-05T09:38:34.367Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 05 09:38:34 np0005546420.localdomain podman_exporter[240350]: ts=2025-12-05T09:38:34.367Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 05 09:38:34 np0005546420.localdomain podman_exporter[240350]: ts=2025-12-05T09:38:34.367Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 05 09:38:34 np0005546420.localdomain podman_exporter[240350]: ts=2025-12-05T09:38:34.367Z caller=handler.go:105 level=info collector=container
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:38:34 np0005546420.localdomain podman[240337]: 2025-12-05 09:38:34.389060492 +0000 UTC m=+0.188647948 container start db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:38:34 np0005546420.localdomain podman[240337]: podman_exporter
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: Starting Podman API Service...
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: Started podman_exporter container.
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: Started Podman API Service.
Dec 05 09:38:34 np0005546420.localdomain sudo[240277]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:34 np0005546420.localdomain podman[240363]: time="2025-12-05T09:38:34Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 05 09:38:34 np0005546420.localdomain podman[240363]: time="2025-12-05T09:38:34Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 05 09:38:34 np0005546420.localdomain podman[240363]: time="2025-12-05T09:38:34Z" level=info msg="Setting parallel job count to 25"
Dec 05 09:38:34 np0005546420.localdomain podman[240363]: time="2025-12-05T09:38:34Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 05 09:38:34 np0005546420.localdomain podman[240363]: time="2025-12-05T09:38:34Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Dec 05 09:38:34 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:38:34 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 05 09:38:34 np0005546420.localdomain podman[240363]: time="2025-12-05T09:38:34Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:38:34 np0005546420.localdomain podman[240361]: 2025-12-05 09:38:34.505080547 +0000 UTC m=+0.109823023 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:38:34 np0005546420.localdomain podman[240361]: 2025-12-05 09:38:34.516529587 +0000 UTC m=+0.121272053 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:38:34 np0005546420.localdomain podman[240361]: unhealthy
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:38:34 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
Dec 05 09:38:35 np0005546420.localdomain sudo[240507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hquxyhxnnvrdzovutzgohotxdbvrxhjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927515.485351-2031-216369753353092/AnsiballZ_systemd.py
Dec 05 09:38:35 np0005546420.localdomain sudo[240507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:36 np0005546420.localdomain python3.9[240509]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:38:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:38:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 05 09:38:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18402 DF PROTO=TCP SPT=50706 DPT=9101 SEQ=3332248860 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC5F7DA0000000001030307) 
Dec 05 09:38:37 np0005546420.localdomain systemd[1]: Stopping podman_exporter container...
Dec 05 09:38:37 np0005546420.localdomain systemd[1]: tmp-crun.oee1Jt.mount: Deactivated successfully.
Dec 05 09:38:37 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:38:34 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 2790 "" "Go-http-client/1.1"
Dec 05 09:38:37 np0005546420.localdomain systemd[1]: libpod-db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.scope: Deactivated successfully.
Dec 05 09:38:37 np0005546420.localdomain podman[240513]: 2025-12-05 09:38:37.232122386 +0000 UTC m=+0.113509067 container died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:38:37 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.timer: Deactivated successfully.
Dec 05 09:38:37 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:38:37 np0005546420.localdomain podman[240513]: 2025-12-05 09:38:37.288483052 +0000 UTC m=+0.169869703 container cleanup db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:38:37 np0005546420.localdomain podman[240513]: podman_exporter
Dec 05 09:38:37 np0005546420.localdomain podman[240525]: 2025-12-05 09:38:37.353321606 +0000 UTC m=+0.118044313 container cleanup db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:38:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 05 09:38:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4e526b4967c1f6584278783501dddac7e7a0cc6a021536f29ec0e845bab5bc41-merged.mount: Deactivated successfully.
Dec 05 09:38:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9-userdata-shm.mount: Deactivated successfully.
Dec 05 09:38:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:38:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:38:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:38:39 np0005546420.localdomain systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 09:38:39 np0005546420.localdomain podman[240542]: 2025-12-05 09:38:39.515768832 +0000 UTC m=+0.071601444 container cleanup db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:38:39 np0005546420.localdomain podman[240542]: podman_exporter
Dec 05 09:38:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:38:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:38:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:38:40 np0005546420.localdomain systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 05 09:38:40 np0005546420.localdomain systemd[1]: Stopped podman_exporter container.
Dec 05 09:38:40 np0005546420.localdomain systemd[1]: Starting podman_exporter container...
Dec 05 09:38:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=995 DF PROTO=TCP SPT=37964 DPT=9100 SEQ=3311167181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC607190000000001030307) 
Dec 05 09:38:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:38:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:38:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:38:41 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:38:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:38:41 np0005546420.localdomain podman[240555]: 2025-12-05 09:38:41.485689463 +0000 UTC m=+0.842846647 container init db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:38:41 np0005546420.localdomain podman_exporter[240570]: ts=2025-12-05T09:38:41.507Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 05 09:38:41 np0005546420.localdomain podman_exporter[240570]: ts=2025-12-05T09:38:41.507Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 05 09:38:41 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:38:41 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 05 09:38:41 np0005546420.localdomain podman[240363]: time="2025-12-05T09:38:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:38:41 np0005546420.localdomain podman_exporter[240570]: ts=2025-12-05T09:38:41.507Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 05 09:38:41 np0005546420.localdomain podman_exporter[240570]: ts=2025-12-05T09:38:41.507Z caller=handler.go:105 level=info collector=container
Dec 05 09:38:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:38:41 np0005546420.localdomain podman[240555]: 2025-12-05 09:38:41.578942481 +0000 UTC m=+0.936099675 container start db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:38:41 np0005546420.localdomain podman[240555]: podman_exporter
Dec 05 09:38:41 np0005546420.localdomain podman[240580]: 2025-12-05 09:38:41.618077876 +0000 UTC m=+0.085929725 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:38:41 np0005546420.localdomain podman[240580]: 2025-12-05 09:38:41.626012648 +0000 UTC m=+0.093864527 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:38:41 np0005546420.localdomain podman[240580]: unhealthy
Dec 05 09:38:41 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:38:41 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
Dec 05 09:38:41 np0005546420.localdomain systemd[1]: Started podman_exporter container.
Dec 05 09:38:41 np0005546420.localdomain sudo[240507]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:38:42 np0005546420.localdomain podman[240617]: 2025-12-05 09:38:42.021786458 +0000 UTC m=+0.098500536 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Dec 05 09:38:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:38:42 np0005546420.localdomain podman[240617]: 2025-12-05 09:38:42.097385733 +0000 UTC m=+0.174099801 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller)
Dec 05 09:38:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=996 DF PROTO=TCP SPT=37964 DPT=9100 SEQ=3311167181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC60F190000000001030307) 
Dec 05 09:38:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 05 09:38:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-39d2f58466688fe53652a364ae73822e36bcfb567eba3c646d9e26add473af11-merged.mount: Deactivated successfully.
Dec 05 09:38:44 np0005546420.localdomain sudo[240748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvnrhnqxhaipoypplwubakooxkkdktss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927524.0044708-2055-263380518636910/AnsiballZ_stat.py
Dec 05 09:38:44 np0005546420.localdomain sudo[240748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-39d2f58466688fe53652a364ae73822e36bcfb567eba3c646d9e26add473af11-merged.mount: Deactivated successfully.
Dec 05 09:38:44 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:38:44 np0005546420.localdomain podman[240646]: 2025-12-05 09:38:44.353415937 +0000 UTC m=+2.261404526 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:38:44 np0005546420.localdomain podman[240646]: 2025-12-05 09:38:44.387376627 +0000 UTC m=+2.295365206 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 05 09:38:44 np0005546420.localdomain podman[240646]: unhealthy
Dec 05 09:38:44 np0005546420.localdomain python3.9[240750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:38:44 np0005546420.localdomain sudo[240748]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:44 np0005546420.localdomain sudo[240842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmdidubhgfvhylurcpbxtedrleaorqac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927524.0044708-2055-263380518636910/AnsiballZ_copy.py
Dec 05 09:38:44 np0005546420.localdomain sudo[240842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:45 np0005546420.localdomain python3.9[240844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927524.0044708-2055-263380518636910/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:38:45 np0005546420.localdomain sudo[240842]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:45 np0005546420.localdomain sudo[240933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:38:45 np0005546420.localdomain sudo[240933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:38:45 np0005546420.localdomain sudo[240933]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:45 np0005546420.localdomain sudo[240969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxvkcdwinytorujoifmqlhlrvhzksubx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927525.449143-2106-51235430857268/AnsiballZ_container_config_data.py
Dec 05 09:38:45 np0005546420.localdomain sudo[240969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:45 np0005546420.localdomain sudo[240973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:38:45 np0005546420.localdomain sudo[240973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:38:45 np0005546420.localdomain python3.9[240972]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec 05 09:38:45 np0005546420.localdomain sudo[240969]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:46 np0005546420.localdomain sudo[241110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwddztxrgejtzrxpcyfcgpcofsrdayoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927526.2583532-2133-148324552229407/AnsiballZ_container_config_hash.py
Dec 05 09:38:46 np0005546420.localdomain sudo[241110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:46 np0005546420.localdomain python3.9[241112]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:38:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:38:46 np0005546420.localdomain sudo[241110]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=997 DF PROTO=TCP SPT=37964 DPT=9100 SEQ=3311167181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC61EDA0000000001030307) 
Dec 05 09:38:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 05 09:38:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:38:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 05 09:38:47 np0005546420.localdomain sudo[241220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hwznhqvenkmazvlsaofodeqtwdzlqzdy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927527.1125073-2163-192046985861422/AnsiballZ_edpm_container_manage.py
Dec 05 09:38:47 np0005546420.localdomain sudo[241220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:38:47 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:38:47 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Failed with result 'exit-code'.
Dec 05 09:38:47 np0005546420.localdomain python3[241222]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:38:47 np0005546420.localdomain sudo[240973]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:38:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:38:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31144 DF PROTO=TCP SPT=51888 DPT=9105 SEQ=2933183009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC626C10000000001030307) 
Dec 05 09:38:48 np0005546420.localdomain sudo[241268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:38:48 np0005546420.localdomain sudo[241268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:38:48 np0005546420.localdomain sudo[241268]: pam_unix(sudo:session): session closed for user root
Dec 05 09:38:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:38:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:38:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:38:49 np0005546420.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:38:49 np0005546420.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:38:49 np0005546420.localdomain podman[240363]: time="2025-12-05T09:38:49Z" level=error msg="Getting root fs size for \"02f030890eb186725a9723e8d3bfb921cea292cf52d4aa74c87861d4dad13471\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": creating overlay mount to /var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/IKCF27DQLZIV3KCF4TBEZZFTOC:/var/lib/containers/storage/overlay/l/TGAD4ZE6ATLQI3D32HGPCQBATK,upperdir=/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/diff,workdir=/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/work,nodev,metacopy=on\": no such file or directory"
Dec 05 09:38:49 np0005546420.localdomain podman[241256]: 2025-12-05 09:38:49.850189828 +0000 UTC m=+1.413548810 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:38:49 np0005546420.localdomain podman[241256]: 2025-12-05 09:38:49.882907423 +0000 UTC m=+1.446266425 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 05 09:38:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:38:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:38:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 05 09:38:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31146 DF PROTO=TCP SPT=51888 DPT=9105 SEQ=2933183009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC632D90000000001030307) 
Dec 05 09:38:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-39d2f58466688fe53652a364ae73822e36bcfb567eba3c646d9e26add473af11-merged.mount: Deactivated successfully.
Dec 05 09:38:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-39d2f58466688fe53652a364ae73822e36bcfb567eba3c646d9e26add473af11-merged.mount: Deactivated successfully.
Dec 05 09:38:52 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:38:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:38:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:38:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:38:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:38:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:38:54 np0005546420.localdomain podman[241317]: 2025-12-05 09:38:54.765602998 +0000 UTC m=+0.933439131 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:38:54 np0005546420.localdomain podman[241317]: 2025-12-05 09:38:54.776195254 +0000 UTC m=+0.944031337 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:38:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39831 DF PROTO=TCP SPT=48376 DPT=9102 SEQ=2953427585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC63FD90000000001030307) 
Dec 05 09:38:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:38:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a6854836afb9692a1e5ddd8b5918f8daabf3b654c918f0b10a3b63996a4e7a72-merged.mount: Deactivated successfully.
Dec 05 09:38:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a6854836afb9692a1e5ddd8b5918f8daabf3b654c918f0b10a3b63996a4e7a72-merged.mount: Deactivated successfully.
Dec 05 09:38:58 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:38:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 05 09:38:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:38:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 05 09:39:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17505 DF PROTO=TCP SPT=48778 DPT=9102 SEQ=2450632867 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC6545A0000000001030307) 
Dec 05 09:39:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 05 09:39:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:39:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:39:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17745 DF PROTO=TCP SPT=53032 DPT=9882 SEQ=1624520672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC65BD90000000001030307) 
Dec 05 09:39:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 05 09:39:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:04 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:39:04.086 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:39:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:39:04.086 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:39:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:39:04.087 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:39:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:39:04 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:04 np0005546420.localdomain podman[241364]: 2025-12-05 09:39:04.394158461 +0000 UTC m=+0.088980670 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 09:39:04 np0005546420.localdomain podman[241364]: 2025-12-05 09:39:04.409687465 +0000 UTC m=+0.104509694 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:39:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31148 DF PROTO=TCP SPT=51888 DPT=9105 SEQ=2933183009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC663D90000000001030307) 
Dec 05 09:39:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.343 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.344 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.362 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.362 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.362 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.374 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.374 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.375 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.375 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.375 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.376 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:39:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:05.376 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:39:05 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:05 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:39:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.138 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.138 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.139 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.139 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.139 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:39:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.624 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.814 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.816 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=13186MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.816 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.816 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:39:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.881 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.881 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:39:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:06.903 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:39:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 05 09:39:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49033 DF PROTO=TCP SPT=40846 DPT=9101 SEQ=4099680269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC66DD90000000001030307) 
Dec 05 09:39:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:07.336 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:39:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:07.343 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:39:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:07.368 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:39:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:07.371 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:39:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:39:07.371 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:39:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3c2e330997defc689ea7178f3ec3c4e18b224f1742cc6af7ec556ac2e9588fc5-merged.mount: Deactivated successfully.
Dec 05 09:39:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 05 09:39:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:39:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a6854836afb9692a1e5ddd8b5918f8daabf3b654c918f0b10a3b63996a4e7a72-merged.mount: Deactivated successfully.
Dec 05 09:39:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a6854836afb9692a1e5ddd8b5918f8daabf3b654c918f0b10a3b63996a4e7a72-merged.mount: Deactivated successfully.
Dec 05 09:39:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32580 DF PROTO=TCP SPT=39652 DPT=9100 SEQ=3064018211 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC67C190000000001030307) 
Dec 05 09:39:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 05 09:39:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-522dca5b0897edc142dfc46111f3114c06dbf23dda84b5305bf810fad13843cc-merged.mount: Deactivated successfully.
Dec 05 09:39:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 05 09:39:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:39:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:39:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:39:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 05 09:39:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:39:12 np0005546420.localdomain podman[241427]: 2025-12-05 09:39:12.107372428 +0000 UTC m=+0.087049975 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:39:12 np0005546420.localdomain podman[241427]: 2025-12-05 09:39:12.141899044 +0000 UTC m=+0.121576621 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:39:12 np0005546420.localdomain podman[241427]: unhealthy
Dec 05 09:39:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32581 DF PROTO=TCP SPT=39652 DPT=9100 SEQ=3064018211 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC684190000000001030307) 
Dec 05 09:39:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:39:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:39:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:39:14 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:39:14 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
Dec 05 09:39:14 np0005546420.localdomain podman[241286]: 2025-12-05 09:38:49.880503636 +0000 UTC m=+0.055882144 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Dec 05 09:39:14 np0005546420.localdomain podman[241448]: 2025-12-05 09:39:14.865314531 +0000 UTC m=+0.445498422 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 09:39:14 np0005546420.localdomain podman[241448]: 2025-12-05 09:39:14.971348637 +0000 UTC m=+0.551532538 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 09:39:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 05 09:39:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32582 DF PROTO=TCP SPT=39652 DPT=9100 SEQ=3064018211 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC693D90000000001030307) 
Dec 05 09:39:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:17 np0005546420.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:17 np0005546420.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:17 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:39:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:39:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:18 np0005546420.localdomain podman[240363]: time="2025-12-05T09:39:18Z" level=error msg="Getting root fs size for \"162a75551ba739cd4c6e1f915806d262fab80c7bd9d85c181e3d13b48d9fe544\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy"
Dec 05 09:39:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:18 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:18 np0005546420.localdomain podman[241483]: 2025-12-05 09:39:18.650826813 +0000 UTC m=+0.973242211 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm)
Dec 05 09:39:18 np0005546420.localdomain podman[241483]: 2025-12-05 09:39:18.658669092 +0000 UTC m=+0.981084500 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:39:18 np0005546420.localdomain podman[241483]: unhealthy
Dec 05 09:39:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17746 DF PROTO=TCP SPT=53032 DPT=9882 SEQ=1624520672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC69BD90000000001030307) 
Dec 05 09:39:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:39:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7626528a751d21d59e66c79e0e8f19b9b9ae5356c5571af7f106b1aee9d855ee-merged.mount: Deactivated successfully.
Dec 05 09:39:21 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:21 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:39:21 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Failed with result 'exit-code'.
Dec 05 09:39:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26758 DF PROTO=TCP SPT=34924 DPT=9105 SEQ=160298062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC6A7DA0000000001030307) 
Dec 05 09:39:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:39:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:39:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 05 09:39:23 np0005546420.localdomain podman[241511]: 2025-12-05 09:39:21.495300115 +0000 UTC m=+2.804996165 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Dec 05 09:39:23 np0005546420.localdomain podman[241525]: 2025-12-05 09:39:23.821553181 +0000 UTC m=+1.407003611 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 05 09:39:23 np0005546420.localdomain podman[241525]: 2025-12-05 09:39:23.852724014 +0000 UTC m=+1.438174454 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:39:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 05 09:39:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17507 DF PROTO=TCP SPT=48778 DPT=9102 SEQ=2450632867 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC6B3D90000000001030307) 
Dec 05 09:39:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3c2e330997defc689ea7178f3ec3c4e18b224f1742cc6af7ec556ac2e9588fc5-merged.mount: Deactivated successfully.
Dec 05 09:39:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3c2e330997defc689ea7178f3ec3c4e18b224f1742cc6af7ec556ac2e9588fc5-merged.mount: Deactivated successfully.
Dec 05 09:39:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:39:25 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:39:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 05 09:39:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:27 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:39:28 np0005546420.localdomain podman[241543]: 2025-12-05 09:39:28.49062728 +0000 UTC m=+0.068563098 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:39:28 np0005546420.localdomain podman[241543]: 2025-12-05 09:39:28.496310274 +0000 UTC m=+0.074246102 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
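The health_status / exec_died events above carry the container's edpm configuration in the config_data label, serialized as a Python-literal dict. A minimal Python sketch (label text shortened here to the two fields used; the full label also carries ports, volumes, command, etc.) that recovers it with ast.literal_eval and reads the healthcheck command:

    import ast

    # config_data as embedded in the journal lines above, trimmed for brevity.
    label = ("{'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', "
             "'healthcheck': {'test': '/openstack/healthcheck node_exporter', "
             "'mount': '/var/lib/openstack/healthchecks/node_exporter'}}")

    config = ast.literal_eval(label)      # parses literals only, never executes code
    print(config['healthcheck']['test'])  # -> /openstack/healthcheck node_exporter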
Dec 05 09:39:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 05 09:39:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-522dca5b0897edc142dfc46111f3114c06dbf23dda84b5305bf810fad13843cc-merged.mount: Deactivated successfully.
Dec 05 09:39:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 05 09:39:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-6c9a36f7a024434b36038e901614ce5ff2d94721d9179c8f6d2073bbfe0a9a23-merged.mount: Deactivated successfully.
Dec 05 09:39:30 np0005546420.localdomain podman[241511]: 
Dec 05 09:39:30 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:39:30 np0005546420.localdomain podman[241511]: 2025-12-05 09:39:30.516919108 +0000 UTC m=+11.826615158 container create 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Dec 05 09:39:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14746 DF PROTO=TCP SPT=42418 DPT=9102 SEQ=3971695056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC6C9990000000001030307) 
Dec 05 09:39:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:39:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 05 09:39:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:39:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 05 09:39:32 np0005546420.localdomain python3[241222]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
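The PODMAN-CONTAINER-DEBUG line above shows the one-to-one mapping from the config_data dict to podman create flags (--env, --network, --publish, --volume, and so on). A rough Python sketch of that mapping, for illustration only and not the edpm_container_manage module itself:

    # Values taken from the openstack_network_exporter config_data above,
    # reduced to one entry per list for readability.
    config = {
        'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7',
        'net': 'host',
        'privileged': True,
        'ports': ['9105:9105'],
        'environment': {'OS_ENDPOINT_TYPE': 'internal'},
        'volumes': ['/proc:/host/proc:ro'],
    }

    argv = ['podman', 'create', '--name', 'openstack_network_exporter']
    for key, value in config['environment'].items():
        argv += ['--env', f'{key}={value}']
    argv += ['--network', config['net'], f"--privileged={config['privileged']}"]
    for port in config['ports']:
        argv += ['--publish', port]
    for volume in config['volumes']:
        argv += ['--volume', volume]
    argv.append(config['image'])
    print(' '.join(argv))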
Dec 05 09:39:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23888 DF PROTO=TCP SPT=43592 DPT=9882 SEQ=3132945403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC6D1DA0000000001030307) 
Dec 05 09:39:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:39:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:39:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:39:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26760 DF PROTO=TCP SPT=34924 DPT=9105 SEQ=160298062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC6D7D90000000001030307) 
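The kernel DROPPING entries are netfilter LOG output on br-ex: inbound SYNs from 192.168.122.10 to the exporter ports seen in this section (9100, 9101, 9102, 9105, 9882) are being dropped before the services can answer. The payload is plain key=value pairs; a small Python sketch parsing one line:

    import re

    # One DROPPING payload from the journal above (flag words like SYN carry no '=').
    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 "
            "DST=192.168.122.107 LEN=60 TTL=62 PROTO=TCP SPT=34924 DPT=9105 SYN")

    fields = dict(re.findall(r'(\w+)=(\S*)', line))
    print(fields['SRC'], '->', fields['DST'], 'dport', fields['DPT'])
    # -> 192.168.122.10 -> 192.168.122.107 dport 9105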
Dec 05 09:39:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:39:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:35 np0005546420.localdomain sudo[241220]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:35 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:35 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 05 09:39:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:39:35 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:35 np0005546420.localdomain podman[241607]: 2025-12-05 09:39:35.809523766 +0000 UTC m=+0.095405230 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 09:39:35 np0005546420.localdomain podman[241607]: 2025-12-05 09:39:35.821601165 +0000 UTC m=+0.107482649 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 09:39:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36 DF PROTO=TCP SPT=35750 DPT=9101 SEQ=2417556164 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC6E3D90000000001030307) 
Dec 05 09:39:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:39:38 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:38 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:39:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 05 09:39:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-67c0121ab2e02c08e681d8a85898c08bf802edfec3fbfb45ad79be05f6aa5dc4-merged.mount: Deactivated successfully.
Dec 05 09:39:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33347 DF PROTO=TCP SPT=54020 DPT=9100 SEQ=587000010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC6F1590000000001030307) 
Dec 05 09:39:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 05 09:39:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33348 DF PROTO=TCP SPT=54020 DPT=9100 SEQ=587000010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC6F95A0000000001030307) 
Dec 05 09:39:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 05 09:39:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:39:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 05 09:39:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 05 09:39:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:39:45 np0005546420.localdomain podman[241625]: 2025-12-05 09:39:45.193771833 +0000 UTC m=+0.106507210 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:39:45 np0005546420.localdomain podman[241625]: 2025-12-05 09:39:45.28939971 +0000 UTC m=+0.202135087 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:39:45 np0005546420.localdomain podman[241625]: unhealthy
Dec 05 09:39:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:39:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:39:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:39:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7626528a751d21d59e66c79e0e8f19b9b9ae5356c5571af7f106b1aee9d855ee-merged.mount: Deactivated successfully.
Dec 05 09:39:46 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:39:46 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
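The failure pair above is the expected propagation path: podman healthcheck run prints the status and exits non-zero when the check reports unhealthy, and the transient .service wrapping it then fails with status=1/FAILURE. A hedged Python sketch reproducing the check by hand (container ID copied from the log; assumes podman is available on the host):

    import subprocess

    cid = 'db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9'
    result = subprocess.run(['podman', 'healthcheck', 'run', cid],
                            capture_output=True, text=True)
    # returncode 0 means healthy; non-zero matches the 'unhealthy' line above.
    print(result.returncode, (result.stdout or result.stderr).strip())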
Dec 05 09:39:46 np0005546420.localdomain sudo[241737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ndlnheusuzyvcjgirqyeriemdhksoxpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927586.013231-2187-262550193799907/AnsiballZ_stat.py
Dec 05 09:39:46 np0005546420.localdomain sudo[241737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:46 np0005546420.localdomain python3.9[241739]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:39:46 np0005546420.localdomain sudo[241737]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:39:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33349 DF PROTO=TCP SPT=54020 DPT=9100 SEQ=587000010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7091A0000000001030307) 
Dec 05 09:39:47 np0005546420.localdomain sudo[241849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkcyxdkcwticmbohcpwgwzbslwxymlmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927586.7913456-2214-62358506823036/AnsiballZ_file.py
Dec 05 09:39:47 np0005546420.localdomain sudo[241849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:47 np0005546420.localdomain python3.9[241851]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:39:47 np0005546420.localdomain sudo[241849]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:39:47 np0005546420.localdomain podman[241852]: 2025-12-05 09:39:47.550075549 +0000 UTC m=+0.126867593 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 05 09:39:47 np0005546420.localdomain podman[241852]: 2025-12-05 09:39:47.600829552 +0000 UTC m=+0.177621586 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 09:39:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:39:48 np0005546420.localdomain sudo[241983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwqtvgkgedrgpxqotpnanrsegqsjiaku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927587.7476995-2214-261598728132287/AnsiballZ_copy.py
Dec 05 09:39:48 np0005546420.localdomain sudo[241983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:48 np0005546420.localdomain python3.9[241985]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927587.7476995-2214-261598728132287/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:39:48 np0005546420.localdomain sudo[241983]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:39:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 05 09:39:48 np0005546420.localdomain sudo[242038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tlzrpjxtznqjdtouyhtdguwoqhdstrul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927587.7476995-2214-261598728132287/AnsiballZ_systemd.py
Dec 05 09:39:48 np0005546420.localdomain sudo[242038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 05 09:39:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20605 DF PROTO=TCP SPT=58220 DPT=9105 SEQ=1579501050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC711200000000001030307) 
Dec 05 09:39:49 np0005546420.localdomain python3.9[242040]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:39:49 np0005546420.localdomain sudo[242041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:39:49 np0005546420.localdomain systemd-rc-local-generator[242080]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:39:49 np0005546420.localdomain systemd-sysv-generator[242086]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
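The notify-reload parse failures above are benign: Type=notify-reload was introduced in systemd v253, and the systemd running on this node is older, so it ignores the setting and loads the libvirt units with their default service type. A quick Python sketch to list which installed unit files declare the newer type:

    from pathlib import Path
    import re

    # Scan unit files for the Type=notify-reload setting this systemd rejects.
    for unit in sorted(Path('/usr/lib/systemd/system').glob('*.service')):
        if re.search(r'^Type=notify-reload\b', unit.read_text(errors='ignore'), re.M):
            print(unit)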
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 05 09:39:49 np0005546420.localdomain sudo[242038]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:49 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:39:49 np0005546420.localdomain sudo[242041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:39:49 np0005546420.localdomain sudo[242041]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:49 np0005546420.localdomain sudo[242094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:39:49 np0005546420.localdomain sudo[242094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:39:50 np0005546420.localdomain sudo[242176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwfwwgmglenbdjnjjhlqougivlcwfzsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927587.7476995-2214-261598728132287/AnsiballZ_systemd.py
Dec 05 09:39:50 np0005546420.localdomain sudo[242176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:50 np0005546420.localdomain python3.9[242178]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:39:50 np0005546420.localdomain systemd-sysv-generator[242209]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:39:50 np0005546420.localdomain systemd-rc-local-generator[242206]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:39:50 np0005546420.localdomain systemd[1]: Starting openstack_network_exporter container...
Dec 05 09:39:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:39:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:39:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:39:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20607 DF PROTO=TCP SPT=58220 DPT=9105 SEQ=1579501050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC71D190000000001030307) 
Dec 05 09:39:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:39:52 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:52 np0005546420.localdomain podman[242229]: 2025-12-05 09:39:52.422762428 +0000 UTC m=+0.872727175 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 05 09:39:52 np0005546420.localdomain podman[242229]: 2025-12-05 09:39:52.457347265 +0000 UTC m=+0.907312062 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:39:52 np0005546420.localdomain podman[242229]: unhealthy
Dec 05 09:39:52 np0005546420.localdomain sudo[242094]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:53 np0005546420.localdomain sudo[242266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:39:53 np0005546420.localdomain sudo[242266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:39:53 np0005546420.localdomain sudo[242266]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:39:54 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:54 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:39:54 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Failed with result 'exit-code'.
Dec 05 09:39:54 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:39:54 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86684bd1df7cf9c5cddf46305cbbab75dc1bf502e65083299b906555dd95a0e2/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 09:39:54 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86684bd1df7cf9c5cddf46305cbbab75dc1bf502e65083299b906555dd95a0e2/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
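0x7fffffff in the xfs warnings is the largest signed 32-bit seconds counter; converting it shows the 2038 cutoff the kernel is referring to:

    from datetime import datetime, timezone

    # The largest signed 32-bit Unix timestamp, as cited by the xfs messages.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # -> 2038-01-19T03:14:07+00:00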
Dec 05 09:39:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:39:54 np0005546420.localdomain podman[242218]: 2025-12-05 09:39:54.695054802 +0000 UTC m=+3.944013995 container init 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *bridge.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *coverage.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *datapath.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *iface.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *memory.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *ovnnorthd.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *ovn.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *ovsdbserver.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *pmd_perf.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *pmd_rxq.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: INFO    09:39:54 main.go:48: registering *vswitch.Collector
Dec 05 09:39:54 np0005546420.localdomain openstack_network_exporter[242288]: NOTICE  09:39:54 main.go:82: listening on http://:9105/metrics
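With the exporter now listening on :9105/metrics, a scrape can be reproduced by hand; a minimal Python sketch, assuming the port is reachable from where the script runs (the DROPPING entries show it is filtered from 192.168.122.10):

    import urllib.request

    # Fetch the Prometheus exposition text from the exporter started above.
    with urllib.request.urlopen('http://127.0.0.1:9105/metrics', timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)  # first few HELP/TYPE/metric lines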
Dec 05 09:39:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:39:54 np0005546420.localdomain podman[242218]: 2025-12-05 09:39:54.743419682 +0000 UTC m=+3.992378825 container start 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6)
Dec 05 09:39:54 np0005546420.localdomain podman[242218]: openstack_network_exporter
Dec 05 09:39:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14748 DF PROTO=TCP SPT=42418 DPT=9102 SEQ=3971695056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC729D90000000001030307) 
Dec 05 09:39:55 np0005546420.localdomain podman[240363]: time="2025-12-05T09:39:55Z" level=error msg="Getting root fs size for \"1a2b2b87a75e2b978f98c19ff9906b3109792dac8c25675aa3938ebcfb17757b\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
Dec 05 09:39:55 np0005546420.localdomain systemd[1]: tmp-crun.e8QxRP.mount: Deactivated successfully.
Dec 05 09:39:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:39:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:39:55 np0005546420.localdomain systemd[1]: Started openstack_network_exporter container.
Dec 05 09:39:55 np0005546420.localdomain podman[242298]: 2025-12-05 09:39:55.897677419 +0000 UTC m=+1.147941244 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal)
Dec 05 09:39:55 np0005546420.localdomain sudo[242176]: pam_unix(sudo:session): session closed for user root
Dec 05 09:39:55 np0005546420.localdomain podman[242298]: 2025-12-05 09:39:55.929815272 +0000 UTC m=+1.180079067 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal)
Dec 05 09:39:56 np0005546420.localdomain sudo[242438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-isxhfouthqpqmvwqdhxkgfzqyoklwqsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927596.1021454-2286-68703092306788/AnsiballZ_systemd.py
Dec 05 09:39:56 np0005546420.localdomain sudo[242438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:39:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:39:56 np0005546420.localdomain python3.9[242440]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
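This ansible.builtin.systemd invocation (state=restarted, scope=system, no_block=False) is what drives the Stopping/Stopped/Starting sequence below. It amounts to a blocking restart of the unit; a rough equivalent, assuming systemctl is available:

    # Roughly what the ansible.builtin.systemd task above performs: a blocking
    # system-scope restart, then a state check.
    import subprocess

    unit = "edpm_openstack_network_exporter.service"
    subprocess.run(["systemctl", "restart", unit], check=True)
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True).stdout.strip()
    print(unit, "is", state)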
Dec 05 09:39:56 np0005546420.localdomain systemd[1]: Stopping openstack_network_exporter container...
Dec 05 09:39:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 05 09:39:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-6c9a36f7a024434b36038e901614ce5ff2d94721d9179c8f6d2073bbfe0a9a23-merged.mount: Deactivated successfully.
Dec 05 09:39:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-6c9a36f7a024434b36038e901614ce5ff2d94721d9179c8f6d2073bbfe0a9a23-merged.mount: Deactivated successfully.
Dec 05 09:39:58 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:39:58 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:39:58 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
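The overlayfs warning fires when one overlay mount's lowerdir is simultaneously another mount's upperdir or workdir, a situation the rapid unmount/remount churn of container layers visible above can provoke. A read-only sketch that spots such overlaps on a host (parses /proc/mounts; heuristic only):

    # Sketch: flag directories that appear as lowerdir of one overlay mount
    # and as upperdir/workdir of another, the case the kernel warns about.
    from collections import defaultdict

    roles = defaultdict(set)  # directory -> roles it plays across mounts
    with open("/proc/mounts") as f:
        for entry in f:
            dev, mnt, fstype, opts = entry.split()[:4]
            if fstype != "overlay":
                continue
            for opt in opts.split(","):
                key, _, val = opt.partition("=")
                if key == "lowerdir":
                    for d in val.split(":"):  # lowerdir may stack several dirs
                        roles[d].add("lowerdir")
                elif key in ("upperdir", "workdir"):
                    roles[val].add(key)

    for d, r in roles.items():
        if "lowerdir" in r and ({"upperdir", "workdir"} & r):
            print("shared between mounts:", d, sorted(r))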
Dec 05 09:39:58 np0005546420.localdomain podman[242311]: 2025-12-05 09:39:58.406228323 +0000 UTC m=+2.522355148 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 09:39:58 np0005546420.localdomain podman[242311]: 2025-12-05 09:39:58.416481197 +0000 UTC m=+2.532608012 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 05 09:39:58 np0005546420.localdomain systemd[1]: libpod-3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.scope: Deactivated successfully.
Dec 05 09:39:58 np0005546420.localdomain podman[242444]: 2025-12-05 09:39:58.495534535 +0000 UTC m=+1.654695939 container died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, version=9.6, vcs-type=git)
Dec 05 09:39:58 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.timer: Deactivated successfully.
Dec 05 09:39:58 np0005546420.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
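The 3552c0dc....timer / .service pairs being started and stopped around this restart are transient units podman creates so that systemd runs `/usr/bin/podman healthcheck run <id>` on an interval; they live and die with the container. A sketch of reading the health state those runs maintain (podman assumed on PATH; the JSON key has moved between podman versions, hence the double lookup):

    # Sketch: query the health status kept up to date by the periodic
    # `podman healthcheck run` invocations logged above.
    import json
    import subprocess

    def health_status(name: str) -> str:
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(out)[0]["State"]
        # Older podman reports health under "Healthcheck", newer under "Health".
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("openstack_network_exporter"))  # e.g. "starting" or "healthy"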
Dec 05 09:39:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:39:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74-userdata-shm.mount: Deactivated successfully.
Dec 05 09:39:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:40:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 05 09:40:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 05 09:40:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-86684bd1df7cf9c5cddf46305cbbab75dc1bf502e65083299b906555dd95a0e2-merged.mount: Deactivated successfully.
Dec 05 09:40:00 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:40:00 np0005546420.localdomain podman[242444]: 2025-12-05 09:40:00.128643694 +0000 UTC m=+3.287805018 container cleanup 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Dec 05 09:40:00 np0005546420.localdomain podman[242444]: openstack_network_exporter
Dec 05 09:40:00 np0005546420.localdomain podman[242463]: 2025-12-05 09:40:00.197442128 +0000 UTC m=+1.699458678 container cleanup 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 09:40:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53491 DF PROTO=TCP SPT=46928 DPT=9102 SEQ=1717885807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC73ED90000000001030307) 
Dec 05 09:40:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:40:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:40:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:40:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15152 DF PROTO=TCP SPT=53404 DPT=9882 SEQ=3813609787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC745D90000000001030307) 
Dec 05 09:40:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e28f9293ec6754804a09c7d9d69f59819a47e4fdd6275f5b72f6e2577ab30af0-merged.mount: Deactivated successfully.
Dec 05 09:40:02 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:02 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:02 np0005546420.localdomain systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
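status=2/INVALIDARGUMENT is the exit code of the unit's main podman process; systemd records it and, below, marks the unit "Failed with result 'exit-code'" before the restart proceeds. A sketch for pulling those fields back out of systemd for a post-mortem (systemctl show prints key=value pairs):

    # Sketch: read the failure details systemd recorded for the unit above.
    import subprocess

    unit = "edpm_openstack_network_exporter.service"
    out = subprocess.run(
        ["systemctl", "show", unit, "-p",
         "ActiveState,SubState,Result,ExecMainStatus"],
        capture_output=True, text=True, check=True).stdout
    info = dict(line.split("=", 1) for line in out.splitlines() if line)
    print(info)  # e.g. {'ActiveState': 'failed', ..., 'ExecMainStatus': '2'}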
Dec 05 09:40:03 np0005546420.localdomain podman[242477]: 2025-12-05 09:40:03.028690806 +0000 UTC m=+2.348032754 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:40:03 np0005546420.localdomain podman[242477]: 2025-12-05 09:40:03.063413848 +0000 UTC m=+2.382755786 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:40:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:40:04.087 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:40:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:40:04.090 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:40:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:40:04.090 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:40:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20609 DF PROTO=TCP SPT=58220 DPT=9105 SEQ=1579501050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC74DDA0000000001030307) 
Dec 05 09:40:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:05 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:05 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:05 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:40:05 np0005546420.localdomain podman[242488]: 2025-12-05 09:40:05.823424806 +0000 UTC m=+2.805000406 container cleanup 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-type=git, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 09:40:05 np0005546420.localdomain podman[242488]: openstack_network_exporter
Dec 05 09:40:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:06.368 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:40:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:06.371 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:40:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:06.371 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:40:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:06.372 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:40:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:06.372 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:40:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:06.373 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:40:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10643 DF PROTO=TCP SPT=54708 DPT=9101 SEQ=3568105920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC757DA0000000001030307) 
Dec 05 09:40:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:07.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:40:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:07.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:40:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:07.042 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:40:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:07.060 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:40:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:07.060 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:40:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:07.061 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
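The block of "Running periodic task ComputeManager._*" lines comes from oslo_service.periodic_task: every decorated method on the manager is invoked in turn when the periodic runner fires. A condensed sketch of that mechanism (assumes oslo.service and oslo.config are installed; the method name is illustrative):

    # Sketch of the oslo.service mechanism behind the "Running periodic task"
    # debug lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # run at most once a minute
        def _poll_something(self, context):
            pass  # each decorated method is logged as "Running periodic task ..."

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)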
Dec 05 09:40:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:07 np0005546420.localdomain systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 05 09:40:07 np0005546420.localdomain systemd[1]: Stopped openstack_network_exporter container.
Dec 05 09:40:07 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:07 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:07 np0005546420.localdomain systemd[1]: Starting openstack_network_exporter container...
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.068 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.068 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.069 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.069 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.071 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:40:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.541 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:40:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.722 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.724 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=13187MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.724 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.724 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.785 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.785 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:40:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:08.806 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:40:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:09 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:40:09 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86684bd1df7cf9c5cddf46305cbbab75dc1bf502e65083299b906555dd95a0e2/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 09:40:09 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86684bd1df7cf9c5cddf46305cbbab75dc1bf502e65083299b906555dd95a0e2/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 09:40:09 np0005546420.localdomain podman[242545]: 2025-12-05 09:40:09.184568886 +0000 UTC m=+0.625450448 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 05 09:40:09 np0005546420.localdomain podman[242545]: 2025-12-05 09:40:09.199444832 +0000 UTC m=+0.640326414 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:40:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:40:09 np0005546420.localdomain podman[242509]: 2025-12-05 09:40:09.260148308 +0000 UTC m=+1.340677641 container init 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible)
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *bridge.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *coverage.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *datapath.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *iface.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *memory.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *ovnnorthd.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *ovn.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *ovsdbserver.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *pmd_perf.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *pmd_rxq.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: INFO    09:40:09 main.go:48: registering *vswitch.Collector
Dec 05 09:40:09 np0005546420.localdomain openstack_network_exporter[242579]: NOTICE  09:40:09 main.go:82: listening on http://:9105/metrics
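"listening on http://:9105/metrics" means the restarted exporter now serves Prometheus text format on every host address (net=host in its config), which is also why the DPT=9105 drop above is a failed scrape. A sketch of a manual scrape from the node itself (localhost assumed reachable):

    # Sketch: fetch the exporter's samples, skipping the "# HELP"/"# TYPE"
    # comment lines of the Prometheus text format.
    from urllib.request import urlopen

    with urlopen("http://127.0.0.1:9105/metrics", timeout=5) as resp:
        for sample in resp.read().decode().splitlines():
            if sample and not sample.startswith("#"):
                print(sample)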
Dec 05 09:40:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:40:09 np0005546420.localdomain podman[242509]: 2025-12-05 09:40:09.291088695 +0000 UTC m=+1.371618018 container start 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Dec 05 09:40:09 np0005546420.localdomain podman[242509]: openstack_network_exporter
Dec 05 09:40:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:09.323 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:40:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:09.332 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:40:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:09.352 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:40:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:09.355 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:40:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:40:09.355 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
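[annotation] The inventory dict reported to placement above determines schedulable capacity as (total - reserved) * allocation_ratio. A worked example using exactly the values logged:

    # Standard placement capacity math applied to the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} allocatable")
    # VCPU: 128, MEMORY_MB: 15226, DISK_GB: 41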
Dec 05 09:40:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:40:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 05 09:40:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-67c0121ab2e02c08e681d8a85898c08bf802edfec3fbfb45ad79be05f6aa5dc4-merged.mount: Deactivated successfully.
Dec 05 09:40:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34816 DF PROTO=TCP SPT=33222 DPT=9100 SEQ=3630913396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7669A0000000001030307) 
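[annotation] These kernel DROPPING lines are netfilter LOG output: a TCP SYN from 192.168.122.10 to port 9100 (node_exporter) arriving on br-ex is logged and dropped, and the same SEQ retransmits in later lines. A small parser for the key=value fields in such records (sketch; the field set matches the lines in this log):

    # Extract the interesting fields from a netfilter LOG line like the one above.
    import re

    line = ('DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b '
            'SRC=192.168.122.10 DST=192.168.122.107 PROTO=TCP SPT=33222 DPT=9100')
    fields = dict(re.findall(r'(\w+)=(\S+)', line))   # empty OUT= is simply skipped
    print(fields["SRC"], "->", fields["DST"], "port", fields["DPT"])
    # 192.168.122.10 -> 192.168.122.107 port 9100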
Dec 05 09:40:10 np0005546420.localdomain systemd[1]: Started openstack_network_exporter container.
Dec 05 09:40:10 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:40:10 np0005546420.localdomain podman[242594]: 2025-12-05 09:40:10.85478417 +0000 UTC m=+1.559019103 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 05 09:40:10 np0005546420.localdomain sudo[242438]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:11 np0005546420.localdomain sudo[242723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbbsfwphigzmrdevyxkwbrtqevfgeuyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927611.0803537-2310-160521916782010/AnsiballZ_find.py
Dec 05 09:40:11 np0005546420.localdomain sudo[242723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:40:11 np0005546420.localdomain podman[242594]: 2025-12-05 09:40:11.39536094 +0000 UTC m=+2.099595813 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Dec 05 09:40:11 np0005546420.localdomain python3.9[242725]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
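[annotation] The Zuul-driven play is enumerating the per-container healthcheck directories (file_type=directory, recurse=False). Roughly equivalent to the following local sketch; this is not the module's actual implementation:

    # Local approximation of the ansible.builtin.find call logged above.
    import os

    base = "/var/lib/openstack/healthchecks/"
    dirs = [entry.path for entry in os.scandir(base) if entry.is_dir()]
    print(dirs)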
Dec 05 09:40:11 np0005546420.localdomain sudo[242723]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34817 DF PROTO=TCP SPT=33222 DPT=9100 SEQ=3630913396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC76E990000000001030307) 
Dec 05 09:40:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 05 09:40:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:40:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:40:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
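[annotation] Every per-instance pollster in this burst is skipped because the polling cycle found no resources (no guests to meter on this compute). A quick way to summarize such a burst from a captured journal excerpt (sketch; "journal.txt" is a hypothetical capture of this log):

    # Count "Skip pollster <name>" occurrences in a saved journal excerpt.
    import re
    from collections import Counter

    journal_text = open("journal.txt").read()
    names = re.findall(r"Skip pollster ([\w.]+),", journal_text)
    for name, count in Counter(names).most_common():
        print(count, name)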
Dec 05 09:40:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 05 09:40:13 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:40:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:40:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 05 09:40:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:40:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:40:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a594ca6f65c5dc922c764b7fba6bddaef9e5a11599ecac6b1adff7ab94f7ceb9-merged.mount: Deactivated successfully.
Dec 05 09:40:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:40:16 np0005546420.localdomain podman[242743]: 2025-12-05 09:40:16.390904807 +0000 UTC m=+0.086801737 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:40:16 np0005546420.localdomain podman[242743]: 2025-12-05 09:40:16.428422205 +0000 UTC m=+0.124319125 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:40:16 np0005546420.localdomain podman[242743]: unhealthy
Dec 05 09:40:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34818 DF PROTO=TCP SPT=33222 DPT=9100 SEQ=3630913396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC77E590000000001030307) 
Dec 05 09:40:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:40:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:40:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:18 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:40:18 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
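[annotation] The unhealthy status above comes from the transient "/usr/bin/podman healthcheck run <id>" unit exiting nonzero (status=1/FAILURE). Re-running the check by hand reproduces what systemd saw (sketch; container ID taken from the lines above):

    # Re-run the container healthcheck the same way the systemd timer does.
    # Exit code 0 means healthy; nonzero matches the 1/FAILURE reported above.
    import subprocess

    cid = "db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9"
    result = subprocess.run(["podman", "healthcheck", "run", cid])
    print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")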
Dec 05 09:40:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15153 DF PROTO=TCP SPT=53404 DPT=9882 SEQ=3813609787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC785DA0000000001030307) 
Dec 05 09:40:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:40:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:20 np0005546420.localdomain systemd[1]: tmp-crun.SCuROm.mount: Deactivated successfully.
Dec 05 09:40:20 np0005546420.localdomain podman[242766]: 2025-12-05 09:40:20.564165775 +0000 UTC m=+0.136856488 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller)
Dec 05 09:40:20 np0005546420.localdomain podman[242766]: 2025-12-05 09:40:20.598066683 +0000 UTC m=+0.170757406 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 09:40:21 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
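[annotation] This overlayfs warning means one overlay mount's lowerdir is simultaneously in use as another mount's upperdir/workdir, which containers-storage can trigger when the same layers are mounted concurrently; it is likely related to the podman "device or resource busy" error logged later at 09:40:37. Listing overlay mounts with their layer directories makes the overlap visible (sketch):

    # List overlay mounts and their lowerdir/upperdir/workdir options
    # to spot layers shared between mounts.
    with open("/proc/mounts") as f:
        for line in f:
            dev, mnt, fstype, opts = line.split()[:4]
            if fstype == "overlay":
                dirs = [o for o in opts.split(",")
                        if o.startswith(("lowerdir=", "upperdir=", "workdir="))]
                print(mnt, dirs)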
Dec 05 09:40:21 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:40:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 05 09:40:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47061 DF PROTO=TCP SPT=39752 DPT=9105 SEQ=3280504076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7925A0000000001030307) 
Dec 05 09:40:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:40:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:40:23 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34819 DF PROTO=TCP SPT=33222 DPT=9100 SEQ=3630913396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC79DDA0000000001030307) 
Dec 05 09:40:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:40:25 np0005546420.localdomain systemd[1]: tmp-crun.VafsQV.mount: Deactivated successfully.
Dec 05 09:40:25 np0005546420.localdomain podman[242791]: 2025-12-05 09:40:25.520015185 +0000 UTC m=+0.096636521 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:40:25 np0005546420.localdomain podman[242791]: 2025-12-05 09:40:25.550496158 +0000 UTC m=+0.127117504 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 09:40:25 np0005546420.localdomain podman[242791]: unhealthy
Dec 05 09:40:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-62dc5ca56cabff6fee2b8a4f6e4dde9258d2fdbc443d9294aabf255694ff62dc-merged.mount: Deactivated successfully.
Dec 05 09:40:26 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:26 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:40:26 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Failed with result 'exit-code'.
Dec 05 09:40:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:40:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 05 09:40:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:40:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:28 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:29 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:40:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:40:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:40:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63570 DF PROTO=TCP SPT=53744 DPT=9102 SEQ=2584242375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7B3D90000000001030307) 
Dec 05 09:40:30 np0005546420.localdomain podman[242809]: 2025-12-05 09:40:30.524068088 +0000 UTC m=+0.101532231 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 09:40:30 np0005546420.localdomain podman[242809]: 2025-12-05 09:40:30.558348868 +0000 UTC m=+0.135813011 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 09:40:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 05 09:40:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ed892db9d5a7f7e4ce8fde13396eaa6545b70008988b501a346c5f00ef20fcbd-merged.mount: Deactivated successfully.
Dec 05 09:40:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e28f9293ec6754804a09c7d9d69f59819a47e4fdd6275f5b72f6e2577ab30af0-merged.mount: Deactivated successfully.
Dec 05 09:40:32 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:40:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26339 DF PROTO=TCP SPT=37622 DPT=9882 SEQ=2303927888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7BBD90000000001030307) 
Dec 05 09:40:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47063 DF PROTO=TCP SPT=39752 DPT=9105 SEQ=3280504076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7C1DA0000000001030307) 
Dec 05 09:40:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:40:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:35 np0005546420.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:35 np0005546420.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:35 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:40:36 np0005546420.localdomain podman[242824]: 2025-12-05 09:40:36.509374284 +0000 UTC m=+0.083475417 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:40:36 np0005546420.localdomain podman[242824]: 2025-12-05 09:40:36.546360358 +0000 UTC m=+0.120461471 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
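[annotation] node_exporter is started above with most collectors disabled and the systemd collector restricted to units matching (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service (the doubled backslash in the log is Python-repr escaping of the config). node_exporter anchors include patterns, so a full match is the right comparison (an assumption based on its collector flag handling); a sketch of which units pass the filter, with an illustrative unit list:

    # Check unit names against the systemd collector's unit-include pattern.
    import re

    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, bool(pattern.fullmatch(unit)))
    # Only sshd.service falls outside the include list.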
Dec 05 09:40:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39188 DF PROTO=TCP SPT=43554 DPT=9101 SEQ=3313825904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7CDD90000000001030307) 
Dec 05 09:40:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:37 np0005546420.localdomain podman[240363]: time="2025-12-05T09:40:37Z" level=error msg="Getting root fs size for \"3436625262c0d6a8d425673ff154c7b6f4d6e143b4ba733fbb7e5532420f42fa\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": unmounting layer f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958: replacing mount point \"/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged\": device or resource busy"
Dec 05 09:40:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:37 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:40:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:40:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:40:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7b5d5b29b19f7b6b07c8152e6495d006ec06094c7b209466fc3f0158f64c00cf-merged.mount: Deactivated successfully.
Dec 05 09:40:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5851 DF PROTO=TCP SPT=48820 DPT=9100 SEQ=2250497722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7DBD90000000001030307) 
Dec 05 09:40:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:40:41 np0005546420.localdomain podman[242847]: 2025-12-05 09:40:41.63105918 +0000 UTC m=+0.203053350 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:40:41 np0005546420.localdomain podman[242847]: 2025-12-05 09:40:41.675710618 +0000 UTC m=+0.247704798 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 09:40:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a594ca6f65c5dc922c764b7fba6bddaef9e5a11599ecac6b1adff7ab94f7ceb9-merged.mount: Deactivated successfully.
Dec 05 09:40:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5852 DF PROTO=TCP SPT=48820 DPT=9100 SEQ=2250497722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7E3DA0000000001030307) 
Dec 05 09:40:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:43 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:43 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:40:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:40:43 np0005546420.localdomain podman[242866]: 2025-12-05 09:40:43.516789816 +0000 UTC m=+0.091189254 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, vcs-type=git, release=1755695350, container_name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:40:43 np0005546420.localdomain podman[242866]: 2025-12-05 09:40:43.56035549 +0000 UTC m=+0.134754938 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=)
Dec 05 09:40:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:45 np0005546420.localdomain podman[240363]: time="2025-12-05T09:40:45Z" level=error msg="Getting root fs size for \"5e2a5adfbfd5bb2dd81486ed29d05b2f697df8f312c864e3345b5554e3920376\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": unmounting layer f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958: replacing mount point \"/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged\": device or resource busy"
Dec 05 09:40:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:46 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:40:46 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5853 DF PROTO=TCP SPT=48820 DPT=9100 SEQ=2250497722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7F3990000000001030307) 
Dec 05 09:40:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52588 DF PROTO=TCP SPT=43964 DPT=9105 SEQ=2576351854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC7FB810000000001030307) 
Dec 05 09:40:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:40:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-28e6b391ec59dcd99cc00b446ec20b036d136b8f7911be581529c928ff9bef29-merged.mount: Deactivated successfully.
Dec 05 09:40:49 np0005546420.localdomain systemd[1]: tmp-crun.hZcd4e.mount: Deactivated successfully.
Dec 05 09:40:49 np0005546420.localdomain podman[242886]: 2025-12-05 09:40:49.017633894 +0000 UTC m=+0.095480525 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:40:49 np0005546420.localdomain podman[242886]: 2025-12-05 09:40:49.025189396 +0000 UTC m=+0.103035997 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:40:49 np0005546420.localdomain podman[242886]: unhealthy
Dec 05 09:40:49 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:49 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:40:49 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
Dec 05 09:40:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-28e6b391ec59dcd99cc00b446ec20b036d136b8f7911be581529c928ff9bef29-merged.mount: Deactivated successfully.
Dec 05 09:40:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:40:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:40:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 05 09:40:50 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:50 np0005546420.localdomain podman[240363]: time="2025-12-05T09:40:50Z" level=error msg="Getting root fs size for \"3282a1048676fd8a9ec0469ef95713bacf934e86cc87953eb578e832b2d3a781\": unmounting layer c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6: replacing mount point \"/var/lib/containers/storage/overlay/c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6/merged\": device or resource busy"
Dec 05 09:40:50 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:50 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:40:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:40:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:40:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:51 np0005546420.localdomain podman[242909]: 2025-12-05 09:40:51.542726753 +0000 UTC m=+0.112081554 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:40:51 np0005546420.localdomain podman[242909]: 2025-12-05 09:40:51.621171295 +0000 UTC m=+0.190526106 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 09:40:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52590 DF PROTO=TCP SPT=43964 DPT=9105 SEQ=2576351854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC807990000000001030307) 
Dec 05 09:40:51 np0005546420.localdomain systemd[1]: tmp-crun.pplSpH.mount: Deactivated successfully.
Dec 05 09:40:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:53 np0005546420.localdomain sudo[242931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:40:53 np0005546420.localdomain sudo[242931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:40:53 np0005546420.localdomain sudo[242931]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:53 np0005546420.localdomain sudo[242949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:40:53 np0005546420.localdomain sudo[242949]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:40:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-62dc5ca56cabff6fee2b8a4f6e4dde9258d2fdbc443d9294aabf255694ff62dc-merged.mount: Deactivated successfully.
Dec 05 09:40:53 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:40:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 05 09:40:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1d1a749154e63d40b680bc56b84ad99f9346ef73a071954dcf2dda725e125803-merged.mount: Deactivated successfully.
Dec 05 09:40:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:40:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5854 DF PROTO=TCP SPT=48820 DPT=9100 SEQ=2250497722 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC813D90000000001030307) 
Dec 05 09:40:55 np0005546420.localdomain sudo[242949]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 05 09:40:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:40:55 np0005546420.localdomain sudo[242999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:40:55 np0005546420.localdomain sudo[242999]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:40:55 np0005546420.localdomain sudo[242999]: pam_unix(sudo:session): session closed for user root
Dec 05 09:40:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:40:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:40:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:40:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:40:56 np0005546420.localdomain podman[243017]: 2025-12-05 09:40:56.697210113 +0000 UTC m=+0.103559843 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:40:56 np0005546420.localdomain podman[243017]: 2025-12-05 09:40:56.731379479 +0000 UTC m=+0.137729229 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute)
Dec 05 09:40:56 np0005546420.localdomain podman[243017]: unhealthy
Dec 05 09:40:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:40:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:40:58 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:40:58 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Failed with result 'exit-code'.
Dec 05 09:40:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 05 09:40:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ed892db9d5a7f7e4ce8fde13396eaa6545b70008988b501a346c5f00ef20fcbd-merged.mount: Deactivated successfully.
Dec 05 09:40:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ed892db9d5a7f7e4ce8fde13396eaa6545b70008988b501a346c5f00ef20fcbd-merged.mount: Deactivated successfully.
Dec 05 09:40:59 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23888 DF PROTO=TCP SPT=37014 DPT=9102 SEQ=3146250194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC829190000000001030307) 
Dec 05 09:41:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:41:01 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:01 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:01 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:41:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:02 np0005546420.localdomain podman[243035]: 2025-12-05 09:41:02.470709423 +0000 UTC m=+0.076075212 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:41:02 np0005546420.localdomain podman[243035]: 2025-12-05 09:41:02.508014965 +0000 UTC m=+0.113380754 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:41:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55933 DF PROTO=TCP SPT=58324 DPT=9882 SEQ=2870925048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC831DA0000000001030307) 
Dec 05 09:41:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:41:04.087 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:41:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:41:04.088 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:41:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:41:04.088 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:41:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52592 DF PROTO=TCP SPT=43964 DPT=9105 SEQ=2576351854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC837DA0000000001030307) 
Dec 05 09:41:04 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:04 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:04 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:41:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:04 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:04 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:05.351 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:41:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:05.374 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:41:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:05.374 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:41:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:06.059 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:41:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27871 DF PROTO=TCP SPT=40208 DPT=9101 SEQ=2759187397 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC841DA0000000001030307) 
Dec 05 09:41:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:07.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:41:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:07.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:41:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:07.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:41:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:07.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:41:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d-merged.mount: Deactivated successfully.
Dec 05 09:41:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:08.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:41:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:41:08 np0005546420.localdomain podman[243053]: 2025-12-05 09:41:08.230057739 +0000 UTC m=+0.083886791 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:41:08 np0005546420.localdomain podman[243053]: 2025-12-05 09:41:08.239701524 +0000 UTC m=+0.093530566 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:41:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:41:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:09.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:41:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:09.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:41:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:09.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:41:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:09.057 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:41:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:41:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7b5d5b29b19f7b6b07c8152e6495d006ec06094c7b209466fc3f0158f64c00cf-merged.mount: Deactivated successfully.
Dec 05 09:41:09 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:41:09 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:09 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.064 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.064 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.064 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.065 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.065 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:41:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.549 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:41:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40440 DF PROTO=TCP SPT=36742 DPT=9100 SEQ=3268666892 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC850DA0000000001030307) 
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.766 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.768 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=13186MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.768 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.768 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.828 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.828 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:41:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:10.845 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:41:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:41:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-452fa9ef0503bc3aa3c08de7cd537beefc7561b4484c5941b91d2e19b04d76e4-merged.mount: Deactivated successfully.
Dec 05 09:41:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-452fa9ef0503bc3aa3c08de7cd537beefc7561b4484c5941b91d2e19b04d76e4-merged.mount: Deactivated successfully.
Dec 05 09:41:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:11.339 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:41:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:11.349 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:41:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:11.362 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:41:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:11.364 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:41:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:41:11.364 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:41:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40441 DF PROTO=TCP SPT=36742 DPT=9100 SEQ=3268666892 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC858D90000000001030307) 
Dec 05 09:41:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:41:13 np0005546420.localdomain systemd[1]: tmp-crun.Gs1B1H.mount: Deactivated successfully.
Dec 05 09:41:13 np0005546420.localdomain podman[243120]: 2025-12-05 09:41:13.492276638 +0000 UTC m=+0.073670997 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 09:41:13 np0005546420.localdomain podman[243120]: 2025-12-05 09:41:13.503739819 +0000 UTC m=+0.085134158 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 09:41:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:41:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:41:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:14 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:41:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:41:16 np0005546420.localdomain podman[243138]: 2025-12-05 09:41:16.520400023 +0000 UTC m=+0.097969661 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container)
Dec 05 09:41:16 np0005546420.localdomain podman[243138]: 2025-12-05 09:41:16.536350222 +0000 UTC m=+0.113919860 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 09:41:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40442 DF PROTO=TCP SPT=36742 DPT=9100 SEQ=3268666892 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC868990000000001030307) 
Dec 05 09:41:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:17 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:41:17 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:17 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:17 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:17 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19676 DF PROTO=TCP SPT=58410 DPT=9105 SEQ=1007716298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC870B20000000001030307) 
Dec 05 09:41:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:41:19 np0005546420.localdomain podman[243158]: 2025-12-05 09:41:19.858623216 +0000 UTC m=+0.088254495 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:41:19 np0005546420.localdomain podman[243158]: 2025-12-05 09:41:19.870325374 +0000 UTC m=+0.099956663 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:41:19 np0005546420.localdomain podman[243158]: unhealthy
Dec 05 09:41:20 np0005546420.localdomain systemd[1]: tmp-crun.DdKjKJ.mount: Deactivated successfully.
Dec 05 09:41:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-28e6b391ec59dcd99cc00b446ec20b036d136b8f7911be581529c928ff9bef29-merged.mount: Deactivated successfully.
Dec 05 09:41:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-28e6b391ec59dcd99cc00b446ec20b036d136b8f7911be581529c928ff9bef29-merged.mount: Deactivated successfully.
Dec 05 09:41:20 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:41:20 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
Dec 05 09:41:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 05 09:41:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 05 09:41:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19678 DF PROTO=TCP SPT=58410 DPT=9105 SEQ=1007716298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC87CD90000000001030307) 
Dec 05 09:41:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:41:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1a1f4c636b38260509d0a72095ca55b50ae4106843ada128752b1ecf32659770-merged.mount: Deactivated successfully.
Dec 05 09:41:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:41:23 np0005546420.localdomain podman[243180]: 2025-12-05 09:41:23.958749633 +0000 UTC m=+0.103356166 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:41:24 np0005546420.localdomain podman[243180]: 2025-12-05 09:41:24.02849983 +0000 UTC m=+0.173106393 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:41:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 05 09:41:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1d1a749154e63d40b680bc56b84ad99f9346ef73a071954dcf2dda725e125803-merged.mount: Deactivated successfully.
Dec 05 09:41:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23890 DF PROTO=TCP SPT=37014 DPT=9102 SEQ=3146250194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC889D90000000001030307) 
Dec 05 09:41:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:26 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:41:27 np0005546420.localdomain sudo[243295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqansxsubcnwrxinqtzrxizzgfaimbbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927687.0449748-2568-187045878682015/AnsiballZ_podman_container_info.py
Dec 05 09:41:27 np0005546420.localdomain sudo[243295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:27 np0005546420.localdomain python3.9[243297]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 05 09:41:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:41:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:29 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:29 np0005546420.localdomain systemd[1]: tmp-crun.wPsyb6.mount: Deactivated successfully.
Dec 05 09:41:29 np0005546420.localdomain podman[243309]: 2025-12-05 09:41:29.923150787 +0000 UTC m=+1.490250912 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:41:29 np0005546420.localdomain podman[243309]: 2025-12-05 09:41:29.956900977 +0000 UTC m=+1.524001132 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec 05 09:41:29 np0005546420.localdomain podman[243309]: unhealthy
Dec 05 09:41:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32695 DF PROTO=TCP SPT=51918 DPT=9102 SEQ=398716287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC89E590000000001030307) 
Dec 05 09:41:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:32 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:32 np0005546420.localdomain sudo[243295]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:32 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:41:32 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Failed with result 'exit-code'.
Dec 05 09:41:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60394 DF PROTO=TCP SPT=33028 DPT=9882 SEQ=1356678096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC8A5DA0000000001030307) 
Dec 05 09:41:32 np0005546420.localdomain sudo[243431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hedhirpqzqrtgwgspsyyjwlqqnrzjjju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927692.4229505-2576-272491735503349/AnsiballZ_podman_container_exec.py
Dec 05 09:41:32 np0005546420.localdomain sudo[243431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:32 np0005546420.localdomain python3.9[243433]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:41:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:33 np0005546420.localdomain systemd[1]: Started libpod-conmon-d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.scope.
Dec 05 09:41:33 np0005546420.localdomain podman[243434]: 2025-12-05 09:41:33.046318058 +0000 UTC m=+0.132216960 container exec d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 05 09:41:33 np0005546420.localdomain podman[243434]: 2025-12-05 09:41:33.079389468 +0000 UTC m=+0.165288360 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 09:41:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:33 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:33 np0005546420.localdomain sudo[243431]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:34 np0005546420.localdomain sudo[243568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbudveampyhrhhjbsgapujgidykzeodl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927693.7674215-2584-71392682894267/AnsiballZ_podman_container_exec.py
Dec 05 09:41:34 np0005546420.localdomain sudo[243568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:41:34 np0005546420.localdomain python3.9[243570]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:41:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19680 DF PROTO=TCP SPT=58410 DPT=9105 SEQ=1007716298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC8ADD90000000001030307) 
Dec 05 09:41:35 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-fb09081a0f64c6cf9725f53043f5bfef7ea250bf1548c4bcadf49dc8ee839156-merged.mount: Deactivated successfully.
Dec 05 09:41:36 np0005546420.localdomain systemd[1]: libpod-conmon-d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.scope: Deactivated successfully.
Dec 05 09:41:36 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:36 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:36 np0005546420.localdomain podman[243571]: 2025-12-05 09:41:36.294267337 +0000 UTC m=+1.896394030 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 09:41:36 np0005546420.localdomain podman[243571]: 2025-12-05 09:41:36.300862893 +0000 UTC m=+1.902989616 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 09:41:36 np0005546420.localdomain systemd[1]: Started libpod-conmon-d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.scope.
Dec 05 09:41:36 np0005546420.localdomain podman[243578]: 2025-12-05 09:41:36.359012676 +0000 UTC m=+1.928242454 container exec d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 09:41:36 np0005546420.localdomain podman[243578]: 2025-12-05 09:41:36.36423415 +0000 UTC m=+1.933463928 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 05 09:41:36 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:41:36 np0005546420.localdomain sudo[243568]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:37 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9178 DF PROTO=TCP SPT=38410 DPT=9101 SEQ=3792935075 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC8B7D90000000001030307) 
Dec 05 09:41:37 np0005546420.localdomain sudo[243724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpgzkpqyxuwplwdwpicioldxtxkdsjns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927697.1595175-2592-73834869059481/AnsiballZ_file.py
Dec 05 09:41:37 np0005546420.localdomain sudo[243724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:37 np0005546420.localdomain python3.9[243726]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:37 np0005546420.localdomain sudo[243724]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:38 np0005546420.localdomain sudo[243834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yalahqttcijbufpvkvuesfonlzsaiouc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927697.842708-2601-40434523043049/AnsiballZ_podman_container_info.py
Dec 05 09:41:38 np0005546420.localdomain sudo[243834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:38 np0005546420.localdomain python3.9[243836]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 05 09:41:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d-merged.mount: Deactivated successfully.
Dec 05 09:41:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-baf77a242921587d546dbc1c79dfabaeff80cdc186f0b5132ac3cd078884ad2d-merged.mount: Deactivated successfully.
Dec 05 09:41:39 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:39 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:41:39 np0005546420.localdomain systemd[1]: libpod-conmon-d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.scope: Deactivated successfully.
Dec 05 09:41:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:41:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:41:40 np0005546420.localdomain podman[243850]: 2025-12-05 09:41:40.556280493 +0000 UTC m=+0.907630083 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:41:40 np0005546420.localdomain podman[243850]: 2025-12-05 09:41:40.566998051 +0000 UTC m=+0.918347621 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:41:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52054 DF PROTO=TCP SPT=37714 DPT=9100 SEQ=3381586912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC8C6190000000001030307) 
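The kernel "DROPPING:" entries are netfilter log-and-drop hits; the key=value layout (IN=, SRC=, DST=, SPT=, DPT=) is the standard netfilter log format, and the "DROPPING: " prefix is simply what the local firewall rule chose. A minimal Python sketch for pulling the fields out of such a line; the sample string is copied (truncated before OPT) from the entry above:

import re

LINE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
        "MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 "
        "DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52054 DF "
        "PROTO=TCP SPT=37714 DPT=9100 SEQ=3381586912 ACK=0 WINDOW=32640 "
        "RES=0x00 SYN URGP=0")

def parse_nf_log(line: str) -> dict:
    # KEY=VALUE pairs; bare flags like DF/SYN carry no '=' and are skipped here
    return dict(m.groups() for m in re.finditer(r"(\w+)=(\S*)", line))

fields = parse_nf_log(LINE)
print(fields["SRC"], "->", fields["DST"], "dport", fields["DPT"])

Note that the later DROPPING entries repeat SEQ=3381586912 with climbing ID values (52054, 52055, 52056, 52057): the same SYN to the node_exporter port 9100 is being retransmitted and dropped each time.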
Dec 05 09:41:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-8ef5a06c835915ebb12133f669566b60e1f53fa40ede7bc1454e6dd2b41cdd2b-merged.mount: Deactivated successfully.
Dec 05 09:41:41 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:41:41 np0005546420.localdomain sudo[243834]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:41 np0005546420.localdomain sudo[243980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvembsespnoerptuzhvpmriphoyfwbvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927701.6170287-2609-241614391392553/AnsiballZ_podman_container_exec.py
Dec 05 09:41:41 np0005546420.localdomain sudo[243980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:42 np0005546420.localdomain python3.9[243982]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
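The podman_container_exec task above amounts to running `id -u` inside the ovn_metadata_agent container. A minimal sketch of the same call via subprocess, assuming podman is on PATH and the caller holds the root privileges the playbook gains through sudo:

import subprocess

res = subprocess.run(
    ["podman", "exec", "ovn_metadata_agent", "id", "-u"],
    capture_output=True, text=True, check=True,
)
print(res.stdout.strip())  # uid of the container's default user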
Dec 05 09:41:42 np0005546420.localdomain systemd[1]: Started libpod-conmon-e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.scope.
Dec 05 09:41:42 np0005546420.localdomain podman[243983]: 2025-12-05 09:41:42.281169959 +0000 UTC m=+0.146167104 container exec e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:41:42 np0005546420.localdomain podman[243983]: 2025-12-05 09:41:42.314580919 +0000 UTC m=+0.179578074 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:41:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 05 09:41:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:41:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52055 DF PROTO=TCP SPT=37714 DPT=9100 SEQ=3381586912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC8CE1A0000000001030307) 
Dec 05 09:41:42 np0005546420.localdomain sudo[243980]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:43 np0005546420.localdomain sudo[244117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-prrxnnorhrbbytlmbgykqpjmikhiusym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927702.9443176-2617-274309125180977/AnsiballZ_podman_container_exec.py
Dec 05 09:41:43 np0005546420.localdomain sudo[244117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:41:43 np0005546420.localdomain python3.9[244119]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:41:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:41:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-452fa9ef0503bc3aa3c08de7cd537beefc7561b4484c5941b91d2e19b04d76e4-merged.mount: Deactivated successfully.
Dec 05 09:41:43 np0005546420.localdomain systemd[1]: libpod-conmon-e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.scope: Deactivated successfully.
Dec 05 09:41:43 np0005546420.localdomain systemd[1]: Started libpod-conmon-e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.scope.
Dec 05 09:41:43 np0005546420.localdomain podman[244120]: 2025-12-05 09:41:43.646102175 +0000 UTC m=+0.194623570 container exec e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 09:41:43 np0005546420.localdomain podman[244120]: 2025-12-05 09:41:43.68237085 +0000 UTC m=+0.230892275 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 09:41:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:41:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:41:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:41:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:41:46 np0005546420.localdomain sudo[244117]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:46 np0005546420.localdomain podman[244151]: 2025-12-05 09:41:46.522644826 +0000 UTC m=+1.403223373 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 09:41:46 np0005546420.localdomain podman[244151]: 2025-12-05 09:41:46.536690702 +0000 UTC m=+1.417269279 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 05 09:41:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52056 DF PROTO=TCP SPT=37714 DPT=9100 SEQ=3381586912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC8DDDA0000000001030307) 
Dec 05 09:41:47 np0005546420.localdomain sudo[244278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wgtohnlhaioqkalsbgbuhepltttzdphp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927706.9347956-2625-254854042783094/AnsiballZ_file.py
Dec 05 09:41:47 np0005546420.localdomain sudo[244278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:41:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:47 np0005546420.localdomain python3.9[244280]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
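The ansible.builtin.file task above ensures the healthcheck directory exists and recursively resets it to owner 0, group 0, mode 0700. The same effect expressed directly in Python, as a sketch of what the module does rather than its implementation (requires root, like the task):

import os

path = "/var/lib/openstack/healthchecks/ovn_metadata_agent"
os.makedirs(path, mode=0o700, exist_ok=True)          # state=directory
for dirpath, _dirnames, filenames in os.walk(path):
    for p in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
        os.chown(p, 0, 0)   # owner=0 group=0
        os.chmod(p, 0o700)  # mode=0700, applied throughout because recurse=True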
Dec 05 09:41:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 05 09:41:47 np0005546420.localdomain sudo[244278]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 05 09:41:47 np0005546420.localdomain sudo[244399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bovzmzwywhebmcyewwwwqweuaqydgofd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927707.6781378-2634-189189059659233/AnsiballZ_podman_container_info.py
Dec 05 09:41:47 np0005546420.localdomain sudo[244399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:48 np0005546420.localdomain python3.9[244401]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
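The podman_container_info call above is essentially a wrapper over `podman container inspect`. A minimal sketch retrieving the same JSON for the multipathd container named in the log:

import json
import subprocess

out = subprocess.run(["podman", "container", "inspect", "multipathd"],
                     capture_output=True, text=True, check=True).stdout
info = json.loads(out)[0]   # inspect returns a one-element JSON array
print(info["Id"][:12], info["State"]["Status"])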
Dec 05 09:41:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60395 DF PROTO=TCP SPT=33028 DPT=9882 SEQ=1356678096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC8E5D90000000001030307) 
Dec 05 09:41:48 np0005546420.localdomain systemd[1]: libpod-conmon-e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.scope: Deactivated successfully.
Dec 05 09:41:48 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:48 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:41:48 np0005546420.localdomain podman[244281]: 2025-12-05 09:41:48.963592915 +0000 UTC m=+1.536201963 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Dec 05 09:41:48 np0005546420.localdomain podman[244281]: 2025-12-05 09:41:48.978363893 +0000 UTC m=+1.550972931 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:41:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:50 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:41:50 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:41:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:41:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:41:51 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:51 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:41:51 np0005546420.localdomain sudo[244399]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:51 np0005546420.localdomain podman[244425]: 2025-12-05 09:41:51.06427864 +0000 UTC m=+0.121395969 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:41:51 np0005546420.localdomain podman[244425]: 2025-12-05 09:41:51.080433319 +0000 UTC m=+0.137550658 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:41:51 np0005546420.localdomain podman[244425]: unhealthy
Dec 05 09:41:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:41:51 np0005546420.localdomain sudo[244554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vlvzrhkwzmnetudjseaeqbxhdypugtwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927711.2554889-2642-228201228563314/AnsiballZ_podman_container_exec.py
Dec 05 09:41:51 np0005546420.localdomain sudo[244554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:51 np0005546420.localdomain python3.9[244556]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:41:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8436 DF PROTO=TCP SPT=56936 DPT=9105 SEQ=3637537346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC8F1DB0000000001030307) 
Dec 05 09:41:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 05 09:41:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-9afdb42a401bcc34daaa41d4513f2b2692e74a65323c260e0716aac1381c2db1-merged.mount: Deactivated successfully.
Dec 05 09:41:52 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:41:52 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
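The sequence above is podman's healthcheck plumbing: the transient <container-id>.service unit runs `podman healthcheck run`, the check prints "unhealthy" and exits 1, and systemd records the exit-code failure. The last recorded health state can be read back with `podman inspect`; a sketch that tries both key spellings, since the field name differs across podman versions (container name podman_exporter taken from the log above):

import json
import subprocess

out = subprocess.run(["podman", "inspect", "podman_exporter"],
                     capture_output=True, text=True, check=True).stdout
state = json.loads(out)[0]["State"]
# the key is "Health" or "Healthcheck" depending on podman version
health = state.get("Health") or state.get("Healthcheck") or {}
print(health.get("Status", "unknown"))   # e.g. "unhealthy", as logged above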
Dec 05 09:41:52 np0005546420.localdomain systemd[1]: Started libpod-conmon-128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.scope.
Dec 05 09:41:52 np0005546420.localdomain podman[244557]: 2025-12-05 09:41:52.125809533 +0000 UTC m=+0.327276091 container exec 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Dec 05 09:41:52 np0005546420.localdomain podman[244557]: 2025-12-05 09:41:52.131676657 +0000 UTC m=+0.333143185 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 05 09:41:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-9afdb42a401bcc34daaa41d4513f2b2692e74a65323c260e0716aac1381c2db1-merged.mount: Deactivated successfully.
Dec 05 09:41:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 05 09:41:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 05 09:41:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 05 09:41:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:41:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1a1f4c636b38260509d0a72095ca55b50ae4106843ada128752b1ecf32659770-merged.mount: Deactivated successfully.
Dec 05 09:41:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1a1f4c636b38260509d0a72095ca55b50ae4106843ada128752b1ecf32659770-merged.mount: Deactivated successfully.
Dec 05 09:41:54 np0005546420.localdomain sudo[244554]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52057 DF PROTO=TCP SPT=37714 DPT=9100 SEQ=3381586912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC8FDD90000000001030307) 
Dec 05 09:41:55 np0005546420.localdomain sudo[244694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sapeqsqbkzcqvzjspbqyhvppzvivtbmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927714.9736786-2650-163828717540556/AnsiballZ_podman_container_exec.py
Dec 05 09:41:55 np0005546420.localdomain sudo[244694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:55 np0005546420.localdomain python3.9[244696]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:41:56 np0005546420.localdomain sudo[244710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:41:56 np0005546420.localdomain sudo[244710]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:41:56 np0005546420.localdomain sudo[244710]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:41:56 np0005546420.localdomain sudo[244728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:41:56 np0005546420.localdomain sudo[244728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:41:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 05 09:41:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 05 09:41:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:41:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:41:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:41:57 np0005546420.localdomain systemd[1]: libpod-conmon-128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.scope: Deactivated successfully.
Dec 05 09:41:57 np0005546420.localdomain podman[244759]: 2025-12-05 09:41:57.661703936 +0000 UTC m=+0.397169783 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 09:41:57 np0005546420.localdomain podman[244759]: 2025-12-05 09:41:57.706916306 +0000 UTC m=+0.442382153 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 05 09:41:57 np0005546420.localdomain systemd[1]: Started libpod-conmon-128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.scope.
Dec 05 09:41:57 np0005546420.localdomain podman[244697]: 2025-12-05 09:41:57.730779124 +0000 UTC m=+2.359144176 container exec 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:41:57 np0005546420.localdomain podman[244697]: 2025-12-05 09:41:57.766483171 +0000 UTC m=+2.394848213 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 05 09:41:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:41:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:41:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:41:58 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:41:58 np0005546420.localdomain sudo[244694]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:58 np0005546420.localdomain sudo[244728]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:59 np0005546420.localdomain sudo[244861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:41:59 np0005546420.localdomain sudo[244861]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:41:59 np0005546420.localdomain sudo[244861]: pam_unix(sudo:session): session closed for user root
Dec 05 09:41:59 np0005546420.localdomain sudo[244945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qlsvngmolpkjhvwvfpvwoophcknoadpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927719.2491047-2658-110915286604765/AnsiballZ_file.py
Dec 05 09:41:59 np0005546420.localdomain sudo[244945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:41:59 np0005546420.localdomain python3.9[244947]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:41:59 np0005546420.localdomain sudo[244945]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:00 np0005546420.localdomain sudo[245055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-axzalffynugzrmirgnrcioaykibutcen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927719.932113-2667-208962434505719/AnsiballZ_podman_container_info.py
Dec 05 09:42:00 np0005546420.localdomain sudo[245055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:00 np0005546420.localdomain python3.9[245057]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 05 09:42:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:42:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58297 DF PROTO=TCP SPT=34036 DPT=9102 SEQ=2475120595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC913990000000001030307) 
Dec 05 09:42:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:42:00 np0005546420.localdomain systemd[1]: libpod-conmon-128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.scope: Deactivated successfully.
Dec 05 09:42:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:42:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:42:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 05 09:42:02 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:02 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18340 DF PROTO=TCP SPT=49312 DPT=9882 SEQ=359047428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC91BD90000000001030307) 
Dec 05 09:42:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:04.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:04.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 09:42:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:04.059 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 09:42:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:04.060 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:04.060 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 09:42:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:42:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:42:04.100 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:42:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:42:04.101 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:42:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:42:04.101 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:42:04 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:04.104 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 05 09:42:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8438 DF PROTO=TCP SPT=56936 DPT=9105 SEQ=3637537346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC921DA0000000001030307) 
Dec 05 09:42:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 05 09:42:04 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:04 np0005546420.localdomain podman[245070]: 2025-12-05 09:42:04.332080766 +0000 UTC m=+1.982014858 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 09:42:04 np0005546420.localdomain podman[245070]: 2025-12-05 09:42:04.340265569 +0000 UTC m=+1.990199661 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:42:04 np0005546420.localdomain podman[245070]: unhealthy
Dec 05 09:42:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:05.115 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:05 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:05.115 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:42:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:42:05 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:05 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:42:05 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Failed with result 'exit-code'.
Dec 05 09:42:05 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:05 np0005546420.localdomain sudo[245055]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:42:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6600.1 total, 600.0 interval
                                                          Cumulative writes: 5715 writes, 25K keys, 5715 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5715 writes, 734 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 09:42:06 np0005546420.localdomain sudo[245195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vlnbwiwgtexwpbhwdorrztmnharyfzri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927726.0056458-2675-74262884694494/AnsiballZ_podman_container_exec.py
Dec 05 09:42:06 np0005546420.localdomain sudo[245195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:06 np0005546420.localdomain python3.9[245197]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:42:06 np0005546420.localdomain systemd[1]: Started libpod-conmon-94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.scope.
Dec 05 09:42:06 np0005546420.localdomain podman[245198]: 2025-12-05 09:42:06.629489731 +0000 UTC m=+0.138488625 container exec 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm)
Dec 05 09:42:06 np0005546420.localdomain podman[245198]: 2025-12-05 09:42:06.663098647 +0000 UTC m=+0.172097561 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 09:42:06 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17073 DF PROTO=TCP SPT=48500 DPT=9101 SEQ=2275300134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC92BD90000000001030307) 
Dec 05 09:42:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:07.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:42:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:08.037 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:08.039 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:08.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:42:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-fb09081a0f64c6cf9725f53043f5bfef7ea250bf1548c4bcadf49dc8ee839156-merged.mount: Deactivated successfully.
Dec 05 09:42:08 np0005546420.localdomain sudo[245195]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:08 np0005546420.localdomain podman[245227]: 2025-12-05 09:42:08.706237935 +0000 UTC m=+1.278561787 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 09:42:08 np0005546420.localdomain podman[245227]: 2025-12-05 09:42:08.744436738 +0000 UTC m=+1.316760600 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:42:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:09.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:09.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:09 np0005546420.localdomain sudo[245352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kczqanemufmdmkjiexkcjvvropjvecpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927728.8159657-2683-126565717168539/AnsiballZ_podman_container_exec.py
Dec 05 09:42:09 np0005546420.localdomain sudo[245352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:09 np0005546420.localdomain python3.9[245354]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:42:09 np0005546420.localdomain systemd[1]: libpod-conmon-94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.scope: Deactivated successfully.
Dec 05 09:42:09 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:09 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:42:09 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:09 np0005546420.localdomain systemd[1]: Started libpod-conmon-94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.scope.
Dec 05 09:42:09 np0005546420.localdomain podman[245355]: 2025-12-05 09:42:09.472099416 +0000 UTC m=+0.182303304 container exec 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 09:42:09 np0005546420.localdomain podman[245355]: 2025-12-05 09:42:09.502384814 +0000 UTC m=+0.212588672 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 09:42:09 np0005546420.localdomain systemd[1]: tmp-crun.hjeNNA.mount: Deactivated successfully.
Dec 05 09:42:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:10 np0005546420.localdomain sudo[245352]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:42:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47064 DF PROTO=TCP SPT=48442 DPT=9100 SEQ=3098447594 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC93B590000000001030307) 
Dec 05 09:42:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:42:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6600.2 total, 600.0 interval
                                                          Cumulative writes: 4690 writes, 21K keys, 4690 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4690 writes, 584 syncs, 8.03 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 09:42:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:10 np0005546420.localdomain sudo[245491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-giivedccqsxyiytpxxfbqubrzgbvbebd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927730.415162-2691-210118807594399/AnsiballZ_file.py
Dec 05 09:42:10 np0005546420.localdomain sudo[245491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:11.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:11.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:42:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:11.042 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:42:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:11.059 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:42:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:11 np0005546420.localdomain python3.9[245493]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:42:11 np0005546420.localdomain sudo[245491]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:42:11 np0005546420.localdomain sudo[245613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uylwzuduygcmozypijhwrccpdligibeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927731.357079-2700-74799164443640/AnsiballZ_podman_container_info.py
Dec 05 09:42:11 np0005546420.localdomain sudo[245613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 05 09:42:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-34a64d6c17ae21dd1cdba3026e372bef8c469d0e8cedde2cc51b076cd6a294ae-merged.mount: Deactivated successfully.
Dec 05 09:42:11 np0005546420.localdomain python3.9[245615]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 05 09:42:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-34a64d6c17ae21dd1cdba3026e372bef8c469d0e8cedde2cc51b076cd6a294ae-merged.mount: Deactivated successfully.
Dec 05 09:42:11 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:11 np0005546420.localdomain systemd[1]: libpod-conmon-94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.scope: Deactivated successfully.
Dec 05 09:42:11 np0005546420.localdomain podman[245549]: 2025-12-05 09:42:11.886421187 +0000 UTC m=+0.460897243 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.057 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.057 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.057 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.058 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.058 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.502 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.651 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.653 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=13113MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.653 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.653 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:42:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.757 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.758 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:42:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47065 DF PROTO=TCP SPT=48442 DPT=9100 SEQ=3098447594 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC943590000000001030307) 
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.813 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.861 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.861 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.877 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.894 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 09:42:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:12.907 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.944 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:42:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:42:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 05 09:42:13 np0005546420.localdomain podman[245549]: 2025-12-05 09:42:13.131049767 +0000 UTC m=+1.705525833 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:42:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:13.350 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:42:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:13.354 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:42:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:13.369 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:42:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:13.371 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:42:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:42:13.371 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
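
The nova_compute lines above show one resource-tracker pass: a "ceph df" probe for disk capacity, an inventory recomputation, and a no-op placement update because the inventory dict is unchanged, all while the "compute_resources" lock is held (0.718s here). A sketch of the unchanged-inventory check, with the dict copied from the log and a hypothetical update_placement helper:

    # Sketch of the "Inventory has not changed" decision from the log above.
    # The inventory shape is copied from the log; update_placement is hypothetical.
    cached = {
        "VCPU": {"total": 8, "reserved": 0, "min_unit": 1, "max_unit": 8,
                 "step_size": 1, "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "min_unit": 1,
                      "max_unit": 15738, "step_size": 1, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 41, "reserved": 0, "min_unit": 1, "max_unit": 41,
                    "step_size": 1, "allocation_ratio": 1.0},
    }

    def set_inventory_for_provider(provider, new, cached):
        if new == cached:                      # plain dict equality suffices
            print(f"Inventory has not changed for provider {provider}")
            return
        # update_placement(provider, new)      # hypothetical PUT to placement

    set_inventory_for_provider("2850b2c4-8d07-40ab-9d82-672172ca70fc",
                               dict(cached), cached)
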
Dec 05 09:42:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:42:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-8ef5a06c835915ebb12133f669566b60e1f53fa40ede7bc1454e6dd2b41cdd2b-merged.mount: Deactivated successfully.
Dec 05 09:42:14 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:42:14 np0005546420.localdomain sudo[245613]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:14 np0005546420.localdomain sudo[245789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vlfuqwlgfwssbxewimvkytzfqwfjxozh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927734.40568-2708-240226534457033/AnsiballZ_podman_container_exec.py
Dec 05 09:42:14 np0005546420.localdomain sudo[245789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:14 np0005546420.localdomain python3.9[245791]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
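
The sudo/AnsiballZ pair above is Zuul driving containers.podman.podman_container_exec: run `id -u` inside node_exporter, not detached, via the podman executable. Functionally this reduces to a podman exec; a rough subprocess equivalent, mirroring the logged parameters rather than the module's implementation:

    import subprocess

    # Rough equivalent of the logged podman_container_exec call:
    # executable=podman, name=node_exporter, command="id -u", detach=False.
    result = subprocess.run(
        ["podman", "exec", "node_exporter", "id", "-u"],
        capture_output=True, text=True, check=False,
    )
    print(result.returncode, result.stdout.strip())  # "0 0" for a root container
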
Dec 05 09:42:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:14 np0005546420.localdomain systemd[1]: Started libpod-conmon-cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.scope.
Dec 05 09:42:15 np0005546420.localdomain podman[245792]: 2025-12-05 09:42:15.002907269 +0000 UTC m=+0.106744035 container exec cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:42:15 np0005546420.localdomain podman[245792]: 2025-12-05 09:42:15.032028483 +0000 UTC m=+0.135865099 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:42:15 np0005546420.localdomain sudo[245789]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:42:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 05 09:42:16 np0005546420.localdomain sudo[245928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjmabwqqxmkalxdhezlyhqnmnzvvjvux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927735.792398-2716-197007853627418/AnsiballZ_podman_container_exec.py
Dec 05 09:42:16 np0005546420.localdomain sudo[245928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:16 np0005546420.localdomain python3.9[245930]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:42:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 05 09:42:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47066 DF PROTO=TCP SPT=48442 DPT=9100 SEQ=3098447594 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC9531A0000000001030307) 
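
The kernel "DROPPING:" entries are netfilter LOG output for SYNs from 192.168.122.10 that hit a drop rule on br-ex before reaching the exporter ports (9100, 9101, 9102, 9105, 9882 in this section). The payload is key=value pairs plus bare flags, so it parses mechanically; a small sketch over a trimmed copy of the line above:

    import re

    # Parse a (shortened) netfilter LOG line like the kernel entries above.
    line = ('DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b '
            'SRC=192.168.122.10 DST=192.168.122.107 PROTO=TCP '
            'SPT=48442 DPT=9100 SYN')
    fields = dict(re.findall(r'(\w+)=(\S*)', line))               # key=value tokens
    flags = [t for t in line.split() if '=' not in t and t != 'DROPPING:']
    print(fields['SRC'], '->', fields['DST'], 'dport', fields['DPT'], flags)
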
Dec 05 09:42:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ab5576283f602b49fd74c99052bb7baa8b8fd55184846126f29133b6a14b7c4f-merged.mount: Deactivated successfully.
Dec 05 09:42:16 np0005546420.localdomain systemd[1]: libpod-conmon-cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.scope: Deactivated successfully.
Dec 05 09:42:16 np0005546420.localdomain systemd[1]: Started libpod-conmon-cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.scope.
Dec 05 09:42:16 np0005546420.localdomain podman[245931]: 2025-12-05 09:42:16.993716067 +0000 UTC m=+0.709833111 container exec cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:42:17 np0005546420.localdomain podman[245931]: 2025-12-05 09:42:17.024663024 +0000 UTC m=+0.740780098 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:42:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:42:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35334 DF PROTO=TCP SPT=59976 DPT=9105 SEQ=1944852144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC95B110000000001030307) 
Dec 05 09:42:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:42:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:42:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:42:19 np0005546420.localdomain sudo[245928]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:19 np0005546420.localdomain podman[245958]: 2025-12-05 09:42:19.656645765 +0000 UTC m=+0.415654510 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:42:19 np0005546420.localdomain podman[245958]: 2025-12-05 09:42:19.666275261 +0000 UTC m=+0.425283976 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
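
The entries above are one complete healthcheck cycle for multipathd: systemd starts a transient `podman healthcheck run <id>` unit, podman runs the configured test ('/openstack/healthcheck') inside the container and records health_status=healthy, then the exec dies and the unit deactivates. The same cycle can be driven by hand; exit status 0 corresponds to healthy (container ID taken from the log):

    import subprocess

    # Run one healthcheck cycle manually, as the transient unit above does.
    cid = "128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931"
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else "unhealthy")   # podman exits 0 when healthy
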
Dec 05 09:42:19 np0005546420.localdomain sudo[246088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfswceohdmonmmbjynwpquzbejlolsqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927739.7620473-2724-140778095522595/AnsiballZ_file.py
Dec 05 09:42:19 np0005546420.localdomain sudo[246088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:20 np0005546420.localdomain python3.9[246090]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
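
The ansible.builtin.file task above recursively forces owner 0, group 0, mode 0700 onto the node_exporter healthcheck directory so that only root (and the container bind mount) can reach it. An illustrative plain-Python rendering of what state=directory with recurse=True amounts to (not the module's implementation; needs root):

    import os

    # Illustrative equivalent of the ansible.builtin.file task logged above.
    path = "/var/lib/openstack/healthchecks/node_exporter"
    os.makedirs(path, exist_ok=True)                     # state=directory
    for dirpath, _dirnames, filenames in os.walk(path):  # recurse=True
        for target in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            os.chown(target, 0, 0)                       # owner=0, group=0
            os.chmod(target, 0o700)                      # mode=0700
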
Dec 05 09:42:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:42:20 np0005546420.localdomain sudo[246088]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 05 09:42:20 np0005546420.localdomain sudo[246211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ybbrkcdsjsoiqqjbjwcvauwgweiuzlih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927740.3788862-2733-222173082176493/AnsiballZ_podman_container_info.py
Dec 05 09:42:20 np0005546420.localdomain sudo[246211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:20 np0005546420.localdomain python3.9[246213]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 05 09:42:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:42:21 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:42:21 np0005546420.localdomain systemd[1]: libpod-conmon-cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.scope: Deactivated successfully.
Dec 05 09:42:21 np0005546420.localdomain podman[246091]: 2025-12-05 09:42:21.860639092 +0000 UTC m=+1.683449059 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6)
Dec 05 09:42:21 np0005546420.localdomain podman[246091]: 2025-12-05 09:42:21.876342708 +0000 UTC m=+1.699152605 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Dec 05 09:42:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35336 DF PROTO=TCP SPT=59976 DPT=9105 SEQ=1944852144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC967190000000001030307) 
Dec 05 09:42:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:42:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:22 np0005546420.localdomain podman[240363]: time="2025-12-05T09:42:22Z" level=error msg="Getting root fs size for \"915184f7e00f2778e51799e5a4db1730c233f0a82ab29f0740d942127917d069\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
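
The podman error above is a race between size accounting and the mount cleanup seen throughout this section: computing a container's root filesystem size briefly mounts layer diffs, and the unmount failed with "device or resource busy" because something still held the merged directory. Which overlay mounts are still live can be read from /proc/self/mountinfo; a small sketch:

    # List overlay mounts still present under containers/storage: the
    # candidates for the EBUSY unmount failure logged above.
    with open("/proc/self/mountinfo") as f:
        for line in f:
            if "containers/storage/overlay" in line and " overlay " in line:
                print(line.split()[4])   # field 5 of mountinfo is the mount point
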
Dec 05 09:42:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:22 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:42:22 np0005546420.localdomain podman[246233]: 2025-12-05 09:42:22.941129538 +0000 UTC m=+0.766979065 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:42:22 np0005546420.localdomain podman[246233]: 2025-12-05 09:42:22.979400521 +0000 UTC m=+0.805250068 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:42:22 np0005546420.localdomain podman[246233]: unhealthy
Dec 05 09:42:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:23 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:42:23 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
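
This is the failure side of the healthcheck mechanism shown for multipathd above: the podman_exporter test exited non-zero, podman logged health_status=unhealthy and printed "unhealthy", and systemd recorded the transient unit as failed with status=1/FAILURE. The recorded state can be read back with podman inspect (container ID from the log; the "Health" key is current podman, with "Healthcheck" as an older fallback):

    import json, subprocess

    # Read back the health state podman recorded after the failed check above.
    cid = "db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9"
    out = subprocess.run(["podman", "inspect", cid],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"))   # expected here: "unhealthy"
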
Dec 05 09:42:23 np0005546420.localdomain sudo[246211]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 05 09:42:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-9afdb42a401bcc34daaa41d4513f2b2692e74a65323c260e0716aac1381c2db1-merged.mount: Deactivated successfully.
Dec 05 09:42:24 np0005546420.localdomain sudo[246363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhoctgwuwkvuctpjidgbtsgqokftgmgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927744.0937424-2741-199444444041853/AnsiballZ_podman_container_exec.py
Dec 05 09:42:24 np0005546420.localdomain sudo[246363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:24 np0005546420.localdomain python3.9[246365]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:42:24 np0005546420.localdomain systemd[1]: Started libpod-conmon-db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.scope.
Dec 05 09:42:24 np0005546420.localdomain podman[246366]: 2025-12-05 09:42:24.61563551 +0000 UTC m=+0.079469747 container exec db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:42:24 np0005546420.localdomain podman[246366]: 2025-12-05 09:42:24.644755713 +0000 UTC m=+0.108589980 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:42:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58299 DF PROTO=TCP SPT=34036 DPT=9102 SEQ=2475120595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC973D90000000001030307) 
Dec 05 09:42:25 np0005546420.localdomain systemd[1]: tmp-crun.R4iXS5.mount: Deactivated successfully.
Dec 05 09:42:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 05 09:42:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 05 09:42:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:42:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-81af632ac7b1bb30b73d3b843d9ead4231843a2eced4d0ef746349ae454b4194-merged.mount: Deactivated successfully.
Dec 05 09:42:27 np0005546420.localdomain sudo[246363]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:27 np0005546420.localdomain sudo[246503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vopjaeeqqxnmyswocbtxzxnyvgzycojb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927747.4967022-2749-173761060190354/AnsiballZ_podman_container_exec.py
Dec 05 09:42:27 np0005546420.localdomain sudo[246503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:27 np0005546420.localdomain python3.9[246505]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:42:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:42:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:42:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 05 09:42:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:42:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:42:30 np0005546420.localdomain systemd[1]: libpod-conmon-db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.scope: Deactivated successfully.
Dec 05 09:42:30 np0005546420.localdomain systemd[1]: Started libpod-conmon-db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.scope.
Dec 05 09:42:30 np0005546420.localdomain podman[246506]: 2025-12-05 09:42:30.326385376 +0000 UTC m=+2.332913129 container exec db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:42:30 np0005546420.localdomain podman[246518]: 2025-12-05 09:42:30.366108413 +0000 UTC m=+1.642565416 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 05 09:42:30 np0005546420.localdomain podman[246537]: 2025-12-05 09:42:30.421291298 +0000 UTC m=+0.083270249 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:42:30 np0005546420.localdomain podman[246506]: 2025-12-05 09:42:30.438357314 +0000 UTC m=+2.444885037 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:42:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56910 DF PROTO=TCP SPT=56226 DPT=9102 SEQ=1607890255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC988990000000001030307) 
Dec 05 09:42:30 np0005546420.localdomain podman[246518]: 2025-12-05 09:42:30.513314806 +0000 UTC m=+1.789771779 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Dec 05 09:42:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:42:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:42:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:42:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40822 DF PROTO=TCP SPT=54104 DPT=9882 SEQ=1317995603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC98FDA0000000001030307) 
Dec 05 09:42:32 np0005546420.localdomain systemd[1]: libpod-conmon-db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.scope: Deactivated successfully.
Dec 05 09:42:32 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:42:32 np0005546420.localdomain sudo[246503]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:33 np0005546420.localdomain sudo[246664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzfduijlrioscufupkjkifvbjwloruyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927752.8598492-2757-250846345199758/AnsiballZ_file.py
Dec 05 09:42:33 np0005546420.localdomain sudo[246664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:42:33 np0005546420.localdomain python3.9[246666]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:42:33 np0005546420.localdomain sudo[246664]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:42:33 np0005546420.localdomain sudo[246774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vuqwqmkfftespctczntspacvoarmksyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927753.5541048-2766-39626028716437/AnsiballZ_podman_container_info.py
Dec 05 09:42:33 np0005546420.localdomain sudo[246774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:34 np0005546420.localdomain python3.9[246776]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 05 09:42:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35338 DF PROTO=TCP SPT=59976 DPT=9105 SEQ=1944852144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC997D90000000001030307) 
Dec 05 09:42:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 05 09:42:34 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
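
The overlayfs warning above fires when a directory serving as one mount's upperdir or workdir is offered as a lowerdir to a second mount; the kernel permits it but declares file contents undefined across the two views. Such sharing can be spotted by comparing overlay mount options; a sketch over /proc/self/mountinfo:

    import re
    from collections import defaultdict

    # Flag overlay mounts whose lowerdir is some other mount's upperdir,
    # the situation behind the kernel warning above.
    uppers, lowers = set(), defaultdict(list)
    with open("/proc/self/mountinfo") as f:
        for line in f:
            if " - overlay " not in line:
                continue
            opts = line.rsplit(" ", 1)[-1]        # superblock options, last field
            uppers.update(re.findall(r"upperdir=([^,]+)", opts))
            for ld in re.findall(r"lowerdir=([^,]+)", opts):
                lowers[line.split()[4]].extend(ld.split(":"))
    for mnt, dirs in lowers.items():
        for d in dirs:
            if d in uppers:
                print(f"{mnt}: lowerdir {d} is another mount's upperdir")
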
Dec 05 09:42:34 np0005546420.localdomain sudo[246774]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:35 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:35 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:35 np0005546420.localdomain sudo[246897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rsuvfeyhinopsfapykjeyrwwuwwwixql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927754.9852784-2774-187244522387471/AnsiballZ_podman_container_exec.py
Dec 05 09:42:35 np0005546420.localdomain sudo[246897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:35 np0005546420.localdomain python3.9[246899]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:42:35 np0005546420.localdomain systemd[1]: Started libpod-conmon-3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.scope.
Dec 05 09:42:35 np0005546420.localdomain podman[246900]: 2025-12-05 09:42:35.600862449 +0000 UTC m=+0.115477997 container exec 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public)
Dec 05 09:42:35 np0005546420.localdomain podman[246900]: 2025-12-05 09:42:35.630892024 +0000 UTC m=+0.145507562 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-type=git, version=9.6, name=ubi9-minimal, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Dec 05 09:42:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:42:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:42:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 05 09:42:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 05 09:42:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39209 DF PROTO=TCP SPT=51042 DPT=9101 SEQ=1957784099 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC9A1DA0000000001030307) 
Dec 05 09:42:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:42:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5f9b52405571b7dbea88b728550de84377ddb5cebafdc587dadde8e1530aa413-merged.mount: Deactivated successfully.
Dec 05 09:42:37 np0005546420.localdomain sudo[246897]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:38 np0005546420.localdomain podman[246928]: 2025-12-05 09:42:38.003662722 +0000 UTC m=+2.071427942 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 09:42:38 np0005546420.localdomain podman[246928]: 2025-12-05 09:42:38.014883827 +0000 UTC m=+2.082649117 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute)
Dec 05 09:42:38 np0005546420.localdomain sudo[247056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dgimcjorpjgmucgqnlyubcpbqodpkwaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927758.0960634-2782-113561662452026/AnsiballZ_podman_container_exec.py
Dec 05 09:42:38 np0005546420.localdomain sudo[247056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:38 np0005546420.localdomain python3.9[247058]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 09:42:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 05 09:42:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:42:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:42:39 np0005546420.localdomain systemd[1]: libpod-conmon-3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.scope: Deactivated successfully.
Dec 05 09:42:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:42:39 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:42:39 np0005546420.localdomain systemd[1]: Started libpod-conmon-3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.scope.
Dec 05 09:42:39 np0005546420.localdomain podman[247059]: 2025-12-05 09:42:39.636033612 +0000 UTC m=+1.090204148 container exec 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 09:42:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:42:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 05 09:42:39 np0005546420.localdomain podman[247071]: 2025-12-05 09:42:39.708162553 +0000 UTC m=+0.095397848 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 09:42:39 np0005546420.localdomain podman[247059]: 2025-12-05 09:42:39.716874671 +0000 UTC m=+1.171045137 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 09:42:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 05 09:42:39 np0005546420.localdomain podman[247071]: 2025-12-05 09:42:39.74052913 +0000 UTC m=+0.127764445 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 09:42:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:40 np0005546420.localdomain systemd[1]: libpod-conmon-3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.scope: Deactivated successfully.
Dec 05 09:42:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35540 DF PROTO=TCP SPT=42572 DPT=9100 SEQ=3610718876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC9B0990000000001030307) 
Dec 05 09:42:40 np0005546420.localdomain sudo[247056]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:41 np0005546420.localdomain sudo[247212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trdbzcmbadpjklyhcrvvirovpvukqxee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927760.906605-2790-127130605400770/AnsiballZ_file.py
Dec 05 09:42:41 np0005546420.localdomain sudo[247212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:42:41 np0005546420.localdomain python3.9[247214]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:42:41 np0005546420.localdomain sudo[247212]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:42:41 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:42:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35541 DF PROTO=TCP SPT=42572 DPT=9100 SEQ=3610718876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC9B8990000000001030307) 
Dec 05 09:42:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 05 09:42:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:42:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 05 09:42:44 np0005546420.localdomain podman[247233]: 2025-12-05 09:42:44.43877463 +0000 UTC m=+0.157623365 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:42:44 np0005546420.localdomain podman[247233]: 2025-12-05 09:42:44.472924431 +0000 UTC m=+0.191773136 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:42:44 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:42:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 05 09:42:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-34a64d6c17ae21dd1cdba3026e372bef8c469d0e8cedde2cc51b076cd6a294ae-merged.mount: Deactivated successfully.
Dec 05 09:42:45 np0005546420.localdomain sshd[247254]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:42:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 05 09:42:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 05 09:42:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35542 DF PROTO=TCP SPT=42572 DPT=9100 SEQ=3610718876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC9C8590000000001030307) 
Dec 05 09:42:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 05 09:42:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:47 np0005546420.localdomain sshd[247254]: Connection reset by authenticating user root 45.140.17.124 port 40124 [preauth]
Dec 05 09:42:47 np0005546420.localdomain sshd[247256]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:42:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:42:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40823 DF PROTO=TCP SPT=54104 DPT=9882 SEQ=1317995603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC9CFD90000000001030307) 
Dec 05 09:42:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:42:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:42:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 05 09:42:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ab5576283f602b49fd74c99052bb7baa8b8fd55184846126f29133b6a14b7c4f-merged.mount: Deactivated successfully.
Dec 05 09:42:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ab5576283f602b49fd74c99052bb7baa8b8fd55184846126f29133b6a14b7c4f-merged.mount: Deactivated successfully.
Dec 05 09:42:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 05 09:42:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f-merged.mount: Deactivated successfully.
Dec 05 09:42:51 np0005546420.localdomain sshd[247256]: Connection reset by authenticating user root 45.140.17.124 port 40136 [preauth]
Dec 05 09:42:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f-merged.mount: Deactivated successfully.
Dec 05 09:42:51 np0005546420.localdomain sshd[247258]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:42:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26471 DF PROTO=TCP SPT=60544 DPT=9105 SEQ=1606988335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC9DC590000000001030307) 
Dec 05 09:42:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:42:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:42:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:42:52 np0005546420.localdomain podman[247260]: 2025-12-05 09:42:52.354076213 +0000 UTC m=+0.111486733 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 09:42:52 np0005546420.localdomain podman[247260]: 2025-12-05 09:42:52.36402245 +0000 UTC m=+0.121433000 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Dec 05 09:42:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:42:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:42:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:42:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 05 09:42:53 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:42:53 np0005546420.localdomain podman[247279]: 2025-12-05 09:42:53.319728186 +0000 UTC m=+0.146146920 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 09:42:53 np0005546420.localdomain podman[247279]: 2025-12-05 09:42:53.333361916 +0000 UTC m=+0.159780670 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, architecture=x86_64)
Dec 05 09:42:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:42:54 np0005546420.localdomain sshd[247258]: Connection reset by authenticating user root 45.140.17.124 port 40152 [preauth]
Dec 05 09:42:54 np0005546420.localdomain sshd[247310]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:42:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35543 DF PROTO=TCP SPT=42572 DPT=9100 SEQ=3610718876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC9E7DA0000000001030307) 
Dec 05 09:42:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:42:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:42:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:42:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:42:55 np0005546420.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:55 np0005546420.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:55 np0005546420.localdomain podman[240363]: time="2025-12-05T09:42:55Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged: invalid argument"
Dec 05 09:42:55 np0005546420.localdomain podman[240363]: time="2025-12-05T09:42:55Z" level=error msg="Getting root fs size for \"ab04cb20961c060e7222b6793733e2aafe112e6687eff2dd1d3e9af68f8c531e\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": creating overlay mount to /var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/IKCF27DQLZIV3KCF4TBEZZFTOC:/var/lib/containers/storage/overlay/l/TGAD4ZE6ATLQI3D32HGPCQBATK,upperdir=/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/diff,workdir=/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/work,nodev,metacopy=on\": no such file or directory"
Dec 05 09:42:55 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:42:55 np0005546420.localdomain podman[247298]: 2025-12-05 09:42:55.597767298 +0000 UTC m=+1.425592826 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:42:55 np0005546420.localdomain podman[247298]: 2025-12-05 09:42:55.632511617 +0000 UTC m=+1.460337175 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:42:55 np0005546420.localdomain podman[247298]: unhealthy
Dec 05 09:42:56 np0005546420.localdomain sshd[247310]: Connection reset by authenticating user root 45.140.17.124 port 44224 [preauth]
Dec 05 09:42:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:56 np0005546420.localdomain sshd[247322]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:42:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:42:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 05 09:42:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-aff96401da9550f267dc9e7f47ea63cc3ba29a151559ef0a7447672bdf20407f-merged.mount: Deactivated successfully.
Dec 05 09:42:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-aff96401da9550f267dc9e7f47ea63cc3ba29a151559ef0a7447672bdf20407f-merged.mount: Deactivated successfully.
Dec 05 09:42:57 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:57 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:42:57 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:42:57 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
Dec 05 09:42:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:42:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:58 np0005546420.localdomain sshd[247322]: Connection reset by authenticating user root 45.140.17.124 port 44226 [preauth]
Dec 05 09:42:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:42:59 np0005546420.localdomain sudo[247324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:42:59 np0005546420.localdomain sudo[247324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:42:59 np0005546420.localdomain sudo[247324]: pam_unix(sudo:session): session closed for user root
Dec 05 09:42:59 np0005546420.localdomain sudo[247342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:42:59 np0005546420.localdomain sudo[247342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:43:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:43:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:43:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48605 DF PROTO=TCP SPT=56454 DPT=9102 SEQ=4193752131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AC9FDD90000000001030307) 
Dec 05 09:43:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:43:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:43:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-81af632ac7b1bb30b73d3b843d9ead4231843a2eced4d0ef746349ae454b4194-merged.mount: Deactivated successfully.
Dec 05 09:43:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-81af632ac7b1bb30b73d3b843d9ead4231843a2eced4d0ef746349ae454b4194-merged.mount: Deactivated successfully.
Dec 05 09:43:01 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:01 np0005546420.localdomain sudo[247342]: pam_unix(sudo:session): session closed for user root
Dec 05 09:43:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23509 DF PROTO=TCP SPT=45226 DPT=9882 SEQ=3426550704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA05D90000000001030307) 
Dec 05 09:43:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:43:03 np0005546420.localdomain systemd[1]: tmp-crun.zIAU5Y.mount: Deactivated successfully.
Dec 05 09:43:03 np0005546420.localdomain podman[247391]: 2025-12-05 09:43:03.181039928 +0000 UTC m=+0.078521049 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:03 np0005546420.localdomain podman[247391]: 2025-12-05 09:43:03.293405457 +0000 UTC m=+0.190886588 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 05 09:43:03 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:04 np0005546420.localdomain sudo[247416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:43:04 np0005546420.localdomain sudo[247416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:43:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:43:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:43:04 np0005546420.localdomain sudo[247416]: pam_unix(sudo:session): session closed for user root
Dec 05 09:43:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:43:04.101 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:43:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:43:04.102 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:43:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:43:04.103 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
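The three lockutils lines above are oslo.concurrency's standard trace around ovn_metadata_agent's ProcessMonitor: "Acquiring", then "acquired" with the wait time, then "released" with the hold time. A minimal sketch of the same pattern, assuming oslo.concurrency is installed (the body is a stand-in):

```python
from oslo_concurrency import lockutils

# Sketch of the pattern traced above: oslo.concurrency logs "Acquiring",
# "acquired" (with the wait time) and "released" (with the hold time)
# around the decorated callable.
@lockutils.synchronized("_check_child_processes")
def _check_child_processes():
    pass  # stand-in: verify each monitored child process is still alive
```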
Dec 05 09:43:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26473 DF PROTO=TCP SPT=60544 DPT=9105 SEQ=1606988335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA0BD90000000001030307) 
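These DROPPING entries are netfilter LOG output (log prefix "DROPPING: ") for SYNs arriving on br-ex. The DPT values 9100, 9105 and 9882 match the exporter ports published by the containers later in this log (node_exporter, openstack_network_exporter, podman_exporter), so they look like monitoring scrapes blocked by the host firewall; 9101/9102 presumably belong to similar exporters not shown here. A sketch that splits such a line into its KEY=value fields (parse_drop is a hypothetical helper):

```python
def parse_drop(line):
    # Split a netfilter LOG line into KEY=value fields; bare tokens such
    # as DF or SYN are TCP/IP flags.
    fields, flags = {}, []
    for tok in line.split("DROPPING:", 1)[1].split():
        if "=" in tok:
            key, _, value = tok.partition("=")
            fields[key] = value
        else:
            flags.append(tok)
    return fields, flags

fields, flags = parse_drop(
    "kernel: DROPPING: IN=br-ex OUT= SRC=192.168.122.10 "
    "DST=192.168.122.107 PROTO=TCP SPT=60544 DPT=9105 SYN")
print(fields["SRC"], fields["DPT"], flags)  # 192.168.122.10 9105 ['SYN']
```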
Dec 05 09:43:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:04 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:04 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:04 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:43:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:05 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:05 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:43:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:06.368 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:43:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:43:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:07.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:43:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:07.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:43:07 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:07.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
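nova-compute's "Running periodic task" lines come from oslo.service, which collects decorated methods on a PeriodicTasks subclass and invokes them on their configured spacing. A minimal sketch of that machinery, assuming oslo.service and oslo.config are installed; only the reclaim task from the log is mirrored:

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

class ComputeSketch(periodic_task.PeriodicTasks):
    # Method names mirror the ComputeManager tasks in the log above.

    @periodic_task.periodic_task
    def _reclaim_queued_deletes(self, context):
        if CONF.reclaim_instance_interval <= 0:
            return  # matches the "skipping..." line logged above

ComputeSketch(CONF).run_periodic_tasks(None)
```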
Dec 05 09:43:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17555 DF PROTO=TCP SPT=47724 DPT=9101 SEQ=1025798553 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA17DA0000000001030307) 
Dec 05 09:43:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:08 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:08 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:08 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:43:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-eb541339826395780260e54eaea5ebe9da0c74cf9b96dae2643192eb4d511174-merged.mount: Deactivated successfully.
Dec 05 09:43:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:09.042 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:43:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:43:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 05 09:43:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:43:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 05 09:43:09 np0005546420.localdomain podman[247434]: 2025-12-05 09:43:09.785413338 +0000 UTC m=+0.082491081 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 09:43:09 np0005546420.localdomain podman[247434]: 2025-12-05 09:43:09.824600375 +0000 UTC m=+0.121678118 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:10.036 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:43:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:10.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:43:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23064 DF PROTO=TCP SPT=55206 DPT=9100 SEQ=4083965468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA25990000000001030307) 
Dec 05 09:43:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:43:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5f9b52405571b7dbea88b728550de84377ddb5cebafdc587dadde8e1530aa413-merged.mount: Deactivated successfully.
Dec 05 09:43:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:11.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:43:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:11.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:43:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:11.042 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:43:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:11.056 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:43:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:11.057 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:43:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:11.057 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:43:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:11 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:43:11 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.058 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.058 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.059 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.059 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.060 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:43:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:43:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 05 09:43:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:43:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:12 np0005546420.localdomain podman[247454]: 2025-12-05 09:43:12.2560838 +0000 UTC m=+0.112927408 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:43:12 np0005546420.localdomain podman[247454]: 2025-12-05 09:43:12.285765534 +0000 UTC m=+0.142609122 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.529 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
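For the resource audit, nova shells out to exactly the command logged above to size the shared Ceph pool. A sketch of the same probe using subprocess instead of oslo.concurrency's processutils; the 'stats'/'total_avail_bytes' layout of ceph df --format=json is an assumption that can vary by Ceph release:

```python
import json
import subprocess

def ceph_free_gb(user="openstack", conf="/etc/ceph/ceph.conf"):
    # Same probe as the logged command, parsed for free capacity.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
    stats = json.loads(out)["stats"]  # key layout is an assumption
    return stats["total_avail_bytes"] / 1024**3
```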
Dec 05 09:43:12 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.688 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.689 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=13109MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.689 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.689 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.741 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.741 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:43:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23065 DF PROTO=TCP SPT=55206 DPT=9100 SEQ=4083965468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA2D990000000001030307) 
Dec 05 09:43:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:12.757 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:43:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:13.194 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:43:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:13.201 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:43:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:13.216 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
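Placement derives effective capacity per resource class as (total - reserved) * allocation_ratio, so the unchanged inventory above advertises 128 VCPUs, 15226 MB of RAM and 41 GB of disk for scheduling:

```python
inventory = {  # copied from the report.py line above
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 41, "reserved": 0, "allocation_ratio": 1.0},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0
```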
Dec 05 09:43:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:13.219 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:43:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:43:13.219 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:43:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 05 09:43:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:43:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 05 09:43:15 np0005546420.localdomain podman[247516]: 2025-12-05 09:43:15.527516158 +0000 UTC m=+0.101325381 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:43:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-67ea8042eed03f5757776778f40d7e49103f0ad20171558f729fe6c81cd471bb-merged.mount: Deactivated successfully.
Dec 05 09:43:15 np0005546420.localdomain podman[247516]: 2025-12-05 09:43:15.602908579 +0000 UTC m=+0.176717782 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:43:16 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:43:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 05 09:43:16 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23066 DF PROTO=TCP SPT=55206 DPT=9100 SEQ=4083965468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA3D590000000001030307) 
Dec 05 09:43:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 05 09:43:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:43:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 05 09:43:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:43:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:43:18 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9656 DF PROTO=TCP SPT=59774 DPT=9105 SEQ=1453215621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA45700000000001030307) 
Dec 05 09:43:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 05 09:43:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:43:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e9c00df1f2da9b4f8cc0d82d682fbe65babf7715eec1d298da553452a4b2d783-merged.mount: Deactivated successfully.
Dec 05 09:43:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:43:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 05 09:43:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 05 09:43:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f-merged.mount: Deactivated successfully.
Dec 05 09:43:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 05 09:43:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16-merged.mount: Deactivated successfully.
Dec 05 09:43:21 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9658 DF PROTO=TCP SPT=59774 DPT=9105 SEQ=1453215621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA51590000000001030307) 
Dec 05 09:43:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 05 09:43:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 05 09:43:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:43:23 np0005546420.localdomain podman[247539]: 2025-12-05 09:43:23.518173392 +0000 UTC m=+0.094225692 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 09:43:23 np0005546420.localdomain podman[247539]: 2025-12-05 09:43:23.563675523 +0000 UTC m=+0.139727793 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 09:43:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:43:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:43:24 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:43:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 05 09:43:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 05 09:43:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23067 DF PROTO=TCP SPT=55206 DPT=9100 SEQ=4083965468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA5DD90000000001030307) 
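The four DROPPING entries with SPT=55206/DPT=9100 (09:43:10, 09:43:12, 09:43:16 and this one) share SEQ=4083965468: a single blocked connection whose SYN the client keeps retransmitting. The inter-arrival gaps roughly double, which is the kernel's exponential SYN backoff:

```python
from datetime import datetime

# Timestamps of the four identical-SEQ SYNs, taken from the log above.
stamps = ["09:43:10", "09:43:12", "09:43:16", "09:43:25"]
times = [datetime.strptime(s, "%H:%M:%S") for s in stamps]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # [2.0, 4.0, 9.0]: roughly doubling per retry
```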
Dec 05 09:43:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Dec 05 09:43:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully.
Dec 05 09:43:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:43:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:43:26 np0005546420.localdomain podman[247558]: 2025-12-05 09:43:26.110215162 +0000 UTC m=+0.084713989 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, config_id=edpm, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec 05 09:43:26 np0005546420.localdomain podman[247558]: 2025-12-05 09:43:26.126283487 +0000 UTC m=+0.100782364 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6)
Dec 05 09:43:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 05 09:43:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 05 09:43:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 05 09:43:27 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:43:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:43:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:28 np0005546420.localdomain podman[247577]: 2025-12-05 09:43:28.486060974 +0000 UTC m=+0.073264096 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:43:28 np0005546420.localdomain podman[247577]: 2025-12-05 09:43:28.518430972 +0000 UTC m=+0.105633994 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:43:28 np0005546420.localdomain systemd[1]: tmp-crun.rOErFv.mount: Deactivated successfully.
Dec 05 09:43:28 np0005546420.localdomain podman[247577]: unhealthy
Dec 05 09:43:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 05 09:43:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 05 09:43:29 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:43:29 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
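This block shows a failing healthcheck end to end: the transient unit runs /usr/bin/podman healthcheck run for podman_exporter, the check exits non-zero, podman records health_status=unhealthy and prints "unhealthy", and systemd marks the unit failed with status=1. A sketch that reads the recorded status back via podman inspect (whether health data sits under State.Health or the older State.Healthcheck depends on the podman release, hence the fallback):

```python
import json
import subprocess

def health_status(name="podman_exporter"):
    # podman inspect returns a one-element JSON array for a container.
    data = json.loads(subprocess.check_output(["podman", "inspect", name]))[0]
    state = data.get("State", {})
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status")  # "healthy" or "unhealthy"

print(health_status())
```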
Dec 05 09:43:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 05 09:43:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14244 DF PROTO=TCP SPT=32838 DPT=9102 SEQ=912901281 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA73190000000001030307) 
Dec 05 09:43:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:43:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 05 09:43:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:43:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:43:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:43:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 05 09:43:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-aff96401da9550f267dc9e7f47ea63cc3ba29a151559ef0a7447672bdf20407f-merged.mount: Deactivated successfully.
Dec 05 09:43:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:43:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:43:32 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60118 DF PROTO=TCP SPT=43060 DPT=9882 SEQ=2834464438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA7BD90000000001030307) 
Dec 05 09:43:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-aff96401da9550f267dc9e7f47ea63cc3ba29a151559ef0a7447672bdf20407f-merged.mount: Deactivated successfully.
Dec 05 09:43:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:43:34 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9660 DF PROTO=TCP SPT=59774 DPT=9105 SEQ=1453215621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA81D90000000001030307) 
Dec 05 09:43:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:43:34 np0005546420.localdomain podman[247600]: 2025-12-05 09:43:34.485175568 +0000 UTC m=+0.058268564 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:43:34 np0005546420.localdomain podman[247600]: 2025-12-05 09:43:34.558436625 +0000 UTC m=+0.131529651 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:43:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 05 09:43:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:43:35 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:43:35 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:43:35 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:43:35 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:36 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2097 DF PROTO=TCP SPT=60352 DPT=9101 SEQ=3901961696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA8BDA0000000001030307) 
Dec 05 09:43:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:43:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:43:37 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:43:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:38 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:38 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:38 np0005546420.localdomain podman[240363]: time="2025-12-05T09:43:38Z" level=error msg="Getting root fs size for \"ad51d61555ed63df015c6bea7037c214d842021980eac1fa0c93aa80106530ed\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy"
Dec 05 09:43:38 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:38 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:39 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:43:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:43:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 05 09:43:40 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29681 DF PROTO=TCP SPT=45226 DPT=9100 SEQ=2194639838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACA9AD90000000001030307) 
Dec 05 09:43:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-eb541339826395780260e54eaea5ebe9da0c74cf9b96dae2643192eb4d511174-merged.mount: Deactivated successfully.
Dec 05 09:43:40 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:40 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:40 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:41 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:43:41 np0005546420.localdomain podman[247625]: 2025-12-05 09:43:41.429023631 +0000 UTC m=+0.087755572 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 09:43:41 np0005546420.localdomain podman[247625]: 2025-12-05 09:43:41.441788375 +0000 UTC m=+0.100520306 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 09:43:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:43:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 05 09:43:42 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:43:42 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:42 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:42 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29682 DF PROTO=TCP SPT=45226 DPT=9100 SEQ=2194639838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACAA2D90000000001030307) 
Dec 05 09:43:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:43:43 np0005546420.localdomain podman[247644]: 2025-12-05 09:43:43.014194679 +0000 UTC m=+0.092785758 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 09:43:43 np0005546420.localdomain podman[247644]: 2025-12-05 09:43:43.0190818 +0000 UTC m=+0.097672889 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec 05 09:43:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:43:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 05 09:43:44 np0005546420.localdomain sshd[247663]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:43:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:43:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-28babb087ffe0501289e9c462f881c66803c6126daf8f53cd6e97c97c184b295-merged.mount: Deactivated successfully.
Dec 05 09:43:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-28babb087ffe0501289e9c462f881c66803c6126daf8f53cd6e97c97c184b295-merged.mount: Deactivated successfully.
Dec 05 09:43:45 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:43:46 np0005546420.localdomain sshd[247663]: Connection reset by authenticating user root 45.135.232.92 port 44744 [preauth]
Dec 05 09:43:46 np0005546420.localdomain sshd[247665]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:43:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:43:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:43:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 05 09:43:46 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 05 09:43:46 np0005546420.localdomain podman[247666]: 2025-12-05 09:43:46.472306015 +0000 UTC m=+0.092860510 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:43:46 np0005546420.localdomain podman[247666]: 2025-12-05 09:43:46.484746588 +0000 UTC m=+0.105301103 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:43:46 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29683 DF PROTO=TCP SPT=45226 DPT=9100 SEQ=2194639838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACAB29A0000000001030307) 
Dec 05 09:43:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:43:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:43:47 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:43:47 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:43:47 np0005546420.localdomain sshd[247665]: Invalid user cisco from 45.135.232.92 port 44772
Dec 05 09:43:48 np0005546420.localdomain podman[240363]: time="2025-12-05T09:43:48Z" level=error msg="Getting root fs size for \"c56a91a521ca13953e2d4d9c7da780f3481ff312136203d6f38c8a5305c83fa0\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6: replacing mount point \"/var/lib/containers/storage/overlay/c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6/merged\": device or resource busy"
Dec 05 09:43:48 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:48 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:48 np0005546420.localdomain sshd[247665]: Connection reset by invalid user cisco 45.135.232.92 port 44772 [preauth]
Dec 05 09:43:48 np0005546420.localdomain sshd[247690]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:43:48 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:48 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48197 DF PROTO=TCP SPT=50480 DPT=9105 SEQ=1617167798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACABAA00000000001030307) 
Dec 05 09:43:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 05 09:43:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f3c115c686a4e871b821c782f5b4cfb35a8dcd215e49958df3fb5148fa2c1e76-merged.mount: Deactivated successfully.
Dec 05 09:43:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 05 09:43:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-67ea8042eed03f5757776778f40d7e49103f0ad20171558f729fe6c81cd471bb-merged.mount: Deactivated successfully.
Dec 05 09:43:50 np0005546420.localdomain sshd[247690]: Invalid user admin from 45.135.232.92 port 44786
Dec 05 09:43:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:43:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 05 09:43:50 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 05 09:43:50 np0005546420.localdomain sshd[247690]: Connection reset by invalid user admin 45.135.232.92 port 44786 [preauth]
Dec 05 09:43:50 np0005546420.localdomain sshd[247692]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:43:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:43:51 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48199 DF PROTO=TCP SPT=50480 DPT=9105 SEQ=1617167798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACAC6990000000001030307) 
Dec 05 09:43:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 05 09:43:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e9c00df1f2da9b4f8cc0d82d682fbe65babf7715eec1d298da553452a4b2d783-merged.mount: Deactivated successfully.
Dec 05 09:43:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5a287e5baf3cfe95e635859734034da81401b58443d725291782300b9af04e40-merged.mount: Deactivated successfully.
Dec 05 09:43:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e9c00df1f2da9b4f8cc0d82d682fbe65babf7715eec1d298da553452a4b2d783-merged.mount: Deactivated successfully.
Dec 05 09:43:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 05 09:43:52 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:52 np0005546420.localdomain sshd[247692]: Connection reset by authenticating user root 45.135.232.92 port 44808 [preauth]
Dec 05 09:43:52 np0005546420.localdomain sshd[247694]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:43:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully.
Dec 05 09:43:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully.
Dec 05 09:43:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:53 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:53 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:53 np0005546420.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 05 09:43:53 np0005546420.localdomain podman[240363]: time="2025-12-05T09:43:53Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\""
Dec 05 09:43:53 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:38:34 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1"
Dec 05 09:43:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:43:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-6e062bde15c561a186b7e30080880293f1be1996e7656eda685d28e2ac8dfedb-merged.mount: Deactivated successfully.
Dec 05 09:43:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 05 09:43:54 np0005546420.localdomain sshd[247694]: Invalid user abc from 45.135.232.92 port 44812
Dec 05 09:43:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:43:54 np0005546420.localdomain podman[247696]: 2025-12-05 09:43:54.348415422 +0000 UTC m=+0.091914202 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 09:43:54 np0005546420.localdomain podman[247696]: 2025-12-05 09:43:54.355938074 +0000 UTC m=+0.099436873 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 09:43:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:43:54 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:43:54 np0005546420.localdomain sshd[247694]: Connection reset by invalid user abc 45.135.232.92 port 44812 [preauth]
Dec 05 09:43:55 np0005546420.localdomain sudo[247805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mefhbgqgxloglrrgddlckeuomrygjhxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927834.9062703-3036-133536283543589/AnsiballZ_file.py
Dec 05 09:43:55 np0005546420.localdomain sudo[247805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:43:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7fac0cbfe5674aeeef5f32f29c54934661fa536efaa149f24d134e460cee6a16-merged.mount: Deactivated successfully.
Dec 05 09:43:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29684 DF PROTO=TCP SPT=45226 DPT=9100 SEQ=2194639838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACAD3D90000000001030307) 
Dec 05 09:43:55 np0005546420.localdomain python3.9[247807]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:43:55 np0005546420.localdomain sudo[247805]: pam_unix(sudo:session): session closed for user root
Dec 05 09:43:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 05 09:43:55 np0005546420.localdomain sudo[247915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fileatzzdmskkrvydjzuczilterdrtak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927835.7188592-3063-173868183719691/AnsiballZ_stat.py
Dec 05 09:43:55 np0005546420.localdomain sudo[247915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:43:56 np0005546420.localdomain python3.9[247917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:43:56 np0005546420.localdomain sudo[247915]: pam_unix(sudo:session): session closed for user root
Dec 05 09:43:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 05 09:43:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:43:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 05 09:43:56 np0005546420.localdomain sudo[248003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbtxpeveezwvcwgqjlhmfkuddgtnqngw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927835.7188592-3063-173868183719691/AnsiballZ_copy.py
Dec 05 09:43:56 np0005546420.localdomain sudo[248003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:43:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 05 09:43:56 np0005546420.localdomain python3.9[248005]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927835.7188592-3063-173868183719691/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:43:56 np0005546420.localdomain sudo[248003]: pam_unix(sudo:session): session closed for user root
Dec 05 09:43:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:43:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 05 09:43:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:43:57 np0005546420.localdomain podman[248023]: 2025-12-05 09:43:57.964613496 +0000 UTC m=+0.094028336 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41)
Dec 05 09:43:57 np0005546420.localdomain podman[248023]: 2025-12-05 09:43:57.9780641 +0000 UTC m=+0.107478940 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 09:43:58 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:43:58 np0005546420.localdomain sudo[248132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nrqyerihjzbabnqmwhqvcanidchoputm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927837.9012017-3111-179182027350499/AnsiballZ_file.py
Dec 05 09:43:58 np0005546420.localdomain sudo[248132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:43:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Dec 05 09:43:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully.
Dec 05 09:43:58 np0005546420.localdomain python3.9[248134]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:43:58 np0005546420.localdomain sudo[248132]: pam_unix(sudo:session): session closed for user root
Dec 05 09:43:58 np0005546420.localdomain sudo[248242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gplloiqcqbfbpmjegnsfunzldxpvsjpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927838.5980015-3135-59132050729144/AnsiballZ_stat.py
Dec 05 09:43:58 np0005546420.localdomain sudo[248242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:43:59 np0005546420.localdomain python3.9[248244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:43:59 np0005546420.localdomain sudo[248242]: pam_unix(sudo:session): session closed for user root
Dec 05 09:43:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:43:59 np0005546420.localdomain sudo[248305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bukbytqgjfoaojptpwiyqezzlwawxdcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927838.5980015-3135-59132050729144/AnsiballZ_file.py
Dec 05 09:43:59 np0005546420.localdomain sudo[248305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:43:59 np0005546420.localdomain systemd[1]: tmp-crun.8vZPYK.mount: Deactivated successfully.
Dec 05 09:43:59 np0005546420.localdomain podman[248287]: 2025-12-05 09:43:59.868741045 +0000 UTC m=+0.117387666 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:43:59 np0005546420.localdomain podman[248287]: 2025-12-05 09:43:59.877233506 +0000 UTC m=+0.125880137 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:43:59 np0005546420.localdomain podman[248287]: unhealthy
Dec 05 09:43:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 05 09:44:00 np0005546420.localdomain python3.9[248310]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:00 np0005546420.localdomain sudo[248305]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 05 09:44:00 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 09:44:00 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Failed with result 'exit-code'.
Dec 05 09:44:00 np0005546420.localdomain sudo[248429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ogfpmdfjxtdlzizclahnfgwnjyxhskaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927840.2020686-3171-216029776618692/AnsiballZ_stat.py
Dec 05 09:44:00 np0005546420.localdomain sudo[248429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6504 DF PROTO=TCP SPT=45966 DPT=9102 SEQ=1891154944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACAE8590000000001030307) 
Dec 05 09:44:00 np0005546420.localdomain python3.9[248431]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:00 np0005546420.localdomain sudo[248429]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:00 np0005546420.localdomain sudo[248486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iychyiqqyvrugvnmnasppnghmorwbkob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927840.2020686-3171-216029776618692/AnsiballZ_file.py
Dec 05 09:44:00 np0005546420.localdomain sudo[248486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:00 np0005546420.localdomain python3.9[248488]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.um41r7t0 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:00 np0005546420.localdomain sudo[248486]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:01 np0005546420.localdomain sudo[248596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vlnewkuyxxbxjbekubfxtnjpaqfftoar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927841.292314-3208-172826909209376/AnsiballZ_stat.py
Dec 05 09:44:01 np0005546420.localdomain sudo[248596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:01 np0005546420.localdomain python3.9[248598]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:01 np0005546420.localdomain sudo[248596]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:02 np0005546420.localdomain sudo[248653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnoedsmcvdopsytidosxeavlatqxbqtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927841.292314-3208-172826909209376/AnsiballZ_file.py
Dec 05 09:44:02 np0005546420.localdomain sudo[248653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 05 09:44:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 05 09:44:02 np0005546420.localdomain python3.9[248655]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:02 np0005546420.localdomain sudo[248653]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 05 09:44:02 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5931 DF PROTO=TCP SPT=35160 DPT=9882 SEQ=161205765 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACAEFDA0000000001030307) 
Dec 05 09:44:02 np0005546420.localdomain sudo[248763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tuwhhmurekpbhyvfoysskabchijpyzfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927842.5559337-3246-183137190978535/AnsiballZ_command.py
Dec 05 09:44:02 np0005546420.localdomain sudo[248763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:03 np0005546420.localdomain python3.9[248765]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:44:03 np0005546420.localdomain sudo[248763]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:03 np0005546420.localdomain sudo[248874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pjctvonfvunmrqgqzqwstxisicbecjta ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927843.2726772-3270-230619237126990/AnsiballZ_edpm_nftables_from_files.py
Dec 05 09:44:03 np0005546420.localdomain sudo[248874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:44:03 np0005546420.localdomain python3[248876]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 09:44:03 np0005546420.localdomain sudo[248874]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:03 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 05 09:44:04 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48201 DF PROTO=TCP SPT=50480 DPT=9105 SEQ=1617167798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACAF5D90000000001030307) 
Dec 05 09:44:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:44:04.103 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:44:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:44:04.103 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:44:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:44:04.104 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:44:04 np0005546420.localdomain sudo[248932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:44:04 np0005546420.localdomain sudo[248932]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:44:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 05 09:44:04 np0005546420.localdomain sudo[248932]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:04 np0005546420.localdomain sudo[248966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:44:04 np0005546420.localdomain sudo[248966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:44:04 np0005546420.localdomain sudo[249020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sssxrgexkxvgbjljbvgrgnxwcshwoqkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927844.101297-3294-58407862617890/AnsiballZ_stat.py
Dec 05 09:44:04 np0005546420.localdomain sudo[249020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:04 np0005546420.localdomain python3.9[249022]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:04 np0005546420.localdomain sudo[249020]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:04 np0005546420.localdomain sudo[249089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsnohobwmnuhzrxhlykctfnnfycnhhif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927844.101297-3294-58407862617890/AnsiballZ_file.py
Dec 05 09:44:04 np0005546420.localdomain sudo[249089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:44:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 05 09:44:05 np0005546420.localdomain python3.9[249091]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:05 np0005546420.localdomain sudo[249089]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:44:05 np0005546420.localdomain sudo[248966]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:05 np0005546420.localdomain podman[249137]: 2025-12-05 09:44:05.521190194 +0000 UTC m=+0.099196774 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:44:05 np0005546420.localdomain podman[249137]: 2025-12-05 09:44:05.570365469 +0000 UTC m=+0.148372049 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 09:44:05 np0005546420.localdomain sudo[249244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxfwuktwvgfvrcrnzmzxajqywvuhfbdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927845.400457-3330-79733240090858/AnsiballZ_stat.py
Dec 05 09:44:05 np0005546420.localdomain sudo[249244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:05 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:44:05 np0005546420.localdomain python3.9[249246]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:05 np0005546420.localdomain sudo[249244]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:05 np0005546420.localdomain auditd[708]: Audit daemon rotating log files
Dec 05 09:44:06 np0005546420.localdomain sudo[249301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sptxdfevxfwwsuzgdyozskhwqhmejppl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927845.400457-3330-79733240090858/AnsiballZ_file.py
Dec 05 09:44:06 np0005546420.localdomain sudo[249301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:06 np0005546420.localdomain systemd[1]: tmp-crun.Cjb9pi.mount: Deactivated successfully.
Dec 05 09:44:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 05 09:44:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 05 09:44:06 np0005546420.localdomain python3.9[249303]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:06 np0005546420.localdomain sudo[249301]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:06 np0005546420.localdomain sudo[249383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:44:06 np0005546420.localdomain sudo[249383]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:44:06 np0005546420.localdomain sudo[249383]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:06 np0005546420.localdomain sudo[249429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhgfuxphusdlmhjfqqnknzqaljslowja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927846.6498017-3366-194788866426533/AnsiballZ_stat.py
Dec 05 09:44:06 np0005546420.localdomain sudo[249429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:07 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=378 DF PROTO=TCP SPT=53816 DPT=9101 SEQ=4253095569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACB01D90000000001030307) 
Dec 05 09:44:07 np0005546420.localdomain python3.9[249431]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:07 np0005546420.localdomain sudo[249429]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:07 np0005546420.localdomain sudo[249486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ipljakoukehzffdovexyaettzytmbpum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927846.6498017-3366-194788866426533/AnsiballZ_file.py
Dec 05 09:44:07 np0005546420.localdomain sudo[249486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:07 np0005546420.localdomain python3.9[249488]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:07 np0005546420.localdomain sudo[249486]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 05 09:44:08 np0005546420.localdomain sudo[249596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bqjmidwzrvacbvtgjfjefczshbqvkxfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927847.7903485-3402-158105722685447/AnsiballZ_stat.py
Dec 05 09:44:08 np0005546420.localdomain sudo[249596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:08.221 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:44:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:08.221 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:44:08 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:08.221 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:44:08 np0005546420.localdomain python3.9[249598]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:08 np0005546420.localdomain sudo[249596]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:09.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:44:09 np0005546420.localdomain sudo[249653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzcgqsdaynfnztygpftketqjfaptrfxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927847.7903485-3402-158105722685447/AnsiballZ_file.py
Dec 05 09:44:09 np0005546420.localdomain sudo[249653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:09 np0005546420.localdomain python3.9[249655]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:09 np0005546420.localdomain sudo[249653]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:09 np0005546420.localdomain sudo[249763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zttcwtynmukfucahaaaattnockavpqtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927849.4740183-3438-198100159759929/AnsiballZ_stat.py
Dec 05 09:44:09 np0005546420.localdomain sudo[249763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:10 np0005546420.localdomain python3.9[249765]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:10.036 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:44:10 np0005546420.localdomain sudo[249763]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:44:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:44:10 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40836 DF PROTO=TCP SPT=58532 DPT=9100 SEQ=2939052007 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACB101A0000000001030307) 
Dec 05 09:44:11 np0005546420.localdomain sudo[249853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kzklshvgygvlnazcfsxgemoeripwuonv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927849.4740183-3438-198100159759929/AnsiballZ_copy.py
Dec 05 09:44:11 np0005546420.localdomain sudo[249853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:11.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:44:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:11.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:44:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:11.042 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:44:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:11.056 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:44:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:11.056 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:44:11 np0005546420.localdomain python3.9[249855]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927849.4740183-3438-198100159759929/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:11 np0005546420.localdomain sudo[249853]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:44:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:44:12 np0005546420.localdomain sudo[249963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brarbcugqavvndevgegoivpgetkwhpgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927852.0764818-3483-224316390676768/AnsiballZ_file.py
Dec 05 09:44:12 np0005546420.localdomain sudo[249963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:12 np0005546420.localdomain python3.9[249965]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:12 np0005546420.localdomain sudo[249963]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 05 09:44:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:44:12 np0005546420.localdomain podman[249982]: 2025-12-05 09:44:12.686703343 +0000 UTC m=+0.061896537 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:44:12 np0005546420.localdomain podman[249982]: 2025-12-05 09:44:12.72139504 +0000 UTC m=+0.096588274 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 05 09:44:12 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40837 DF PROTO=TCP SPT=58532 DPT=9100 SEQ=2939052007 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACB181A0000000001030307) 
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:44:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:44:12 np0005546420.localdomain sudo[250092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pjkckgjfikobcndgraoypzfgxpvopsrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927852.7305944-3507-251576612572165/AnsiballZ_command.py
Dec 05 09:44:12 np0005546420.localdomain sudo[250092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.062 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.062 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.063 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.063 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.064 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:44:13 np0005546420.localdomain python3.9[250094]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:44:13 np0005546420.localdomain sudo[250092]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.500 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:44:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:44:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.654 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.655 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=13039MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.656 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.656 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:44:13 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 05 09:44:13 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.737 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.738 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:44:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:13.774 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:44:13 np0005546420.localdomain sudo[250227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tnqyiomwuprxslaavhcfcnezqkrjpvcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927853.4290564-3531-281253241650079/AnsiballZ_blockinfile.py
Dec 05 09:44:13 np0005546420.localdomain sudo[250227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:14 np0005546420.localdomain python3.9[250230]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:14 np0005546420.localdomain sudo[250227]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:14.255 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:44:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:14.262 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:44:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:14.280 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:44:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:14.283 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:44:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:44:14.283 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
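The inventory reported to placement above implies the schedulable capacity directly: placement exposes (total - reserved) * allocation_ratio units per resource class. Worked out from the logged values (a sketch reusing those numbers verbatim):

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 41, "reserved": 0, "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "->", cap)  # VCPU -> 128.0, MEMORY_MB -> 15226.0, DISK_GB -> 41.0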
Dec 05 09:44:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:44:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:44:14 np0005546420.localdomain sudo[250359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ggfaljeitqpmpstgoudutcximfevcgrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927854.3234456-3558-211248293889639/AnsiballZ_command.py
Dec 05 09:44:14 np0005546420.localdomain sudo[250359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:14 np0005546420.localdomain python3.9[250361]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:44:14 np0005546420.localdomain sudo[250359]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 np0005546420.localdomain sudo[250470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ruylsnjcmedkdpscjtypfiqfeseecbzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927854.9743245-3582-9629646234980/AnsiballZ_stat.py
Dec 05 09:44:15 np0005546420.localdomain sudo[250470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:15 np0005546420.localdomain python3.9[250472]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:44:15 np0005546420.localdomain sudo[250470]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:15 np0005546420.localdomain sudo[250582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-beasjvqtyjfztcdqfopbkhgwmiblcxky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927855.6450133-3606-101119634125185/AnsiballZ_command.py
Dec 05 09:44:15 np0005546420.localdomain sudo[250582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:44:15 np0005546420.localdomain podman[250585]: 2025-12-05 09:44:15.995648326 +0000 UTC m=+0.077715293 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 05 09:44:16 np0005546420.localdomain podman[250585]: 2025-12-05 09:44:16.032559752 +0000 UTC m=+0.114626719 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 09:44:16 np0005546420.localdomain python3.9[250584]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
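The command above concatenates the flush, rule, and jump files into a single nft -f - read so the whole ruleset change lands in one nft transaction (on a parse failure nothing is applied and the old rules stay in place). The same pipeline from Python, with the paths taken from the log entry:

    import subprocess

    files = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]
    payload = "".join(open(f).read() for f in files)
    subprocess.run(["nft", "-f", "-"], input=payload, text=True, check=True)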
Dec 05 09:44:16 np0005546420.localdomain sudo[250582]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:16 np0005546420.localdomain sudo[250711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xedqnbsatefuafdvartsyllyxoisyhde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927856.389013-3631-63660711203538/AnsiballZ_file.py
Dec 05 09:44:16 np0005546420.localdomain sudo[250711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:16 np0005546420.localdomain python3.9[250713]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:16 np0005546420.localdomain sudo[250711]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 05 09:44:17 np0005546420.localdomain sshd[230455]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:44:17 np0005546420.localdomain systemd[1]: session-55.scope: Deactivated successfully.
Dec 05 09:44:17 np0005546420.localdomain systemd[1]: session-55.scope: Consumed 1min 30.997s CPU time.
Dec 05 09:44:17 np0005546420.localdomain systemd-logind[762]: Session 55 logged out. Waiting for processes to exit.
Dec 05 09:44:17 np0005546420.localdomain systemd-logind[762]: Removed session 55.
Dec 05 09:44:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-28babb087ffe0501289e9c462f881c66803c6126daf8f53cd6e97c97c184b295-merged.mount: Deactivated successfully.
Dec 05 09:44:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:44:17 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:44:17 np0005546420.localdomain podman[250731]: 2025-12-05 09:44:17.590334657 +0000 UTC m=+0.212238566 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:44:17 np0005546420.localdomain podman[250731]: 2025-12-05 09:44:17.602424739 +0000 UTC m=+0.224328628 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:44:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:44:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 05 09:44:18 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:44:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:44:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:44:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:44:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:44:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:44:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:44:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:44:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
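The exporter errors above all trace back to missing *.ctl control sockets: ovs-appctl-style calls need the socket files that ovsdb-server, ovs-vswitchd, and ovn-northd create in their run directories, and a compute node does not run ovn-northd at all. A quick existence check (run directories assumed to be the conventional ones; adjust if OVS_RUNDIR/OVN_RUNDIR differ):

    import glob

    for rundir in ("/run/openvswitch", "/run/ovn"):
        ctls = glob.glob(f"{rundir}/*.ctl")
        print(rundir, "->", ctls if ctls else "no control sockets found")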
Dec 05 09:44:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:44:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 05 09:44:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:44:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 05 09:44:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f3c115c686a4e871b821c782f5b4cfb35a8dcd215e49958df3fb5148fa2c1e76-merged.mount: Deactivated successfully.
Dec 05 09:44:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:44:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:44:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:44:23 np0005546420.localdomain sshd[250758]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:44:23 np0005546420.localdomain sshd[250758]: Accepted publickey for zuul from 192.168.122.30 port 38446 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:44:23 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50924 DF PROTO=TCP SPT=56888 DPT=9102 SEQ=3543656540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACB41A80000000001030307) 
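The kernel DROPPING entries are netfilter log-rule output (log prefix "DROPPING: ") recording each refused SYN to port 9102; the key=value fields parse mechanically. A sketch over a shortened sample line (empty fields such as OUT= are simply skipped by the pattern):

    import re

    line = ("DROPPING: IN=br-ex OUT= SRC=192.168.122.10 DST=192.168.122.107 "
            "PROTO=TCP SPT=56888 DPT=9102")
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    print(f"{fields['SRC']}:{fields['SPT']} -> "
          f"{fields['DST']}:{fields['DPT']} ({fields['PROTO']})")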
Dec 05 09:44:23 np0005546420.localdomain systemd-logind[762]: New session 56 of user zuul.
Dec 05 09:44:23 np0005546420.localdomain systemd[1]: Started Session 56 of User zuul.
Dec 05 09:44:23 np0005546420.localdomain sshd[250758]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:44:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:44:23 np0005546420.localdomain sudo[250869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-boufslrsitnkqewzaijlttqttieibetl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927863.5437338-27-99693460926610/AnsiballZ_file.py
Dec 05 09:44:24 np0005546420.localdomain sudo[250869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:24 np0005546420.localdomain python3.9[250871]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:24 np0005546420.localdomain sudo[250869]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 05 09:44:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5a287e5baf3cfe95e635859734034da81401b58443d725291782300b9af04e40-merged.mount: Deactivated successfully.
Dec 05 09:44:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50925 DF PROTO=TCP SPT=56888 DPT=9102 SEQ=3543656540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACB45990000000001030307) 
Dec 05 09:44:24 np0005546420.localdomain sudo[250979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-imoksdxupwqsldksronjwsdzdxrpszxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927864.3473666-27-160210766066897/AnsiballZ_file.py
Dec 05 09:44:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:44:24 np0005546420.localdomain sudo[250979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:24 np0005546420.localdomain podman[250981]: 2025-12-05 09:44:24.733590249 +0000 UTC m=+0.095378508 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 09:44:24 np0005546420.localdomain podman[250981]: 2025-12-05 09:44:24.745904459 +0000 UTC m=+0.107692738 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd)
Dec 05 09:44:24 np0005546420.localdomain python3.9[250982]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:24 np0005546420.localdomain sudo[250979]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6506 DF PROTO=TCP SPT=45966 DPT=9102 SEQ=1891154944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACB47D90000000001030307) 
Dec 05 09:44:25 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:44:25 np0005546420.localdomain sudo[251109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahqsmwfwhwmenciohdldhefmxygrjyzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927865.005222-27-122245078185106/AnsiballZ_file.py
Dec 05 09:44:25 np0005546420.localdomain sudo[251109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:25 np0005546420.localdomain python3.9[251111]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:25 np0005546420.localdomain sudo[251109]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:44:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 05 09:44:26 np0005546420.localdomain python3.9[251219]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 05 09:44:26 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50926 DF PROTO=TCP SPT=56888 DPT=9102 SEQ=3543656540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACB4D990000000001030307) 
Dec 05 09:44:26 np0005546420.localdomain python3.9[251305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927865.6207304-105-116156573371058/.source.yaml follow=False _original_basename=neutron_sriov_agent.yaml.j2 checksum=d3942d8476d006ea81540d2a1d96dd9d67f33f5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 05 09:44:27 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:38:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 140637 "" "Go-http-client/1.1"
Dec 05 09:44:27 np0005546420.localdomain podman_exporter[240570]: ts=2025-12-05T09:44:27.067Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 05 09:44:27 np0005546420.localdomain podman_exporter[240570]: ts=2025-12-05T09:44:27.067Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 05 09:44:27 np0005546420.localdomain podman_exporter[240570]: ts=2025-12-05T09:44:27.067Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882
Dec 05 09:44:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-6e062bde15c561a186b7e30080880293f1be1996e7656eda685d28e2ac8dfedb-merged.mount: Deactivated successfully.
Dec 05 09:44:27 np0005546420.localdomain python3.9[251414]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:27 np0005546420.localdomain python3.9[251500]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927867.0174034-150-218219449840018/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:44:28 np0005546420.localdomain podman[251609]: 2025-12-05 09:44:28.493085775 +0000 UTC m=+0.073312398 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal)
Dec 05 09:44:28 np0005546420.localdomain podman[251609]: 2025-12-05 09:44:28.528626409 +0000 UTC m=+0.108853022 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350)
Dec 05 09:44:28 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:44:28 np0005546420.localdomain python3.9[251608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:29 np0005546420.localdomain python3.9[251713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927868.119981-150-143975197803719/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:30 np0005546420.localdomain python3.9[251821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:44:30 np0005546420.localdomain podman[251822]: 2025-12-05 09:44:30.506463278 +0000 UTC m=+0.079824269 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:44:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50927 DF PROTO=TCP SPT=56888 DPT=9102 SEQ=3543656540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACB5D5A0000000001030307) 
Dec 05 09:44:30 np0005546420.localdomain podman[251822]: 2025-12-05 09:44:30.537183123 +0000 UTC m=+0.110544034 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:44:30 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
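The transient "podman healthcheck run <id>" units systemd keeps starting above are the timer-driven health probes; the one for podman_exporter just reported health_status=unhealthy before the exec_died follow-up. The probe can be reproduced by hand: podman healthcheck run exits 0 when the container is healthy and non-zero otherwise (container name taken from the log entry):

    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "podman_exporter"]
    ).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")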
Dec 05 09:44:30 np0005546420.localdomain python3.9[251930]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927869.1892862-150-264380074866059/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=2c50b0d43b24b79b12e9daf52898e4b17341872d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:32 np0005546420.localdomain python3.9[252038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:32 np0005546420.localdomain python3.9[252124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927872.0360096-324-278470674695973/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=75518b9ca1c9a507fba8f4d8f8342e6edd3bf5ee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:33 np0005546420.localdomain python3.9[252232]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:44:34 np0005546420.localdomain sudo[252342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nszrxmojfofjjrrdtksgplamkegdorow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927873.8202684-396-71773340130062/AnsiballZ_file.py
Dec 05 09:44:34 np0005546420.localdomain sudo[252342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:34 np0005546420.localdomain python3.9[252344]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:34 np0005546420.localdomain sudo[252342]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:34 np0005546420.localdomain sudo[252452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-skarorkeuukpnsuwnhvnhptgpgpoiwfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927874.5035825-420-266591892750103/AnsiballZ_stat.py
Dec 05 09:44:34 np0005546420.localdomain sudo[252452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:34 np0005546420.localdomain python3.9[252454]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:34 np0005546420.localdomain sudo[252452]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:35 np0005546420.localdomain sudo[252509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bocphhrjghrirvpqfwkloubczffmwbxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927874.5035825-420-266591892750103/AnsiballZ_file.py
Dec 05 09:44:35 np0005546420.localdomain sudo[252509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:35 np0005546420.localdomain python3.9[252511]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:35 np0005546420.localdomain sudo[252509]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:35 np0005546420.localdomain sudo[252619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mwkkoyfduqeulrujjrdhbbpbidaxowzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927875.5353818-420-8292456860223/AnsiballZ_stat.py
Dec 05 09:44:35 np0005546420.localdomain sudo[252619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:35 np0005546420.localdomain python3.9[252621]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:35 np0005546420.localdomain sudo[252619]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:36 np0005546420.localdomain sudo[252676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-srebyzvcxfadgkehjrxnuuocnzcdachh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927875.5353818-420-8292456860223/AnsiballZ_file.py
Dec 05 09:44:36 np0005546420.localdomain sudo[252676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:44:36 np0005546420.localdomain podman[252679]: 2025-12-05 09:44:36.309929718 +0000 UTC m=+0.079141658 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:44:36 np0005546420.localdomain podman[252679]: 2025-12-05 09:44:36.381543492 +0000 UTC m=+0.150755432 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 09:44:36 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:44:36 np0005546420.localdomain python3.9[252678]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:36 np0005546420.localdomain sudo[252676]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:37 np0005546420.localdomain sudo[252810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csurhpjfiowhkzabdrowhozlpkkbcsfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927876.783365-489-61163880202341/AnsiballZ_file.py
Dec 05 09:44:37 np0005546420.localdomain sudo[252810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:37 np0005546420.localdomain python3.9[252812]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
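mode=420 in the file invocation above is not a typo: it is octal 0644 rendered in decimal, which is what happens when a playbook passes the mode as an unquoted YAML integer. Quick check:

    assert 420 == 0o644
    print(oct(420))  # 0o644 -- quoting the mode ("0644") in playbooks avoids this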
Dec 05 09:44:37 np0005546420.localdomain sudo[252810]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:37 np0005546420.localdomain sudo[252920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ynaqybwumqglfaphquvyqqsxmfgsildb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927877.4481254-513-187931982702805/AnsiballZ_stat.py
Dec 05 09:44:37 np0005546420.localdomain sudo[252920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:37 np0005546420.localdomain python3.9[252922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:37 np0005546420.localdomain sudo[252920]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:38 np0005546420.localdomain sudo[252977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvkzwhoixenkwjtgbvxybdkcmyyoyfrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927877.4481254-513-187931982702805/AnsiballZ_file.py
Dec 05 09:44:38 np0005546420.localdomain sudo[252977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:38 np0005546420.localdomain python3.9[252979]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:38 np0005546420.localdomain sudo[252977]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:38 np0005546420.localdomain sudo[253087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znkxoehtmeadlhdsxckyscttmziohxvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927878.5313425-549-180602091704846/AnsiballZ_stat.py
Dec 05 09:44:38 np0005546420.localdomain sudo[253087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:38 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50928 DF PROTO=TCP SPT=56888 DPT=9102 SEQ=3543656540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACB7DDA0000000001030307) 
Dec 05 09:44:39 np0005546420.localdomain python3.9[253089]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:39 np0005546420.localdomain sudo[253087]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:39 np0005546420.localdomain sudo[253144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xsrylikdolbrrdhvjbrcmbaoxvbxrgsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927878.5313425-549-180602091704846/AnsiballZ_file.py
Dec 05 09:44:39 np0005546420.localdomain sudo[253144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:39 np0005546420.localdomain python3.9[253146]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:39 np0005546420.localdomain sudo[253144]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:40 np0005546420.localdomain sudo[253254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lobndlqkmxiczpnkuxzblyxdreyljssq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927879.6210353-585-276016610825572/AnsiballZ_systemd.py
Dec 05 09:44:40 np0005546420.localdomain sudo[253254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:40 np0005546420.localdomain python3.9[253256]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
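The ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started) is what triggers the "Reloading." line and the generator/unit-file warnings that follow; functionally it amounts to these systemctl steps (an illustrative sketch, since the module drives systemctl itself):

    import subprocess

    for args in (["daemon-reload"],
                 ["enable", "edpm-container-shutdown"],
                 ["start", "edpm-container-shutdown"]):
        subprocess.run(["systemctl", *args], check=True)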
Dec 05 09:44:40 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:44:40 np0005546420.localdomain systemd-rc-local-generator[253280]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:44:40 np0005546420.localdomain systemd-sysv-generator[253287]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:44:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:44:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
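The recurring "Failed to parse service type, ignoring: notify-reload" warnings are systemd rejecting a Type= value it does not recognize: Type=notify-reload was introduced in systemd v253, the libvirt modular daemon units here already use it, and an older systemd ignores the directive (the units still load) while logging this on every reload. A small sketch to print the offending directive, using the unit file and line number named in the message:

    import subprocess

    # The journal message points at the exact file:line, e.g.
    # "/usr/lib/systemd/system/virtsecretd.service:18"; printing that line
    # should show the directive systemd is ignoring (Type=notify-reload).
    unit, lineno = '/usr/lib/systemd/system/virtsecretd.service', 18
    out = subprocess.run(['sed', '-n', f'{lineno}p', unit],
                         capture_output=True, text=True).stdout
    print(out.strip())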
Dec 05 09:44:40 np0005546420.localdomain sudo[253254]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:41 np0005546420.localdomain sudo[253402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sdcfztvivasvvypjoiyuelmiknymevhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927881.630124-609-28194106220818/AnsiballZ_stat.py
Dec 05 09:44:41 np0005546420.localdomain sudo[253402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:42 np0005546420.localdomain python3.9[253404]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:42 np0005546420.localdomain sudo[253402]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:42 np0005546420.localdomain sudo[253459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxxwtayglqptsqzjpncsbksrojtekhzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927881.630124-609-28194106220818/AnsiballZ_file.py
Dec 05 09:44:42 np0005546420.localdomain sudo[253459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:42 np0005546420.localdomain python3.9[253461]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:42 np0005546420.localdomain sudo[253459]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:43 np0005546420.localdomain sudo[253569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svdrwesoeiapecxzawxwrqbhykccfhoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927883.4431996-645-233767427263553/AnsiballZ_stat.py
Dec 05 09:44:43 np0005546420.localdomain sudo[253569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:43 np0005546420.localdomain python3.9[253571]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:43 np0005546420.localdomain sudo[253569]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:44 np0005546420.localdomain sudo[253626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhktjncxnbhcwsmzrwfblrpgyigdmzyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927883.4431996-645-233767427263553/AnsiballZ_file.py
Dec 05 09:44:44 np0005546420.localdomain sudo[253626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:44:44 np0005546420.localdomain podman[253629]: 2025-12-05 09:44:44.305795746 +0000 UTC m=+0.094635203 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 05 09:44:44 np0005546420.localdomain podman[253629]: 2025-12-05 09:44:44.363900297 +0000 UTC m=+0.152739704 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 05 09:44:44 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
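The health_status/exec_died pair above, bracketed by "Started /usr/bin/podman healthcheck run ..." and "Deactivated successfully", is one podman healthcheck cycle: a transient systemd unit named after the container ID runs the check, the check exits 0 (health_status=healthy), and the unit goes away. The same check can be invoked by hand; a sketch using the ceilometer container ID from this log:

    import subprocess

    # `podman healthcheck run` executes the container's configured
    # healthcheck and exits 0 when it reports healthy (non-zero otherwise).
    cid = '94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110'
    rc = subprocess.run(['podman', 'healthcheck', 'run', cid]).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')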
Dec 05 09:44:44 np0005546420.localdomain python3.9[253628]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:44 np0005546420.localdomain sudo[253626]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:44 np0005546420.localdomain sudo[253755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yscarnslhqwdmmdpmjibnxbofmyhnznx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927884.5972939-681-121622881625159/AnsiballZ_systemd.py
Dec 05 09:44:44 np0005546420.localdomain sudo[253755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:45 np0005546420.localdomain python3.9[253757]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:44:45 np0005546420.localdomain systemd-sysv-generator[253788]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:44:45 np0005546420.localdomain systemd-rc-local-generator[253784]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:44:45 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 09:44:45 np0005546420.localdomain sudo[253755]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:46 np0005546420.localdomain sudo[253907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-psgmjycofwogtibhyfinsbnnmhjpvygp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927886.315569-711-121443178502746/AnsiballZ_file.py
Dec 05 09:44:46 np0005546420.localdomain sudo[253907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:46 np0005546420.localdomain python3.9[253909]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:44:46 np0005546420.localdomain sudo[253907]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:44:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:44:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:44:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142280 "" "Go-http-client/1.1"
Dec 05 09:44:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:44:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15862 "" "Go-http-client/1.1"
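These two GETs are the podman API service (PID 240363) logging libpod REST requests it served, presumably from the podman_exporter container seen later. A rough sketch of the same containers/json query from Python, assuming the service listens on /run/podman/podman.sock (the CONTAINER_HOST value visible in the podman_exporter config further down):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection over a unix socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    # Same endpoint as the logged request above.
    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(json.loads(resp.read())), 'containers')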
Dec 05 09:44:47 np0005546420.localdomain sudo[254018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oslgnrorwshjkportlrofsizshibrsxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927887.0347638-735-177351142057462/AnsiballZ_stat.py
Dec 05 09:44:47 np0005546420.localdomain sudo[254018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:47 np0005546420.localdomain python3.9[254020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:44:47 np0005546420.localdomain sudo[254018]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:47 np0005546420.localdomain sudo[254106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mthtvqqfzfyffhpggxdqxlqvqbukhgwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927887.0347638-735-177351142057462/AnsiballZ_copy.py
Dec 05 09:44:47 np0005546420.localdomain sudo[254106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:44:47 np0005546420.localdomain podman[254109]: 2025-12-05 09:44:47.946538697 +0000 UTC m=+0.093075015 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 09:44:47 np0005546420.localdomain podman[254109]: 2025-12-05 09:44:47.954518823 +0000 UTC m=+0.101055141 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Dec 05 09:44:47 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:44:48 np0005546420.localdomain python3.9[254108]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927887.0347638-735-177351142057462/.source.json _original_basename=.m4m8bvlp follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:48 np0005546420.localdomain sudo[254106]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:48 np0005546420.localdomain sudo[254232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-drrnwskloxnwzzhcbqnpehmakctdozvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927888.260431-780-17131884150110/AnsiballZ_file.py
Dec 05 09:44:48 np0005546420.localdomain sudo[254232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:44:48 np0005546420.localdomain systemd[1]: tmp-crun.V2Onw5.mount: Deactivated successfully.
Dec 05 09:44:48 np0005546420.localdomain podman[254234]: 2025-12-05 09:44:48.645285834 +0000 UTC m=+0.105615754 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:44:48 np0005546420.localdomain podman[254234]: 2025-12-05 09:44:48.657402429 +0000 UTC m=+0.117732409 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:44:48 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:44:48 np0005546420.localdomain python3.9[254235]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:48 np0005546420.localdomain sudo[254232]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:44:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:44:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:44:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:44:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:44:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:44:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:44:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:44:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:44:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
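The openstack_network_exporter errors above all reduce to missing OVS/OVN control sockets: no *.ctl file for ovsdb-server, none for ovn-northd (which would not run on a compute node anyway), and no userspace dpif-netdev datapath to answer the pmd-perf/pmd-rxq calls. A quick existence check, with the runtime directories assumed from the volume mounts in the nearby container configs (/run/openvswitch and /run/ovn):

    import glob

    # Each OVS/OVN daemon normally creates a <daemon>.<pid>.ctl control
    # socket in its runtime dir; the paths below are assumptions taken
    # from the container volume mounts in this log, adjust to your layout.
    for pattern in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits if hits else 'none found')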
Dec 05 09:44:49 np0005546420.localdomain sudo[254365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eckvktzjcbryqljghnappvtliwmclrwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927888.9700234-804-7586240700006/AnsiballZ_stat.py
Dec 05 09:44:49 np0005546420.localdomain sudo[254365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:49 np0005546420.localdomain sudo[254365]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:49 np0005546420.localdomain sudo[254453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yuhbfsbrnjedzlxhzdtsqquiqvbuiijo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927888.9700234-804-7586240700006/AnsiballZ_copy.py
Dec 05 09:44:49 np0005546420.localdomain sudo[254453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:49 np0005546420.localdomain sudo[254453]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:50 np0005546420.localdomain sudo[254563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lgvzlswkgcdyxujoiqwgxhvrbtcqbzle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927890.340465-855-214651336299690/AnsiballZ_container_config_data.py
Dec 05 09:44:50 np0005546420.localdomain sudo[254563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:50 np0005546420.localdomain python3.9[254565]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False
Dec 05 09:44:50 np0005546420.localdomain sudo[254563]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:52 np0005546420.localdomain sudo[254673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zaeiaczngrktnfqwhushqvjeenguzycp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927891.6181264-882-102648946185548/AnsiballZ_container_config_hash.py
Dec 05 09:44:52 np0005546420.localdomain sudo[254673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:52 np0005546420.localdomain python3.9[254675]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:44:52 np0005546420.localdomain sudo[254673]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:53 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19331 DF PROTO=TCP SPT=40784 DPT=9102 SEQ=3297855435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACBB6DA0000000001030307) 
Dec 05 09:44:53 np0005546420.localdomain sudo[254783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pyyoanxljilzsvxlmxxkltskvzptkbxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927892.511814-909-63888057028345/AnsiballZ_podman_container_info.py
Dec 05 09:44:53 np0005546420.localdomain sudo[254783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:53 np0005546420.localdomain python3.9[254785]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 09:44:54 np0005546420.localdomain sudo[254783]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19332 DF PROTO=TCP SPT=40784 DPT=9102 SEQ=3297855435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACBBAD90000000001030307) 
Dec 05 09:44:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50929 DF PROTO=TCP SPT=56888 DPT=9102 SEQ=3543656540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACBBDD90000000001030307) 
Dec 05 09:44:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:44:55 np0005546420.localdomain podman[254829]: 2025-12-05 09:44:55.497819405 +0000 UTC m=+0.075341595 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec 05 09:44:55 np0005546420.localdomain podman[254829]: 2025-12-05 09:44:55.534297825 +0000 UTC m=+0.111820005 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 09:44:55 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:44:56 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19333 DF PROTO=TCP SPT=40784 DPT=9102 SEQ=3297855435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACBC2D90000000001030307) 
Dec 05 09:44:57 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6507 DF PROTO=TCP SPT=45966 DPT=9102 SEQ=1891154944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACBC5D90000000001030307) 
Dec 05 09:44:57 np0005546420.localdomain sudo[254937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttrpzxvxbvluayeihdhowvvviwiraeap ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927897.3576787-948-237045944905928/AnsiballZ_edpm_container_manage.py
Dec 05 09:44:57 np0005546420.localdomain sudo[254937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:58 np0005546420.localdomain python3[254939]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:44:58 np0005546420.localdomain podman[254975]: 2025-12-05 09:44:58.305845648 +0000 UTC m=+0.079640698 container create 4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_sriov_agent, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c3e7b7e9b0ea3e65eff717028260216463ba087d3c3e0f525dd22b36c7577d97'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=neutron_sriov_agent)
Dec 05 09:44:58 np0005546420.localdomain podman[254975]: 2025-12-05 09:44:58.260001278 +0000 UTC m=+0.033796358 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Dec 05 09:44:58 np0005546420.localdomain python3[254939]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=c3e7b7e9b0ea3e65eff717028260216463ba087d3c3e0f525dd22b36c7577d97 --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c3e7b7e9b0ea3e65eff717028260216463ba087d3c3e0f525dd22b36c7577d97'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
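The PODMAN-CONTAINER-DEBUG line shows how ansible-edpm_container_manage expands the container's config_data JSON into a podman create invocation: environment becomes --env, net becomes --network, volumes become repeated --volume flags, and the config is echoed back onto the container as labels. A rough Python sketch of that mapping, inferred from the logged command rather than taken from the module's actual code:

    def podman_create_args(name: str, cfg: dict) -> list:
        # Illustrative mapping only; the real module handles many more keys.
        args = ['podman', 'create', '--name', name,
                '--conmon-pidfile', f'/run/{name}.pid']
        for key, val in cfg.get('environment', {}).items():
            args += ['--env', f'{key}={val}']
        if 'net' in cfg:
            args += ['--network', cfg['net']]
        if cfg.get('privileged'):
            args.append('--privileged=True')
        if 'user' in cfg:
            args += ['--user', cfg['user']]
        for vol in cfg.get('volumes', []):
            args += ['--volume', vol]
        return args + [cfg['image']]

    cfg = {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
           'image': 'quay.io/podified-antelope-centos9/'
                    'openstack-neutron-sriov-agent:current-podified',
           'net': 'host', 'privileged': True, 'user': 'neutron',
           'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev']}
    print(' '.join(podman_create_args('neutron_sriov_agent', cfg)))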
Dec 05 09:44:58 np0005546420.localdomain sudo[254937]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:58 np0005546420.localdomain sudo[255121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-unzgfwuadyfckiesvqcnjywyvuknrqya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927898.7610407-972-218625895424834/AnsiballZ_stat.py
Dec 05 09:44:59 np0005546420.localdomain sudo[255121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:44:59 np0005546420.localdomain systemd[1]: tmp-crun.9IX9h0.mount: Deactivated successfully.
Dec 05 09:44:59 np0005546420.localdomain podman[255124]: 2025-12-05 09:44:59.078195685 +0000 UTC m=+0.066150800 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9-minimal, distribution-scope=public)
Dec 05 09:44:59 np0005546420.localdomain podman[255124]: 2025-12-05 09:44:59.092277411 +0000 UTC m=+0.080232516 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc.)
Dec 05 09:44:59 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:44:59 np0005546420.localdomain python3.9[255123]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:44:59 np0005546420.localdomain sudo[255121]: pam_unix(sudo:session): session closed for user root
Dec 05 09:44:59 np0005546420.localdomain sudo[255254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ogswvxcplhesvkeawapupbjeacjmctzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927899.494113-999-185986039252438/AnsiballZ_file.py
Dec 05 09:44:59 np0005546420.localdomain sudo[255254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:44:59 np0005546420.localdomain python3.9[255256]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:44:59 np0005546420.localdomain sudo[255254]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:00 np0005546420.localdomain sudo[255309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hgbpsrcasirrftcytwfhpawsrmgsedru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927899.494113-999-185986039252438/AnsiballZ_stat.py
Dec 05 09:45:00 np0005546420.localdomain sudo[255309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:00 np0005546420.localdomain python3.9[255311]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:45:00 np0005546420.localdomain sudo[255309]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19334 DF PROTO=TCP SPT=40784 DPT=9102 SEQ=3297855435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACBD29A0000000001030307) 
Dec 05 09:45:00 np0005546420.localdomain sudo[255418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aaeqorfeuzxdtapxswexupsznhbuqxee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927900.4030666-999-119226136139982/AnsiballZ_copy.py
Dec 05 09:45:00 np0005546420.localdomain sudo[255418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:45:00 np0005546420.localdomain systemd[1]: tmp-crun.3yrT10.mount: Deactivated successfully.
Dec 05 09:45:00 np0005546420.localdomain podman[255421]: 2025-12-05 09:45:00.969050274 +0000 UTC m=+0.092811806 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:45:00 np0005546420.localdomain podman[255421]: 2025-12-05 09:45:00.98117941 +0000 UTC m=+0.104940992 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:45:00 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:45:01 np0005546420.localdomain python3.9[255420]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927900.4030666-999-119226136139982/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:45:01 np0005546420.localdomain sudo[255418]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:01 np0005546420.localdomain sudo[255494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ujiabpdswxigeufmagkbdprebqfqvfun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927900.4030666-999-119226136139982/AnsiballZ_systemd.py
Dec 05 09:45:01 np0005546420.localdomain sudo[255494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:02 np0005546420.localdomain python3.9[255496]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:45:02 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:45:02 np0005546420.localdomain systemd-rc-local-generator[255521]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:45:02 np0005546420.localdomain systemd-sysv-generator[255525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:45:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:45:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:02 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:02 np0005546420.localdomain sudo[255494]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:02 np0005546420.localdomain sudo[255584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ruzapbgtzfdvkjpssvvpnhmyzbgpcesx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927900.4030666-999-119226136139982/AnsiballZ_systemd.py
Dec 05 09:45:02 np0005546420.localdomain sudo[255584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:03 np0005546420.localdomain python3.9[255586]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:45:03 np0005546420.localdomain systemd-rc-local-generator[255610]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:45:03 np0005546420.localdomain systemd-sysv-generator[255613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: Starting neutron_sriov_agent container...
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:45:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dce727a881f7a19defc505b4fc5c3b728ae521086f5edce53a7b7a4aaf933a7/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:03 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dce727a881f7a19defc505b4fc5c3b728ae521086f5edce53a7b7a4aaf933a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:03 np0005546420.localdomain podman[255627]: 2025-12-05 09:45:03.703661692 +0000 UTC m=+0.144770525 container init 4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c3e7b7e9b0ea3e65eff717028260216463ba087d3c3e0f525dd22b36c7577d97'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 09:45:03 np0005546420.localdomain podman[255627]: 2025-12-05 09:45:03.712524418 +0000 UTC m=+0.153633241 container start 4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c3e7b7e9b0ea3e65eff717028260216463ba087d3c3e0f525dd22b36c7577d97'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.vendor=CentOS)
Dec 05 09:45:03 np0005546420.localdomain podman[255627]: neutron_sriov_agent
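The config_data blob podman logs above is the edpm_ansible container definition for this unit. As a rough illustration of how those fields map onto a runtime invocation, here is a hypothetical Python helper that renders a subset of them into a podman run command line (the helper name and the shortened volume list are illustrative; this is not the code edpm_ansible actually runs):

    import shlex

    # subset of the config_data dict from the log above
    config_data = {
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "image": "quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified",
        "net": "host",
        "privileged": True,
        "restart": "always",
        "user": "neutron",
        "volumes": ["/lib/modules:/lib/modules:ro", "/dev:/dev"],
    }

    def to_podman_run(name, cfg):
        cmd = ["podman", "run", "--detach", "--name", name,
               "--net", cfg["net"], "--user", cfg["user"],
               "--restart", cfg["restart"]]
        if cfg.get("privileged"):
            cmd.append("--privileged")
        for key, val in cfg.get("environment", {}).items():
            cmd += ["--env", f"{key}={val}"]
        for volume in cfg.get("volumes", []):
            cmd += ["--volume", volume]
        cmd.append(cfg["image"])
        return " ".join(shlex.quote(part) for part in cmd)

    print(to_podman_run("neutron_sriov_agent", config_data))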
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: + sudo -E kolla_set_configs
Dec 05 09:45:03 np0005546420.localdomain systemd[1]: Started neutron_sriov_agent container.
Dec 05 09:45:03 np0005546420.localdomain sudo[255584]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Validating config file
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Copying service configuration files
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Writing out command to execute
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
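The INFO:__main__ lines above are kolla_set_configs applying the COPY_ALWAYS strategy from /var/lib/kolla/config_files/config.json. A compressed Python sketch of that flow, assuming the conventional kolla config.json keys (config_files entries with source/dest, permissions entries with path/owner) rather than quoting kolla's actual source:

    import json
    import os
    import shutil
    import subprocess

    def set_configs(path="/var/lib/kolla/config_files/config.json"):
        with open(path) as f:
            cfg = json.load(f)
        for entry in cfg.get("config_files", []):
            # COPY_ALWAYS: delete the destination, then re-copy the source,
            # matching the "Deleting ..." / "Copying ..." lines above
            if os.path.exists(entry["dest"]):
                os.remove(entry["dest"])
            shutil.copy(entry["source"], entry["dest"])
        for perm in cfg.get("permissions", []):
            # matches the "Setting permission for ..." lines above
            subprocess.run(["chown", "-R", perm["owner"], perm["path"]], check=True)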
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: ++ cat /run_command
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: + CMD=/usr/bin/neutron-sriov-nic-agent
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: + ARGS=
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: + sudo kolla_copy_cacerts
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: + [[ ! -n '' ]]
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: + . kolla_extend_start
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: + umask 0022
Dec 05 09:45:03 np0005546420.localdomain neutron_sriov_agent[255641]: + exec /usr/bin/neutron-sriov-nic-agent
Dec 05 09:45:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:45:04.105 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:45:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:45:04.105 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:45:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:45:04.105 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:45:04 np0005546420.localdomain sudo[255763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-meerjecftvvvnxlkqbszegjeyofzmcdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927904.6932094-1083-118103720807435/AnsiballZ_systemd.py
Dec 05 09:45:04 np0005546420.localdomain sudo[255763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:05 np0005546420.localdomain python3.9[255765]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:45:05 np0005546420.localdomain systemd[1]: Stopping neutron_sriov_agent container...
Dec 05 09:45:05 np0005546420.localdomain systemd[1]: libpod-4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd.scope: Deactivated successfully.
Dec 05 09:45:05 np0005546420.localdomain systemd[1]: libpod-4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd.scope: Consumed 1.663s CPU time.
Dec 05 09:45:05 np0005546420.localdomain podman[255769]: 2025-12-05 09:45:05.395294579 +0000 UTC m=+0.074614021 container died 4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c3e7b7e9b0ea3e65eff717028260216463ba087d3c3e0f525dd22b36c7577d97'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:45:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd-userdata-shm.mount: Deactivated successfully.
Dec 05 09:45:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1dce727a881f7a19defc505b4fc5c3b728ae521086f5edce53a7b7a4aaf933a7-merged.mount: Deactivated successfully.
Dec 05 09:45:05 np0005546420.localdomain podman[255769]: 2025-12-05 09:45:05.444527435 +0000 UTC m=+0.123846867 container cleanup 4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c3e7b7e9b0ea3e65eff717028260216463ba087d3c3e0f525dd22b36c7577d97'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 09:45:05 np0005546420.localdomain podman[255769]: neutron_sriov_agent
Dec 05 09:45:05 np0005546420.localdomain podman[255794]: 2025-12-05 09:45:05.521876781 +0000 UTC m=+0.049706570 container cleanup 4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c3e7b7e9b0ea3e65eff717028260216463ba087d3c3e0f525dd22b36c7577d97'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:45:05 np0005546420.localdomain podman[255794]: neutron_sriov_agent
Dec 05 09:45:05 np0005546420.localdomain systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully.
Dec 05 09:45:05 np0005546420.localdomain systemd[1]: Stopped neutron_sriov_agent container.
Dec 05 09:45:05 np0005546420.localdomain systemd[1]: Starting neutron_sriov_agent container...
Dec 05 09:45:05 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:45:05 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dce727a881f7a19defc505b4fc5c3b728ae521086f5edce53a7b7a4aaf933a7/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:05 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dce727a881f7a19defc505b4fc5c3b728ae521086f5edce53a7b7a4aaf933a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 09:45:05 np0005546420.localdomain podman[255806]: 2025-12-05 09:45:05.645796571 +0000 UTC m=+0.098050769 container init 4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_managed=true, container_name=neutron_sriov_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c3e7b7e9b0ea3e65eff717028260216463ba087d3c3e0f525dd22b36c7577d97'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 09:45:05 np0005546420.localdomain podman[255806]: 2025-12-05 09:45:05.655009096 +0000 UTC m=+0.107263324 container start 4b4000a135361006a007148c8410f989d1932dfd02fcd2b14a5f4638b438fbbd (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c3e7b7e9b0ea3e65eff717028260216463ba087d3c3e0f525dd22b36c7577d97'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=neutron_sriov_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 09:45:05 np0005546420.localdomain podman[255806]: neutron_sriov_agent
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: + sudo -E kolla_set_configs
Dec 05 09:45:05 np0005546420.localdomain systemd[1]: Started neutron_sriov_agent container.
Dec 05 09:45:05 np0005546420.localdomain sudo[255763]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Validating config file
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Copying service configuration files
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Writing out command to execute
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: ++ cat /run_command
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: + CMD=/usr/bin/neutron-sriov-nic-agent
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: + ARGS=
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: + sudo kolla_copy_cacerts
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: + [[ ! -n '' ]]
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: + . kolla_extend_start
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: + umask 0022
Dec 05 09:45:05 np0005546420.localdomain neutron_sriov_agent[255821]: + exec /usr/bin/neutron-sriov-nic-agent
Dec 05 09:45:06 np0005546420.localdomain sshd[250758]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:45:06 np0005546420.localdomain systemd[1]: session-56.scope: Deactivated successfully.
Dec 05 09:45:06 np0005546420.localdomain systemd[1]: session-56.scope: Consumed 23.179s CPU time.
Dec 05 09:45:06 np0005546420.localdomain systemd-logind[762]: Session 56 logged out. Waiting for processes to exit.
Dec 05 09:45:06 np0005546420.localdomain systemd-logind[762]: Removed session 56.
Dec 05 09:45:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:45:06 np0005546420.localdomain podman[255853]: 2025-12-05 09:45:06.516322129 +0000 UTC m=+0.091769143 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:45:06 np0005546420.localdomain podman[255853]: 2025-12-05 09:45:06.617657199 +0000 UTC m=+0.193104283 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec 05 09:45:06 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:45:07 np0005546420.localdomain sudo[255878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:45:07 np0005546420.localdomain sudo[255878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:45:07 np0005546420.localdomain sudo[255878]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:07 np0005546420.localdomain sudo[255896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:45:07 np0005546420.localdomain sudo[255896]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:45:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:07.381 2 INFO neutron.common.config [-] Logging enabled!
Dec 05 09:45:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:07.381 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43
Dec 05 09:45:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:07.382 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}
Dec 05 09:45:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:07.382 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}
Dec 05 09:45:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:07.382 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}
Dec 05 09:45:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:07.382 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}
Dec 05 09:45:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:07.382 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005546420.localdomain'}
Dec 05 09:45:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:07.383 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-030c0f5d-1c45-4b04-86b5-c8659de51eec - - - - - -] RPC agent_id: nic-switch-agent.np0005546420.localdomain
Dec 05 09:45:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:07.387 2 INFO neutron.agent.agent_extensions_manager [None req-030c0f5d-1c45-4b04-86b5-c8659de51eec - - - - - -] Loaded agent extensions: ['qos']
Dec 05 09:45:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:07.388 2 INFO neutron.agent.agent_extensions_manager [None req-030c0f5d-1c45-4b04-86b5-c8659de51eec - - - - - -] Initializing agent extension 'qos'
Dec 05 09:45:07 np0005546420.localdomain sudo[255896]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:08 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:08.101 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-030c0f5d-1c45-4b04-86b5-c8659de51eec - - - - - -] Agent initialized successfully, now running... 
Dec 05 09:45:08 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:08.101 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-030c0f5d-1c45-4b04-86b5-c8659de51eec - - - - - -] SRIOV NIC Agent RPC Daemon Started!
Dec 05 09:45:08 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 09:45:08.102 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-030c0f5d-1c45-4b04-86b5-c8659de51eec - - - - - -] Agent out of sync with plugin!
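The startup lines above echo the agent's effective configuration. Reconstructed as the corresponding sriov_agent.ini options (section and option names follow neutron's convention; the values come from the log, the empty default is assumed):

    [sriov_nic]
    physical_device_mappings = dummy_sriov_net:dummy-dev
    exclude_devices =
    resource_provider_hypervisors = dummy-dev:np0005546420.localdomain

    [agent]
    extensions = qos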
Dec 05 09:45:09 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19335 DF PROTO=TCP SPT=40784 DPT=9102 SEQ=3297855435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACBF3D90000000001030307) 
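The MACSRC=/MACDST=/MACPROTO= fields mark the record above as an nftables log action with the prefix "DROPPING: ", here catching a TCP SYN to port 9102 arriving on br-ex. A hypothetical rule that would emit records in this shape (table and chain names assumed):

    nft add rule inet filter forward iifname "br-ex" tcp dport 9102 log prefix "DROPPING: " drop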
Dec 05 09:45:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:09.280 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:45:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:09.306 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:45:09 np0005546420.localdomain sudo[255948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:45:09 np0005546420.localdomain sudo[255948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:45:09 np0005546420.localdomain sudo[255948]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:10.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:45:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:10.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:45:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:11.037 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:45:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:11.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
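Each "Running periodic task ComputeManager.*" line is oslo.service's periodic-task loop dispatching one decorated method. A minimal sketch of how such a task is declared and driven (the class name and 60-second spacing are illustrative, not nova's actual values):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_rescued_instances(self, context):
            # each dispatch is logged as "Running periodic task ..." above
            pass

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # normally invoked from a timer loop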
Dec 05 09:45:11 np0005546420.localdomain sshd[255966]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:45:11 np0005546420.localdomain sshd[255966]: Accepted publickey for zuul from 192.168.122.30 port 43680 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:45:11 np0005546420.localdomain systemd-logind[762]: New session 57 of user zuul.
Dec 05 09:45:11 np0005546420.localdomain systemd[1]: Started Session 57 of User zuul.
Dec 05 09:45:11 np0005546420.localdomain sshd[255966]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:45:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:13.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:45:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:13.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:45:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:13.042 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:45:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:13.064 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:45:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:13.064 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:45:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:13.065 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:45:13 np0005546420.localdomain python3.9[256077]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.066 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.067 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.067 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.068 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.068 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:45:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:45:14 np0005546420.localdomain podman[256138]: 2025-12-05 09:45:14.522009027 +0000 UTC m=+0.093304282 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:45:14 np0005546420.localdomain podman[256138]: 2025-12-05 09:45:14.536442494 +0000 UTC m=+0.107737799 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.540 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
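The "Running cmd (subprocess)" / "returned: 0 in 0.472s" pair above is oslo.concurrency's subprocess wrapper timing the ceph call the resource tracker uses to size the RBD-backed disk pool. The equivalent direct call, as a sketch:

    from oslo_concurrency import processutils

    # returns (stdout, stderr); raises ProcessExecutionError on non-zero exit
    out, err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")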
Dec 05 09:45:14 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:45:14 np0005546420.localdomain sudo[256230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qinilfkgwvohxjzrjrjotjaevdvcorlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927914.400934-66-35356974829118/AnsiballZ_setup.py
Dec 05 09:45:14 np0005546420.localdomain sudo[256230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.760 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.762 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12937MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.762 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.763 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.823 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.824 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:45:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:14.846 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:45:14 np0005546420.localdomain python3.9[256232]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:45:15 np0005546420.localdomain sudo[256230]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:15.291 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:45:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:15.296 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:45:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:15.317 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
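Placement derives schedulable capacity from the inventory logged above as (total - reserved) * allocation_ratio. Worked through with these values as a small Python check:

    # values copied from the inventory data in the log line above
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0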
Dec 05 09:45:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:15.322 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:45:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:15.323 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:45:15 np0005546420.localdomain sudo[256315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dkzmrucojnmdtrhboccidkzheidswhbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927914.400934-66-35356974829118/AnsiballZ_dnf.py
Dec 05 09:45:15 np0005546420.localdomain sudo[256315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:15 np0005546420.localdomain python3.9[256317]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:45:16 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:45:16.324 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:45:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:45:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:45:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:45:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144238 "" "Go-http-client/1.1"
Dec 05 09:45:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:45:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16301 "" "Go-http-client/1.1"
Dec 05 09:45:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:45:18 np0005546420.localdomain systemd[1]: tmp-crun.7XDM4K.mount: Deactivated successfully.
Dec 05 09:45:18 np0005546420.localdomain podman[256322]: 2025-12-05 09:45:18.563133471 +0000 UTC m=+0.131816974 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 05 09:45:18 np0005546420.localdomain podman[256322]: 2025-12-05 09:45:18.594428911 +0000 UTC m=+0.163112424 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 09:45:18 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:45:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:45:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:45:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:45:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:45:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:45:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:45:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:45:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:45:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:45:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
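The appctl errors above mean the exporter could not find the daemons' control sockets; ovs-vswitchd and ovn-northd each expose a "<name>.<pid>.ctl" socket under their run directory, which is what "ovs-appctl -t" targets. On a compute-only node the ovn-northd socket is likely absent by design, since ovn-northd runs on the control plane. A quick check under the default run directories (both overridable via OVS_RUNDIR/OVN_RUNDIR, so the paths here are assumptions):

    import glob

    # Default rundirs; adjust if OVS_RUNDIR / OVN_RUNDIR are set differently.
    for pattern in ("/var/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket (daemon not running here?)")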
Dec 05 09:45:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:45:19 np0005546420.localdomain sudo[256315]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:19 np0005546420.localdomain podman[256341]: 2025-12-05 09:45:19.499156669 +0000 UTC m=+0.077854213 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:45:19 np0005546420.localdomain podman[256341]: 2025-12-05 09:45:19.506406533 +0000 UTC m=+0.085104097 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:45:19 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:45:20 np0005546420.localdomain sudo[256471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sdpcrqrsxgogizpyczebgccznwmorxbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927919.573959-102-175521921762906/AnsiballZ_systemd.py
Dec 05 09:45:20 np0005546420.localdomain sudo[256471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:20 np0005546420.localdomain python3.9[256473]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 09:45:20 np0005546420.localdomain sudo[256471]: pam_unix(sudo:session): session closed for user root
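Each Ansible task shows up in the journal twice: the sudo become wrapper ("/bin/sh -c echo BECOME-SUCCESS-…; python3 AnsiballZ_<module>.py", where the echoed token confirms privilege escalation to the controller) and the module's own invocation line. The systemd task above, enabled=True state=started on openvswitch.service, is roughly equivalent to the following sketch; the module itself talks to systemd directly, plain systemctl calls are used here for illustration:

    import subprocess

    # Equivalent of: ansible.builtin.systemd:
    #   name=openvswitch.service enabled=true state=started
    subprocess.run(["systemctl", "enable", "openvswitch.service"], check=True)
    subprocess.run(["systemctl", "start", "openvswitch.service"], check=True)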
Dec 05 09:45:21 np0005546420.localdomain sudo[256584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqbjsltwxgqweemnesexfunxipxipymp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927920.8594716-129-201033270662539/AnsiballZ_file.py
Dec 05 09:45:21 np0005546420.localdomain sudo[256584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:21 np0005546420.localdomain python3.9[256586]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:21 np0005546420.localdomain sudo[256584]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:21 np0005546420.localdomain sudo[256694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-edccpgradqpqlvnklmprjfkecczkyxwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927921.5837348-129-213996436960838/AnsiballZ_file.py
Dec 05 09:45:21 np0005546420.localdomain sudo[256694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:22 np0005546420.localdomain python3.9[256696]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:22 np0005546420.localdomain sudo[256694]: pam_unix(sudo:session): session closed for user root
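The ansible.builtin.file tasks in this stretch create the container bind-mount directories with a fixed owner, mode, and SELinux type. A rough equivalent of the /var/lib/neutron task, using the chcon binary where the module would normally use libselinux bindings:

    import os
    import shutil
    import subprocess

    path = "/var/lib/neutron"
    os.makedirs(path, mode=0o750, exist_ok=True)
    shutil.chown(path, user="zuul", group="zuul")
    os.chmod(path, 0o750)  # makedirs' mode is masked by umask, so set it explicitly
    # setype=container_file_t lets the directory be bind-mounted into containers
    subprocess.run(["chcon", "-t", "container_file_t", path], check=True)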
Dec 05 09:45:22 np0005546420.localdomain sudo[256804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btixtgtdxwmtmdatjjqadnfwxxvtrlwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927922.6381326-129-8094340720853/AnsiballZ_file.py
Dec 05 09:45:22 np0005546420.localdomain sudo[256804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:23 np0005546420.localdomain python3.9[256806]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:23 np0005546420.localdomain sudo[256804]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:23 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45088 DF PROTO=TCP SPT=38678 DPT=9102 SEQ=3122669803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACC2C080000000001030307) 
Dec 05 09:45:23 np0005546420.localdomain sudo[256914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvmukkdujyfdngwfljrsmfhuzyvaxemw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927923.25732-129-94741013638993/AnsiballZ_file.py
Dec 05 09:45:23 np0005546420.localdomain sudo[256914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:23 np0005546420.localdomain python3.9[256916]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:23 np0005546420.localdomain sudo[256914]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45089 DF PROTO=TCP SPT=38678 DPT=9102 SEQ=3122669803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACC301A0000000001030307) 
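The kernel "DROPPING:" lines are netfilter LOG output; the prefix suggests an iptables/nftables LOG rule (e.g. --log-prefix "DROPPING: ") sitting just before a drop on br-ex, though that rule itself is an inference, not shown in this log. They record repeated TCP SYNs from 192.168.122.10 to port 9102 being refused. The key=value payload is easy to pick apart for triage; a minimal parser:

    import re

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "SRC=192.168.122.10 DST=192.168.122.107 PROTO=TCP SPT=38678 DPT=9102")
    # Grab every KEY=value pair; empty values like "OUT=" are simply skipped.
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    print(fields["SRC"], "->", fields["DST"], "dport", fields["DPT"])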
Dec 05 09:45:24 np0005546420.localdomain sudo[257024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wvcjtfrmmifrzolrlyqqfduqoglgvyfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927923.909895-129-272079687308865/AnsiballZ_file.py
Dec 05 09:45:24 np0005546420.localdomain sudo[257024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:24 np0005546420.localdomain python3.9[257026]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:24 np0005546420.localdomain sudo[257024]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:25 np0005546420.localdomain sudo[257134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yfiqkllrrmtdutjstyaavkekyglfgjkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927924.9514983-129-36958825087782/AnsiballZ_file.py
Dec 05 09:45:25 np0005546420.localdomain sudo[257134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:25 np0005546420.localdomain python3.9[257136]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19336 DF PROTO=TCP SPT=40784 DPT=9102 SEQ=3297855435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACC33D90000000001030307) 
Dec 05 09:45:25 np0005546420.localdomain sudo[257134]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:25 np0005546420.localdomain sudo[257244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hppmajdbdifykvknqwuyafnxjdrrsenc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927925.5804548-129-242579152490721/AnsiballZ_file.py
Dec 05 09:45:25 np0005546420.localdomain sudo[257244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:45:25 np0005546420.localdomain systemd[1]: tmp-crun.qgDqXW.mount: Deactivated successfully.
Dec 05 09:45:25 np0005546420.localdomain podman[257247]: 2025-12-05 09:45:25.970714659 +0000 UTC m=+0.136622644 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 05 09:45:26 np0005546420.localdomain podman[257247]: 2025-12-05 09:45:26.011236314 +0000 UTC m=+0.177144269 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 09:45:26 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:45:26 np0005546420.localdomain python3.9[257246]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:26 np0005546420.localdomain sudo[257244]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:26 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45090 DF PROTO=TCP SPT=38678 DPT=9102 SEQ=3122669803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACC381A0000000001030307) 
Dec 05 09:45:26 np0005546420.localdomain sudo[257374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdzbvmfqgppjeeufbbcxwwxucfgmugsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927926.1970336-279-46429458184783/AnsiballZ_stat.py
Dec 05 09:45:26 np0005546420.localdomain sudo[257374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:26 np0005546420.localdomain python3.9[257376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:26 np0005546420.localdomain sudo[257374]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:27 np0005546420.localdomain sudo[257462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzpowyrbmnqysoiaecnqstqizasakqdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927926.1970336-279-46429458184783/AnsiballZ_copy.py
Dec 05 09:45:27 np0005546420.localdomain sudo[257462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:27 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50930 DF PROTO=TCP SPT=56888 DPT=9102 SEQ=3543656540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACC3BD90000000001030307) 
Dec 05 09:45:27 np0005546420.localdomain python3.9[257464]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927926.1970336-279-46429458184783/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:27 np0005546420.localdomain sudo[257462]: pam_unix(sudo:session): session closed for user root
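The stat/copy pairs here are Ansible's idempotency check: the destination is stat'ed with checksum_algorithm=sha1 and the copy only happens when the content hash differs. The checksum logged for neutron_dhcp_agent.yaml (3ebfe8ab…) is the plain SHA-1 of the file body, reproducible as:

    import hashlib

    def ansible_style_checksum(path):
        # Same value ansible.legacy.stat reports for checksum_algorithm=sha1:
        # the hex SHA-1 digest of the file contents, read in chunks.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(ansible_style_checksum(
        "/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml"))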
Dec 05 09:45:28 np0005546420.localdomain python3.9[257572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:28 np0005546420.localdomain python3.9[257658]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927927.752341-324-279150176605018/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:29 np0005546420.localdomain python3.9[257766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:45:29 np0005546420.localdomain podman[257785]: 2025-12-05 09:45:29.504623009 +0000 UTC m=+0.082006011 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm)
Dec 05 09:45:29 np0005546420.localdomain podman[257785]: 2025-12-05 09:45:29.524485484 +0000 UTC m=+0.101868526 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git)
Dec 05 09:45:29 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:45:30 np0005546420.localdomain python3.9[257872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927928.8737402-324-221430174183674/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45091 DF PROTO=TCP SPT=38678 DPT=9102 SEQ=3122669803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACC47D90000000001030307) 
Dec 05 09:45:30 np0005546420.localdomain python3.9[257980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:31 np0005546420.localdomain python3.9[258066]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927930.1380193-324-137919941612064/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=3f44b386f36a92afa987fbfb793d4a82ce2190a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:45:31 np0005546420.localdomain podman[258084]: 2025-12-05 09:45:31.497302093 +0000 UTC m=+0.078637047 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:45:31 np0005546420.localdomain podman[258084]: 2025-12-05 09:45:31.531033128 +0000 UTC m=+0.112368082 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:45:31 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:45:32 np0005546420.localdomain python3.9[258197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:32 np0005546420.localdomain python3.9[258283]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927931.889601-498-13869558849410/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=75518b9ca1c9a507fba8f4d8f8342e6edd3bf5ee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
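The generated files land in the directory mounted at /etc/neutron.conf.d inside the agent container as a numbered overlay: 01-neutron.conf, 01-rootwrap.conf, 01-neutron-dhcp-agent.conf, then 10-neutron-dhcp.conf. oslo.config reads a config directory in sorted filename order with later files overriding earlier ones; the stdlib configparser sketch below shows the same last-file-wins precedence ("debug" is just an example option name, not taken from this log):

    import configparser
    import glob

    cfg = configparser.ConfigParser()
    # Sorted read order: 01-* first, 10-* last; a key set in several files
    # takes its value from the last (highest-numbered) file read.
    cfg.read(sorted(glob.glob("/etc/neutron.conf.d/*.conf")))
    print(cfg.get("DEFAULT", "debug", fallback="<unset>"))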
Dec 05 09:45:33 np0005546420.localdomain python3.9[258391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:34 np0005546420.localdomain python3.9[258477]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927933.2466955-543-245675905872195/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:35 np0005546420.localdomain python3.9[258585]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:35 np0005546420.localdomain python3.9[258671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927934.349667-543-231578107184385/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:36 np0005546420.localdomain python3.9[258779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:36 np0005546420.localdomain python3.9[258834]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:45:36 np0005546420.localdomain systemd[1]: tmp-crun.ZIKWsH.mount: Deactivated successfully.
Dec 05 09:45:36 np0005546420.localdomain podman[258835]: 2025-12-05 09:45:36.872036412 +0000 UTC m=+0.089701490 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 05 09:45:36 np0005546420.localdomain podman[258835]: 2025-12-05 09:45:36.952310559 +0000 UTC m=+0.169975607 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_id=ovn_controller)
Dec 05 09:45:36 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:45:37 np0005546420.localdomain python3.9[258967]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:37 np0005546420.localdomain python3.9[259053]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927936.90639-630-156455875387827/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:38 np0005546420.localdomain python3.9[259161]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:45:38 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45092 DF PROTO=TCP SPT=38678 DPT=9102 SEQ=3122669803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACC67DA0000000001030307) 
Dec 05 09:45:38 np0005546420.localdomain sudo[259271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-emhtnnzxowzmcpqmkzsuqldnfegfwvkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927938.7092183-735-154681277220737/AnsiballZ_file.py
Dec 05 09:45:38 np0005546420.localdomain sudo[259271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:39 np0005546420.localdomain python3.9[259273]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:39 np0005546420.localdomain sudo[259271]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:39 np0005546420.localdomain sudo[259381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptmghzlamwlsmxifyjrgyvgsllldycee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927939.3731248-759-199158844079188/AnsiballZ_stat.py
Dec 05 09:45:39 np0005546420.localdomain sudo[259381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:39 np0005546420.localdomain python3.9[259383]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:39 np0005546420.localdomain sudo[259381]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:40 np0005546420.localdomain sudo[259438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrhxvhcbwjkgtsawiovbrimnouxxtfxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927939.3731248-759-199158844079188/AnsiballZ_file.py
Dec 05 09:45:40 np0005546420.localdomain sudo[259438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:40 np0005546420.localdomain python3.9[259440]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:40 np0005546420.localdomain sudo[259438]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:40 np0005546420.localdomain sudo[259548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-flvrscrtcvfuqvwnfdklexaowypacnrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927940.3960388-759-82942846948608/AnsiballZ_stat.py
Dec 05 09:45:40 np0005546420.localdomain sudo[259548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:40 np0005546420.localdomain python3.9[259550]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:40 np0005546420.localdomain sudo[259548]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:41 np0005546420.localdomain sudo[259605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rqrkawrrphkxzistqulppdsuxmhbksqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927940.3960388-759-82942846948608/AnsiballZ_file.py
Dec 05 09:45:41 np0005546420.localdomain sudo[259605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:41 np0005546420.localdomain python3.9[259607]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:41 np0005546420.localdomain sudo[259605]: pam_unix(sudo:session): session closed for user root
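The two helpers installed under /var/local/libexec are root-only (mode 0700). Judging by their names and the edpm-container-shutdown.service unit installed just below, the shutdown helper's job is to stop the podman-managed containers cleanly before the host powers off. A minimal sketch of that idea, not the shipped script (whose contents are not in this log):

    import subprocess

    # Stop every running podman container with a grace period, the kind of
    # cleanup a shutdown unit performs before poweroff.
    running = subprocess.run(
        ["podman", "ps", "--quiet"], capture_output=True, text=True, check=True
    ).stdout.split()
    for cid in running:
        subprocess.run(["podman", "stop", "--time", "30", cid], check=False)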
Dec 05 09:45:41 np0005546420.localdomain sudo[259715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhjolhgguvgzjoassdppaozxmvwmjpte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927941.4315124-828-234156729631671/AnsiballZ_file.py
Dec 05 09:45:41 np0005546420.localdomain sudo[259715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:41 np0005546420.localdomain python3.9[259717]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:45:41 np0005546420.localdomain sudo[259715]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:42 np0005546420.localdomain sudo[259825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ghsvnvpcyrbdsbpxjnivhljaebuhkrnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927942.2288015-852-232538240541183/AnsiballZ_stat.py
Dec 05 09:45:42 np0005546420.localdomain sudo[259825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:42 np0005546420.localdomain python3.9[259827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:42 np0005546420.localdomain sudo[259825]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:42 np0005546420.localdomain sudo[259882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvlttrrwpaxbfvvcxkgyyddsojhefpky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927942.2288015-852-232538240541183/AnsiballZ_file.py
Dec 05 09:45:42 np0005546420.localdomain sudo[259882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:43 np0005546420.localdomain python3.9[259884]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:45:43 np0005546420.localdomain sudo[259882]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:43 np0005546420.localdomain sudo[259992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zodlpqtsbupyphmxrroznytactgrpspm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927943.3994267-888-41029180139459/AnsiballZ_stat.py
Dec 05 09:45:43 np0005546420.localdomain sudo[259992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:43 np0005546420.localdomain python3.9[259994]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:43 np0005546420.localdomain sudo[259992]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:44 np0005546420.localdomain sudo[260049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxujmobqzbyrygyxovsogdjonitfglty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927943.3994267-888-41029180139459/AnsiballZ_file.py
Dec 05 09:45:44 np0005546420.localdomain sudo[260049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:44 np0005546420.localdomain python3.9[260051]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:45:44 np0005546420.localdomain sudo[260049]: pam_unix(sudo:session): session closed for user root
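Preset files under /etc/systemd/system-preset decide whether a unit defaults to enabled when "systemctl preset" runs; per systemd.preset(5) they hold "enable <unit>" / "disable <unit>" lines. A sketch that writes such a preset and applies it; the one-line content is an assumption based on the file name, since the log does not show the file body:

    import pathlib
    import subprocess

    preset = pathlib.Path(
        "/etc/systemd/system-preset/91-edpm-container-shutdown.preset")
    # Assumed content: a single "enable" directive for the matching unit.
    preset.write_text("enable edpm-container-shutdown.service\n")
    subprocess.run(["systemctl", "preset", "edpm-container-shutdown.service"],
                   check=True)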
Dec 05 09:45:44 np0005546420.localdomain sudo[260159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bubtsznvpocwohetktunvgybofldeckf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927944.483045-924-261683129148793/AnsiballZ_systemd.py
Dec 05 09:45:44 np0005546420.localdomain sudo[260159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:45:44 np0005546420.localdomain systemd[1]: tmp-crun.yIUQNe.mount: Deactivated successfully.
Dec 05 09:45:44 np0005546420.localdomain podman[260162]: 2025-12-05 09:45:44.878251875 +0000 UTC m=+0.088868323 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 09:45:44 np0005546420.localdomain podman[260162]: 2025-12-05 09:45:44.890294839 +0000 UTC m=+0.100911357 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 09:45:44 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:45:45 np0005546420.localdomain python3.9[260161]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:45:45 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:45:45 np0005546420.localdomain systemd-rc-local-generator[260201]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:45:45 np0005546420.localdomain systemd-sysv-generator[260205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:45:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:45:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
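The virt*d warnings during the reload mean those unit files ship Type=notify-reload, a directive this host's systemd predates (it arrived in systemd 253); the line is ignored and the unit falls back to the default service type. Likewise MemoryLimit= is the old cgroup-v1 name that MemoryMax= replaces. A quick version check for the notify-reload case:

    import subprocess

    # Type=notify-reload needs systemd >= 253; older versions log the
    # "Failed to parse service type" warning and ignore the directive.
    out = subprocess.run(["systemctl", "--version"], capture_output=True,
                         text=True, check=True).stdout
    version = int(out.split()[1])  # first line looks like: "systemd 252 (…)"
    print("notify-reload supported:", version >= 253)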
Dec 05 09:45:45 np0005546420.localdomain sudo[260159]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:45 np0005546420.localdomain sudo[260325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lewmnrtjkkqctlasipwxrcxzydigmkor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927945.7074497-948-125223753169085/AnsiballZ_stat.py
Dec 05 09:45:45 np0005546420.localdomain sudo[260325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:46 np0005546420.localdomain python3.9[260327]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:46 np0005546420.localdomain sudo[260325]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:46 np0005546420.localdomain sudo[260382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zbyaleukhcumxnnffgxmayhvsrsxkbsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927945.7074497-948-125223753169085/AnsiballZ_file.py
Dec 05 09:45:46 np0005546420.localdomain sudo[260382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:46 np0005546420.localdomain python3.9[260384]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:45:46 np0005546420.localdomain sudo[260382]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:47 np0005546420.localdomain sudo[260492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xabyprmqbycrxywjmrwdzkiksylfsnnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927946.8110797-984-92803145254825/AnsiballZ_stat.py
Dec 05 09:45:47 np0005546420.localdomain sudo[260492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:45:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:45:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:45:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144238 "" "Go-http-client/1.1"
Dec 05 09:45:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:45:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16306 "" "Go-http-client/1.1"
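
Note: the two GET requests above are podman's libpod REST API answering over its local UNIX socket. A minimal sketch of issuing the same containers/json query from Python, assuming the socket path that appears later in this log as CONTAINER_HOST=unix:///run/podman/podman.sock:

    import socket

    SOCK = "/run/podman/podman.sock"  # path assumed from CONTAINER_HOST below

    request = (
        "GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
        "Host: localhost\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(request.encode())
        raw = b""
        while chunk := s.recv(65536):
            raw += chunk

    # The body may be chunked, so only the status line is printed here.
    print(raw.split(b"\r\n", 1)[0].decode())  # e.g. HTTP/1.1 200 OK
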
Dec 05 09:45:47 np0005546420.localdomain python3.9[260494]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:47 np0005546420.localdomain sudo[260492]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:47 np0005546420.localdomain sudo[260549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqpwygwibdtnuoapltzpsrjbyguhgdvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927946.8110797-984-92803145254825/AnsiballZ_file.py
Dec 05 09:45:47 np0005546420.localdomain sudo[260549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:47 np0005546420.localdomain python3.9[260551]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:45:47 np0005546420.localdomain sudo[260549]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:48 np0005546420.localdomain sudo[260659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nxixriushezczmimmygbskfunxzizvhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927948.0165153-1020-68038090000231/AnsiballZ_systemd.py
Dec 05 09:45:48 np0005546420.localdomain sudo[260659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:48 np0005546420.localdomain python3.9[260661]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:45:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:45:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:45:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:45:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:45:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:45:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:45:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:45:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:45:48 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 09:45:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:45:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:45:48 np0005546420.localdomain openstack_network_exporter[242579]: 
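
Note: the exporter errors above mean no appctl control socket could be found for ovn-northd or the OVS database server on this node, so those collectors are skipped. The presence check can be reproduced roughly as below; both runtime directories are the conventional defaults, not values taken from this log:

    import glob

    # Daemons normally expose control sockets as <rundir>/<name>.<pid>.ctl.
    CANDIDATES = {
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    }

    for daemon, pattern in CANDIDATES.items():
        matches = glob.glob(pattern)
        print(daemon, "->", matches[0] if matches else "no control socket files found")
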
Dec 05 09:45:48 np0005546420.localdomain systemd-rc-local-generator[260699]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:45:48 np0005546420.localdomain podman[260663]: 2025-12-05 09:45:48.899208975 +0000 UTC m=+0.090614639 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:45:48 np0005546420.localdomain systemd-sysv-generator[260702]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:45:48 np0005546420.localdomain podman[260663]: 2025-12-05 09:45:48.931309249 +0000 UTC m=+0.122714893 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:48 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:45:49 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
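
Note: the Started / health_status / exec_died / "Deactivated successfully" group above is one timer-driven healthcheck pass: systemd runs a transient `podman healthcheck run <id>` unit, podman records the result as a container event, and the unit exits. The same probe can be triggered by hand; a sketch:

    import subprocess

    # Exit code 0 means the container's configured healthcheck passed.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]
    ).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
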
Dec 05 09:45:49 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 09:45:49 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:45:49 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:45:49 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 09:45:49 np0005546420.localdomain sudo[260659]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:49 np0005546420.localdomain sudo[260829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-goojfcywhdykknbcidxreoxgiggdwpee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927949.709672-1050-25609032925057/AnsiballZ_file.py
Dec 05 09:45:49 np0005546420.localdomain sudo[260829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:45:50 np0005546420.localdomain podman[260831]: 2025-12-05 09:45:50.028415428 +0000 UTC m=+0.057924716 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:45:50 np0005546420.localdomain podman[260831]: 2025-12-05 09:45:50.038293503 +0000 UTC m=+0.067802771 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:45:50 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:45:50 np0005546420.localdomain systemd[1]: tmp-crun.d6zzru.mount: Deactivated successfully.
Dec 05 09:45:50 np0005546420.localdomain python3.9[260832]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:45:50 np0005546420.localdomain sudo[260829]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:50 np0005546420.localdomain sudo[260963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ykujurgabzpnjoydhcoxammhjhigjtve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927950.466832-1074-68155860193539/AnsiballZ_stat.py
Dec 05 09:45:50 np0005546420.localdomain sudo[260963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:50 np0005546420.localdomain python3.9[260965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:45:50 np0005546420.localdomain sudo[260963]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:51 np0005546420.localdomain sudo[261051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqxkrokfrtottrgoetkijyqpgqbhormy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927950.466832-1074-68155860193539/AnsiballZ_copy.py
Dec 05 09:45:51 np0005546420.localdomain sudo[261051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:51 np0005546420.localdomain python3.9[261053]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927950.466832-1074-68155860193539/.source.json _original_basename=.6ha95zgy follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:45:51 np0005546420.localdomain sudo[261051]: pam_unix(sudo:session): session closed for user root
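
Note: the copy task above logs checksum=c62829... because ansible's stat/copy modules verify file content with SHA-1 (checksum_algorithm=sha1 in the stat calls throughout this log). A minimal equivalent check:

    import hashlib

    def file_sha1(path: str, chunk_size: int = 65536) -> str:
        """Stream a file through SHA-1, as ansible's checksum does."""
        digest = hashlib.sha1()
        with open(path, "rb") as fh:
            while chunk := fh.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    print(file_sha1("/var/lib/kolla/config_files/neutron_dhcp_agent.json"))
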
Dec 05 09:45:51 np0005546420.localdomain sudo[261161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arhhqwncahtdavxqvcjrljdlbczqohsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927951.723535-1119-41630854219385/AnsiballZ_file.py
Dec 05 09:45:51 np0005546420.localdomain sudo[261161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:52 np0005546420.localdomain python3.9[261163]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:45:52 np0005546420.localdomain sudo[261161]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:52 np0005546420.localdomain sudo[261271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gddlucbrvaimdoblxmztrjetducutxsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927952.430022-1143-24916327694120/AnsiballZ_stat.py
Dec 05 09:45:52 np0005546420.localdomain sudo[261271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:52 np0005546420.localdomain sudo[261271]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:53 np0005546420.localdomain sudo[261359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apaxlljeuebqxmgriffifcsdnmjagalz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927952.430022-1143-24916327694120/AnsiballZ_copy.py
Dec 05 09:45:53 np0005546420.localdomain sudo[261359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:53 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53680 DF PROTO=TCP SPT=53512 DPT=9102 SEQ=1617023140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACCA1390000000001030307) 
Dec 05 09:45:53 np0005546420.localdomain sudo[261359]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:54 np0005546420.localdomain sudo[261469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wogftvyeinkaqwnudqpmneqfupeynneu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927953.8769145-1194-9116744395247/AnsiballZ_container_config_data.py
Dec 05 09:45:54 np0005546420.localdomain sudo[261469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53681 DF PROTO=TCP SPT=53512 DPT=9102 SEQ=1617023140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACCA5590000000001030307) 
Dec 05 09:45:54 np0005546420.localdomain python3.9[261471]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False
Dec 05 09:45:54 np0005546420.localdomain sudo[261469]: pam_unix(sudo:session): session closed for user root
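
Note: per the parameters logged above, container_config_data collects every file matching config_pattern=*.json under config_path for the neutron_dhcp step. A rough stand-in for that collection, assuming it simply loads each matching file (the real module may do more):

    import glob
    import json
    import os

    config_path = "/var/lib/edpm-config/container-startup-config/neutron_dhcp"

    configs = {}
    for path in sorted(glob.glob(os.path.join(config_path, "*.json"))):
        with open(path, encoding="utf-8") as fh:
            configs[os.path.basename(path)] = json.load(fh)

    print(list(configs))  # e.g. ['neutron_dhcp_agent.json']
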
Dec 05 09:45:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45093 DF PROTO=TCP SPT=38678 DPT=9102 SEQ=3122669803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACCA7DA0000000001030307) 
Dec 05 09:45:55 np0005546420.localdomain sudo[261579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yfafgybbcrbrazhhfdptwuidccbeqxdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927954.775682-1221-63936393024812/AnsiballZ_container_config_hash.py
Dec 05 09:45:55 np0005546420.localdomain sudo[261579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:55 np0005546420.localdomain python3.9[261581]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:45:55 np0005546420.localdomain sudo[261579]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:56 np0005546420.localdomain sudo[261689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ejmefyqbjoazavciugykdfyghkvjcovo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927955.6649718-1248-270565725137907/AnsiballZ_podman_container_info.py
Dec 05 09:45:56 np0005546420.localdomain sudo[261689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:45:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:45:56 np0005546420.localdomain systemd[1]: tmp-crun.beeHaB.mount: Deactivated successfully.
Dec 05 09:45:56 np0005546420.localdomain podman[261691]: 2025-12-05 09:45:56.179124947 +0000 UTC m=+0.104445926 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 05 09:45:56 np0005546420.localdomain podman[261691]: 2025-12-05 09:45:56.21602828 +0000 UTC m=+0.141349329 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd)
Dec 05 09:45:56 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:45:56 np0005546420.localdomain python3.9[261692]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 09:45:56 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53682 DF PROTO=TCP SPT=53512 DPT=9102 SEQ=1617023140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACCAD590000000001030307) 
Dec 05 09:45:56 np0005546420.localdomain sudo[261689]: pam_unix(sudo:session): session closed for user root
Dec 05 09:45:57 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19337 DF PROTO=TCP SPT=40784 DPT=9102 SEQ=3297855435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACCB1DA0000000001030307) 
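
Note: the kernel DROPPING entries are netfilter LOG output (the DROPPING: prefix is set by the logging rule itself) for TCP SYNs to port 9102 that the host firewall then drops. The record is a run of fixed key=value tokens, so it parses with a one-line regex; a sketch against an abbreviated copy of the line above:

    import re

    LINE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "SRC=192.168.122.10 DST=192.168.122.107 LEN=60 "
            "PROTO=TCP SPT=40784 DPT=9102")

    # key=value tokens; values may be empty (e.g. OUT=).
    fields = dict(re.findall(r"(\w+)=(\S*)", LINE))
    print(fields["SRC"], "->", fields["DST"] + ":" + fields["DPT"], fields["PROTO"])
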
Dec 05 09:46:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:46:00 np0005546420.localdomain podman[261802]: 2025-12-05 09:46:00.525680864 +0000 UTC m=+0.092311361 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350)
Dec 05 09:46:00 np0005546420.localdomain podman[261802]: 2025-12-05 09:46:00.559550313 +0000 UTC m=+0.126180870 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 09:46:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53683 DF PROTO=TCP SPT=53512 DPT=9102 SEQ=1617023140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACCBD190000000001030307) 
Dec 05 09:46:00 np0005546420.localdomain sudo[261864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdogczgfcxipeepekgqluginhsuwjmbi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764927960.1256897-1287-103521666420671/AnsiballZ_edpm_container_manage.py
Dec 05 09:46:00 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:46:00 np0005546420.localdomain sudo[261864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:00 np0005546420.localdomain python3[261866]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:46:01 np0005546420.localdomain podman[261904]: 
Dec 05 09:46:01 np0005546420.localdomain podman[261904]: 2025-12-05 09:46:01.149418397 +0000 UTC m=+0.086602643 container create d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=neutron_dhcp, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=neutron_dhcp_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c591c2bebc185e416bb8472481c989371c8197eeabaaf998e8f72387a4f59dcc'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:46:01 np0005546420.localdomain podman[261904]: 2025-12-05 09:46:01.100572554 +0000 UTC m=+0.037756830 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 09:46:01 np0005546420.localdomain python3[261866]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=c591c2bebc185e416bb8472481c989371c8197eeabaaf998e8f72387a4f59dcc --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c591c2bebc185e416bb8472481c989371c8197eeabaaf998e8f72387a4f59dcc'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 09:46:01 np0005546420.localdomain sudo[261864]: pam_unix(sudo:session): session closed for user root
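
Note: in the create command above, edpm_ansible stores the full config_data dict as a container label alongside config_id and container_name, which gives later runs something to compare the desired configuration against. Reading those labels back, as a sketch via the podman CLI:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{ json .Config.Labels }}",
         "neutron_dhcp_agent"],
        check=True, capture_output=True, text=True,
    ).stdout

    labels = json.loads(out)
    print(labels["config_id"], labels["container_name"])
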
Dec 05 09:46:01 np0005546420.localdomain sudo[262049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttnlpwxynzigctyryizxymoryljugnxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927961.5391674-1311-146604647239521/AnsiballZ_stat.py
Dec 05 09:46:01 np0005546420.localdomain sudo[262049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:46:01 np0005546420.localdomain systemd[1]: tmp-crun.W61N17.mount: Deactivated successfully.
Dec 05 09:46:01 np0005546420.localdomain podman[262052]: 2025-12-05 09:46:01.924227951 +0000 UTC m=+0.102595900 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:46:01 np0005546420.localdomain podman[262052]: 2025-12-05 09:46:01.938503523 +0000 UTC m=+0.116871512 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:46:01 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:46:01 np0005546420.localdomain python3.9[262051]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:46:02 np0005546420.localdomain sudo[262049]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:02 np0005546420.localdomain sudo[262185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cclymgwcuhisvtmmrrntsogcqpggisoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927962.3421113-1338-48794560962155/AnsiballZ_file.py
Dec 05 09:46:02 np0005546420.localdomain sudo[262185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:02 np0005546420.localdomain python3.9[262187]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:02 np0005546420.localdomain sudo[262185]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:03 np0005546420.localdomain sudo[262240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awzlypztoiumptvitdlxjsbixcapqdwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927962.3421113-1338-48794560962155/AnsiballZ_stat.py
Dec 05 09:46:03 np0005546420.localdomain sudo[262240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:03 np0005546420.localdomain python3.9[262242]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:46:03 np0005546420.localdomain sudo[262240]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:03 np0005546420.localdomain sudo[262349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdqrhfxvlcshyuezgbpaugzsraucuopu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927963.2984352-1338-157787789161431/AnsiballZ_copy.py
Dec 05 09:46:03 np0005546420.localdomain sudo[262349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:03 np0005546420.localdomain python3.9[262351]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927963.2984352-1338-157787789161431/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:03 np0005546420.localdomain sudo[262349]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:46:04.106 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:46:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:46:04.106 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:46:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:46:04.107 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:46:04 np0005546420.localdomain sudo[262404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lohvyogbbtkpoqutgbokuknanspiroeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927963.2984352-1338-157787789161431/AnsiballZ_systemd.py
Dec 05 09:46:04 np0005546420.localdomain sudo[262404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:04 np0005546420.localdomain python3.9[262406]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:46:04 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:46:04 np0005546420.localdomain systemd-sysv-generator[262431]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:46:04 np0005546420.localdomain systemd-rc-local-generator[262428]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:46:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:46:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:04 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:04 np0005546420.localdomain sudo[262404]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:05 np0005546420.localdomain sudo[262495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qfvbsbewuorgvlfaqrcqnltkcrphimeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927963.2984352-1338-157787789161431/AnsiballZ_systemd.py
Dec 05 09:46:05 np0005546420.localdomain sudo[262495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:05 np0005546420.localdomain python3.9[262497]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
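
Note: the ansible-systemd call above (state=restarted, enabled=True) amounts to enabling the freshly installed edpm_neutron_dhcp_agent.service and restarting it, which produces the reload and "Starting neutron_dhcp_agent container..." lines that follow. Roughly, in terms of the underlying systemctl calls:

    import subprocess

    UNIT = "edpm_neutron_dhcp_agent.service"

    # Enable for boot, then restart so the new container definition applies.
    subprocess.run(["systemctl", "enable", UNIT], check=True)
    subprocess.run(["systemctl", "restart", UNIT], check=True)
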
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:46:05 np0005546420.localdomain systemd-sysv-generator[262524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:46:05 np0005546420.localdomain systemd-rc-local-generator[262520]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:46:05 np0005546420.localdomain systemd[1]: Starting neutron_dhcp_agent container...
Dec 05 09:46:06 np0005546420.localdomain systemd[1]: tmp-crun.wjrDyg.mount: Deactivated successfully.
Dec 05 09:46:06 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:46:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8b78cbcfa71ed83b697c574a6be1b9fddf7acc7cbb76f3912a2f3285d04eb6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:06 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8b78cbcfa71ed83b697c574a6be1b9fddf7acc7cbb76f3912a2f3285d04eb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:06 np0005546420.localdomain podman[262538]: 2025-12-05 09:46:06.046186769 +0000 UTC m=+0.121841605 container init d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c591c2bebc185e416bb8472481c989371c8197eeabaaf998e8f72387a4f59dcc'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_dhcp, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_dhcp_agent)
Dec 05 09:46:06 np0005546420.localdomain podman[262538]: 2025-12-05 09:46:06.052909788 +0000 UTC m=+0.128564584 container start d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c591c2bebc185e416bb8472481c989371c8197eeabaaf998e8f72387a4f59dcc'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_dhcp, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=neutron_dhcp_agent)
Dec 05 09:46:06 np0005546420.localdomain podman[262538]: neutron_dhcp_agent
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: + sudo -E kolla_set_configs
Dec 05 09:46:06 np0005546420.localdomain systemd[1]: Started neutron_dhcp_agent container.
Dec 05 09:46:06 np0005546420.localdomain sudo[262495]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Validating config file
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Copying service configuration files
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Writing out command to execute
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: ++ cat /run_command
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: + CMD=/usr/bin/neutron-dhcp-agent
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: + ARGS=
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: + sudo kolla_copy_cacerts
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: + [[ ! -n '' ]]
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: + . kolla_extend_start
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: Running command: '/usr/bin/neutron-dhcp-agent'
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: + umask 0022
Dec 05 09:46:06 np0005546420.localdomain neutron_dhcp_agent[262552]: + exec /usr/bin/neutron-dhcp-agent
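The `+`-prefixed lines above are the shell trace of kolla's container entrypoint: `kolla_set_configs` copies the files listed in `/var/lib/kolla/config_files/config.json` (strategy `COPY_ALWAYS`), fixes permissions, and writes the service command to `/run_command`; the entrypoint then reads that command, installs CA certs, and `exec`s the service. A minimal sketch of the same flow, rewritten in Python for clarity (paths are the ones visible in the log; the real scripts' error handling and permission logic are omitted):

```python
import os
import subprocess

def kolla_start():
    # Copies config files per config.json and writes /run_command.
    subprocess.run(["sudo", "-E", "kolla_set_configs"], check=True)
    with open("/run_command") as f:
        cmd = f.read().strip()          # e.g. /usr/bin/neutron-dhcp-agent
    subprocess.run(["sudo", "kolla_copy_cacerts"], check=True)
    # The shell version also sources kolla_extend_start here.
    print(f"Running command: '{cmd}'")
    os.umask(0o022)
    os.execvp(cmd.split()[0], cmd.split())  # replace the entrypoint with the service
```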
Dec 05 09:46:06 np0005546420.localdomain sudo[262674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxtuwlnbovsshaxpwrzqeiijglsytert ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927966.564129-1422-76646813223977/AnsiballZ_systemd.py
Dec 05 09:46:06 np0005546420.localdomain sudo[262674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: tmp-crun.cAR9a3.mount: Deactivated successfully.
Dec 05 09:46:07 np0005546420.localdomain podman[262677]: 2025-12-05 09:46:07.126136777 +0000 UTC m=+0.111972060 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 09:46:07 np0005546420.localdomain podman[262677]: 2025-12-05 09:46:07.164460854 +0000 UTC m=+0.150296137 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:46:07 np0005546420.localdomain python3.9[262676]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: Stopping neutron_dhcp_agent container...
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: libpod-d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234.scope: Deactivated successfully.
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: libpod-d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234.scope: Consumed 1.243s CPU time.
Dec 05 09:46:07 np0005546420.localdomain podman[262704]: 2025-12-05 09:46:07.311333634 +0000 UTC m=+0.071760435 container died d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c591c2bebc185e416bb8472481c989371c8197eeabaaf998e8f72387a4f59dcc'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 09:46:07 np0005546420.localdomain podman[262704]: 2025-12-05 09:46:07.376248985 +0000 UTC m=+0.136675766 container cleanup d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c591c2bebc185e416bb8472481c989371c8197eeabaaf998e8f72387a4f59dcc'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, config_id=neutron_dhcp)
Dec 05 09:46:07 np0005546420.localdomain podman[262704]: neutron_dhcp_agent
Dec 05 09:46:07 np0005546420.localdomain podman[262746]: error opening file `/run/crun/d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234/status`: No such file or directory
Dec 05 09:46:07 np0005546420.localdomain podman[262734]: 2025-12-05 09:46:07.453576191 +0000 UTC m=+0.053608882 container cleanup d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c591c2bebc185e416bb8472481c989371c8197eeabaaf998e8f72387a4f59dcc'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=neutron_dhcp, io.buildah.version=1.41.3, container_name=neutron_dhcp_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 09:46:07 np0005546420.localdomain podman[262734]: neutron_dhcp_agent
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully.
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: Stopped neutron_dhcp_agent container.
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: Starting neutron_dhcp_agent container...
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:46:07 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8b78cbcfa71ed83b697c574a6be1b9fddf7acc7cbb76f3912a2f3285d04eb6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:07 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d8b78cbcfa71ed83b697c574a6be1b9fddf7acc7cbb76f3912a2f3285d04eb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 09:46:07 np0005546420.localdomain podman[262749]: 2025-12-05 09:46:07.578730468 +0000 UTC m=+0.096282714 container init d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c591c2bebc185e416bb8472481c989371c8197eeabaaf998e8f72387a4f59dcc'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, config_id=neutron_dhcp)
Dec 05 09:46:07 np0005546420.localdomain podman[262749]: 2025-12-05 09:46:07.585714304 +0000 UTC m=+0.103266540 container start d0a958570f4db98def20b18013d970118da9f6ca6912e96ae4ff260b569ce234 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c591c2bebc185e416bb8472481c989371c8197eeabaaf998e8f72387a4f59dcc'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_dhcp, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:46:07 np0005546420.localdomain podman[262749]: neutron_dhcp_agent
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: + sudo -E kolla_set_configs
Dec 05 09:46:07 np0005546420.localdomain systemd[1]: Started neutron_dhcp_agent container.
Dec 05 09:46:07 np0005546420.localdomain sudo[262674]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Validating config file
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Copying service configuration files
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Writing out command to execute
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: ++ cat /run_command
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: + CMD=/usr/bin/neutron-dhcp-agent
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: + ARGS=
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: + sudo kolla_copy_cacerts
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: + [[ ! -n '' ]]
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: + . kolla_extend_start
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: Running command: '/usr/bin/neutron-dhcp-agent'
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: + umask 0022
Dec 05 09:46:07 np0005546420.localdomain neutron_dhcp_agent[262765]: + exec /usr/bin/neutron-dhcp-agent
Dec 05 09:46:08 np0005546420.localdomain sshd[255966]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:46:08 np0005546420.localdomain systemd[1]: session-57.scope: Deactivated successfully.
Dec 05 09:46:08 np0005546420.localdomain systemd[1]: session-57.scope: Consumed 35.585s CPU time.
Dec 05 09:46:08 np0005546420.localdomain systemd-logind[762]: Session 57 logged out. Waiting for processes to exit.
Dec 05 09:46:08 np0005546420.localdomain systemd-logind[762]: Removed session 57.
Dec 05 09:46:08 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53684 DF PROTO=TCP SPT=53512 DPT=9102 SEQ=1617023140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACCDDDA0000000001030307) 
Dec 05 09:46:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 09:46:09.045 262769 INFO neutron.common.config [-] Logging enabled!
Dec 05 09:46:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 09:46:09.045 262769 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43
Dec 05 09:46:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 09:46:09.448 262769 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Dec 05 09:46:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 09:46:09.658 262769 INFO neutron.agent.dhcp.agent [None req-eb019a1f-adc5-4e59-ba34-4fa8b928c7ea - - - - - -] All active networks have been fetched through RPC.
Dec 05 09:46:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 09:46:09.658 262769 INFO neutron.agent.dhcp.agent [None req-eb019a1f-adc5-4e59-ba34-4fa8b928c7ea - - - - - -] Synchronizing state complete
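The "Synchronizing state" / "Synchronizing state complete" pair brackets the DHCP agent's startup resync: it fetches all active networks from the server over RPC, then reconciles its local dnsmasq processes against that list. A hedged sketch of that reconcile loop (the method names are illustrative, not the exact `neutron.agent.dhcp.agent` internals):

```python
# Illustrative reconcile loop for the resync the log lines above bracket.
def sync_state(agent):
    agent.log.info("Synchronizing state")
    active = agent.plugin_rpc.get_active_networks_info()   # RPC round trip
    agent.log.info("All active networks have been fetched through RPC.")
    known = set(agent.cache.get_network_ids())
    for net in active:
        agent.configure_dhcp_for_network(net)              # spawn/refresh dnsmasq
    for stale_id in known - {net.id for net in active}:
        agent.disable_dhcp_helper(stale_id)                # tear down removed nets
    agent.log.info("Synchronizing state complete")
```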
Dec 05 09:46:09 np0005546420.localdomain sudo[262798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:46:09 np0005546420.localdomain sudo[262798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:46:09 np0005546420.localdomain sudo[262798]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 09:46:09.724 262769 INFO neutron.agent.dhcp.agent [None req-eb019a1f-adc5-4e59-ba34-4fa8b928c7ea - - - - - -] DHCP agent started
Dec 05 09:46:09 np0005546420.localdomain sudo[262816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 09:46:09 np0005546420.localdomain sudo[262816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:46:10 np0005546420.localdomain sudo[262816]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:10 np0005546420.localdomain sudo[262855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:46:10 np0005546420.localdomain sudo[262855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:46:10 np0005546420.localdomain sudo[262855]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:10 np0005546420.localdomain sudo[262873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:46:10 np0005546420.localdomain sudo[262873]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:46:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:46:10.451 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 09:46:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:46:10.453 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 09:46:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:46:10.454 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
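The three ovn_metadata_agent DEBUG lines above show one round of the agent's liveness acknowledgement: an ovsdbapp row event matches the `SB_Global` update (`nb_cfg` 3 → 4), and the agent writes the new sequence number into its `Chassis_Private` record. A sketch of that pattern using the real ovsdbapp event API (the `agent` object, its `sb_idl` connection, and `chassis_id` are assumed wiring, not the exact neutron class):

```python
from ovsdbapp.backend.ovs_idl import event


class SbGlobalUpdateEvent(event.RowEvent):
    """Fires on SB_Global updates and mirrors nb_cfg into Chassis_Private."""

    def __init__(self, agent):
        self.agent = agent  # assumed: exposes sb_idl and this chassis' record id
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

    def run(self, event, row, old):
        # Acknowledge the new northbound sequence number, as in the
        # DbSetCommand logged above.
        self.agent.sb_idl.db_set(
            'Chassis_Private', self.agent.chassis_id,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(row.nb_cfg)}),
        ).execute(check_error=True)
```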
Dec 05 09:46:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:11.036 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:46:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:11.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:46:11 np0005546420.localdomain sudo[262873]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:12.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:46:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:12.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
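The recurring "Running periodic task ComputeManager._..." lines come from oslo.service's periodic task machinery: methods decorated with `@periodic_task.periodic_task` are collected and dispatched on a timer by `run_periodic_tasks`. A minimal, self-contained sketch of the mechanism (the 60 s spacing and the local constant standing in for nova's `reclaim_instance_interval` option are assumptions):

```python
from oslo_config import cfg
from oslo_service import periodic_task

RECLAIM_INSTANCE_INTERVAL = 0  # assumption: stands in for nova's CONF option


class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)  # spacing is an assumption
    def _reclaim_queued_deletes(self, context):
        # Mirrors the "CONF.reclaim_instance_interval <= 0, skipping..."
        # debug line above: the task runs but immediately bails out.
        if RECLAIM_INSTANCE_INTERVAL <= 0:
            return


mgr = Manager()
mgr.run_periodic_tasks(context=None)  # the service loop invokes this on a timer
```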
Dec 05 09:46:12 np0005546420.localdomain sudo[262922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:46:12 np0005546420.localdomain sudo[262922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:46:12 np0005546420.localdomain sudo[262922]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:46:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:46:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
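The long run of "Skip pollster ..., no resources found this cycle" lines is the expected output of an idle compute node: each polling cycle, any pollster whose discovery returns no resources (here, no running VMs to meter) is skipped rather than polled. A sketch of that skip logic (function names are illustrative; `discover` and `publish` are assumed stand-ins for ceilometer's discovery and notifier plumbing):

```python
import logging

LOG = logging.getLogger(__name__)


def poll_and_notify(pollsters, discover, publish):
    for pollster in pollsters:
        resources = discover(pollster)   # e.g. libvirt instances on this node
        if not resources:
            # An idle compute with no VMs lands here for every meter,
            # producing the run of "Skip pollster ..." lines above.
            LOG.debug("Skip pollster %s, no resources found this cycle",
                      pollster.name)
            continue
        publish(pollster.get_samples(resources))
```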
Dec 05 09:46:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:13.042 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:46:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:13.043 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:46:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:14.042 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:46:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:15.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:46:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:15.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:46:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:15.042 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:46:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:46:15 np0005546420.localdomain podman[262940]: 2025-12-05 09:46:15.510481405 +0000 UTC m=+0.079045970 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm)
Dec 05 09:46:15 np0005546420.localdomain podman[262940]: 2025-12-05 09:46:15.519203595 +0000 UTC m=+0.087768100 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:46:15 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:46:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:46:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:46:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:46:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:46:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:46:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16746 "" "Go-http-client/1.1"
Dec 05 09:46:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:46:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:46:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:46:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:46:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:46:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:46:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:46:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:46:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:46:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
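The exporter errors above mean it could not locate the daemon control sockets that appctl-style tools talk to; that is likely benign here, since ovn-northd runs on the control plane rather than on a compute node. Such tools resolve the socket as `<rundir>/<daemon>.<pid>.ctl` via the daemon's pidfile; a sketch of that lookup (the run directory is the OVS default and an assumption):

```python
import os

OVS_RUNDIR = "/var/run/openvswitch"  # assumption: default OVS run directory


def control_socket(daemon):
    """Resolve <rundir>/<daemon>.<pid>.ctl the way appctl-style tools do."""
    with open(os.path.join(OVS_RUNDIR, f"{daemon}.pid")) as f:
        pid = f.read().strip()
    path = os.path.join(OVS_RUNDIR, f"{daemon}.{pid}.ctl")
    if not os.path.exists(path):
        # Matches the failure mode logged above when a daemon is absent.
        raise FileNotFoundError(f"no control socket files found for {daemon}")
    return path
```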
Dec 05 09:46:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:46:19 np0005546420.localdomain podman[262961]: 2025-12-05 09:46:19.491126457 +0000 UTC m=+0.068507524 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 05 09:46:19 np0005546420.localdomain podman[262961]: 2025-12-05 09:46:19.496114812 +0000 UTC m=+0.073495899 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:46:19 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.336 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.337 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.338 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.358 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.358 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.358 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.359 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.359 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:46:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:46:20 np0005546420.localdomain systemd[1]: tmp-crun.ghhyQx.mount: Deactivated successfully.
Dec 05 09:46:20 np0005546420.localdomain podman[262981]: 2025-12-05 09:46:20.507509645 +0000 UTC m=+0.088098871 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:46:20 np0005546420.localdomain podman[262981]: 2025-12-05 09:46:20.51930928 +0000 UTC m=+0.099898576 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:46:20 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.818 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.981 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.982 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12817MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.982 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:46:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:20.983 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:46:21 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:21.064 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:46:21 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:21.064 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:46:21 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:21.077 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:46:21 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:21.508 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:46:21 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:21.514 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:46:21 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:21.531 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:46:21 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:21.534 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:46:21 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:46:21.534 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
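[Editor's note] The Acquiring/acquired/"released" triple above is the resource tracker's periodic audit running under an oslo.concurrency semaphore; the "inner" in each line is the decorator's wrapper function. A minimal sketch of that locking pattern (not nova's actual code; the body is a placeholder):

    from oslo_concurrency import lockutils

    # Serializes resource-view updates; this decorator's inner wrapper
    # emits the "Acquiring lock" / "Lock ... acquired" / "released"
    # DEBUG lines recorded above.
    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # recompute free RAM/vCPUs/disk, then report inventory to placement
        pass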
Dec 05 09:46:21 np0005546420.localdomain sshd[263047]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:46:21 np0005546420.localdomain sshd[263047]: Accepted publickey for zuul from 192.168.122.30 port 38842 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:46:21 np0005546420.localdomain systemd-logind[762]: New session 58 of user zuul.
Dec 05 09:46:22 np0005546420.localdomain systemd[1]: Started Session 58 of User zuul.
Dec 05 09:46:22 np0005546420.localdomain sshd[263047]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:46:22 np0005546420.localdomain python3.9[263158]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:46:23 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56507 DF PROTO=TCP SPT=43144 DPT=9102 SEQ=3980410648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD16680000000001030307) 
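[Editor's note] The recurring "DROPPING:" lines are netfilter LOG-target records for SYN packets to port 9102 that the host firewall rejects. A small sketch for pulling the KEY=VALUE fields out of such a line (bare flag tokens like DF and SYN carry no '=' and are ignored here):

    import re

    FIELD = re.compile(r'(\w+)=(\S*)')

    def parse_drop(line):
        # keep only the text after the "DROPPING: " prefix
        _, _, rest = line.partition('DROPPING: ')
        return dict(FIELD.findall(rest))

    rec = parse_drop('DROPPING: IN=br-ex OUT= SRC=192.168.122.10 '
                     'DST=192.168.122.107 PROTO=TCP SPT=43144 DPT=9102')
    print(rec['SRC'], rec['DPT'])  # 192.168.122.10 9102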
Dec 05 09:46:24 np0005546420.localdomain python3.9[263270]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:46:24 np0005546420.localdomain network[263287]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:46:24 np0005546420.localdomain network[263288]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:46:24 np0005546420.localdomain network[263289]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:46:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56508 DF PROTO=TCP SPT=43144 DPT=9102 SEQ=3980410648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD1A590000000001030307) 
Dec 05 09:46:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53685 DF PROTO=TCP SPT=53512 DPT=9102 SEQ=1617023140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD1DDA0000000001030307) 
Dec 05 09:46:25 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:46:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:46:26 np0005546420.localdomain podman[263360]: 2025-12-05 09:46:26.359345245 +0000 UTC m=+0.092638322 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:46:26 np0005546420.localdomain podman[263360]: 2025-12-05 09:46:26.37534773 +0000 UTC m=+0.108640777 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:46:26 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
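[Editor's note] Each Started / health_status / exec_died / "Deactivated successfully" run above is one timer-driven container health probe. The same probe can be triggered by hand; a sketch (assumes podman is on PATH; the container id is the multipathd one from the log):

    import subprocess

    cid = '128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931'
    # Exit code 0 means the container's configured healthcheck passed.
    r = subprocess.run(['podman', 'healthcheck', 'run', cid])
    print('healthy' if r.returncode == 0 else 'unhealthy')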
Dec 05 09:46:26 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56509 DF PROTO=TCP SPT=43144 DPT=9102 SEQ=3980410648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD22590000000001030307) 
Dec 05 09:46:27 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45094 DF PROTO=TCP SPT=38678 DPT=9102 SEQ=3122669803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD25D90000000001030307) 
Dec 05 09:46:28 np0005546420.localdomain sudo[263539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lslxaythqosirnyjqiqnigmmuddmqvvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927987.7374477-102-210216511508465/AnsiballZ_setup.py
Dec 05 09:46:28 np0005546420.localdomain sudo[263539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:28 np0005546420.localdomain python3.9[263541]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 09:46:28 np0005546420.localdomain sudo[263539]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:29 np0005546420.localdomain sudo[263602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-syrqtitdgtkxitxtncqpnihdelqnpetl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927987.7374477-102-210216511508465/AnsiballZ_dnf.py
Dec 05 09:46:29 np0005546420.localdomain sudo[263602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:29 np0005546420.localdomain python3.9[263604]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
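[Editor's note] The dnf task above installs iscsi-initiator-utils with default options (state=present, weak deps enabled). Outside ansible the equivalent is a single command; a sketch (requires root):

    import subprocess

    subprocess.run(['dnf', 'install', '-y', 'iscsi-initiator-utils'], check=True)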
Dec 05 09:46:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56510 DF PROTO=TCP SPT=43144 DPT=9102 SEQ=3980410648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD32190000000001030307) 
Dec 05 09:46:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:46:31 np0005546420.localdomain podman[263607]: 2025-12-05 09:46:31.661648371 +0000 UTC m=+0.233991550 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 09:46:31 np0005546420.localdomain podman[263607]: 2025-12-05 09:46:31.68194244 +0000 UTC m=+0.254285599 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, config_id=edpm, architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 09:46:31 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:46:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:46:32 np0005546420.localdomain sudo[263602]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:32 np0005546420.localdomain podman[263629]: 2025-12-05 09:46:32.51605947 +0000 UTC m=+0.090885206 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:46:32 np0005546420.localdomain podman[263629]: 2025-12-05 09:46:32.523695327 +0000 UTC m=+0.098520993 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:46:32 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:46:33 np0005546420.localdomain sudo[263759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zngllhgmxnpfwjctmqpzsyloobvldabi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927992.668908-138-89080869101085/AnsiballZ_stat.py
Dec 05 09:46:33 np0005546420.localdomain sudo[263759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:33 np0005546420.localdomain python3.9[263761]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:46:33 np0005546420.localdomain sudo[263759]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:33 np0005546420.localdomain sudo[263869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gusraygfxfrpenvtskjaivifhrtuught ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927993.5540466-168-32888004594455/AnsiballZ_command.py
Dec 05 09:46:33 np0005546420.localdomain sudo[263869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:34 np0005546420.localdomain python3.9[263871]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:34 np0005546420.localdomain sudo[263869]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:34 np0005546420.localdomain sudo[263980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ffqbkmvnanrkkeypagjrwbnhvipnzkbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927994.5305-198-246604983450439/AnsiballZ_stat.py
Dec 05 09:46:34 np0005546420.localdomain sudo[263980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:34 np0005546420.localdomain python3.9[263982]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:46:35 np0005546420.localdomain sudo[263980]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:35 np0005546420.localdomain sudo[264092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bosmlknounbaldwucyvdtjrrhhhuibsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927995.352763-231-275006945218278/AnsiballZ_lineinfile.py
Dec 05 09:46:35 np0005546420.localdomain sudo[264092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:36 np0005546420.localdomain python3.9[264094]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:36 np0005546420.localdomain sudo[264092]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:36 np0005546420.localdomain sudo[264202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-akbkcdatflnnifzrbfilhcsfubeezgxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927996.3495963-258-257288470545788/AnsiballZ_systemd_service.py
Dec 05 09:46:36 np0005546420.localdomain sudo[264202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:37 np0005546420.localdomain python3.9[264204]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:46:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:46:37 np0005546420.localdomain sudo[264202]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:37 np0005546420.localdomain podman[264206]: 2025-12-05 09:46:37.412317807 +0000 UTC m=+0.081652131 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 09:46:37 np0005546420.localdomain podman[264206]: 2025-12-05 09:46:37.44565984 +0000 UTC m=+0.114994114 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 09:46:37 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:46:37 np0005546420.localdomain sudo[264339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sblserzfrjjeyijxjrtguufvhtmiqfin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927997.5564003-282-38925331748217/AnsiballZ_systemd_service.py
Dec 05 09:46:37 np0005546420.localdomain sudo[264339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:38 np0005546420.localdomain python3.9[264341]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:46:38 np0005546420.localdomain sudo[264339]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:38 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56511 DF PROTO=TCP SPT=43144 DPT=9102 SEQ=3980410648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD51DA0000000001030307) 
Dec 05 09:46:38 np0005546420.localdomain sudo[264451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhzqjvcbbeznapkwegpfokasqybetqdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764927998.682743-315-190415075884029/AnsiballZ_service_facts.py
Dec 05 09:46:38 np0005546420.localdomain sudo[264451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:39 np0005546420.localdomain python3.9[264453]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:46:39 np0005546420.localdomain network[264470]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:46:39 np0005546420.localdomain network[264471]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:46:39 np0005546420.localdomain network[264472]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 09:46:40 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:46:43 np0005546420.localdomain sudo[264451]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:45 np0005546420.localdomain sudo[264704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-garnsfdafsbcirdrardzqwpkqlogcfyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928005.4264343-345-260488061313504/AnsiballZ_file.py
Dec 05 09:46:45 np0005546420.localdomain sudo[264704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:46:45 np0005546420.localdomain podman[264707]: 2025-12-05 09:46:45.902567106 +0000 UTC m=+0.064283862 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 05 09:46:45 np0005546420.localdomain podman[264707]: 2025-12-05 09:46:45.934688621 +0000 UTC m=+0.096405357 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:46:45 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:46:46 np0005546420.localdomain python3.9[264706]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 09:46:46 np0005546420.localdomain sudo[264704]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:46 np0005546420.localdomain sudo[264833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtvehaueizpxljkqggfoeeufwbmwcagp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928006.2302082-369-260369075440176/AnsiballZ_modprobe.py
Dec 05 09:46:46 np0005546420.localdomain sudo[264833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:46 np0005546420.localdomain python3.9[264835]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 05 09:46:46 np0005546420.localdomain sudo[264833]: pam_unix(sudo:session): session closed for user root
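[Editor's note] The modprobe task above loads dm-multipath with persistent=disabled, so the load lasts only until reboot; the /etc/modules-load.d/dm-multipath.conf file handled just below is what makes it persistent. A sketch that verifies the load by scanning /proc/modules (the kernel lists the module with underscores):

    def module_loaded(name: str) -> bool:
        with open('/proc/modules') as f:
            return any(line.split()[0] == name.replace('-', '_') for line in f)

    print(module_loaded('dm-multipath'))  # True once the modprobe succeeded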
Dec 05 09:46:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:46:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:46:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:46:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:46:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:46:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16744 "" "Go-http-client/1.1"
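[Editor's note] The two GET lines above are the podman_exporter scraping the libpod REST API through the podman socket (the path comes from its CONTAINER_HOST setting, unix:///run/podman/podman.sock). A stdlib-only sketch of the same containers/json query; the UHTTPConnection helper is ad hoc, not a podman client library:

    import http.client, json, socket

    class UHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over a Unix domain socket."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')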
Dec 05 09:46:47 np0005546420.localdomain sudo[264944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yawliljgnncazgutrzketscjmaveopwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928007.0624063-393-4697460710512/AnsiballZ_stat.py
Dec 05 09:46:47 np0005546420.localdomain sudo[264944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:47 np0005546420.localdomain python3.9[264946]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:46:47 np0005546420.localdomain sudo[264944]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:47 np0005546420.localdomain sudo[265001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-olbwipoehiyskiegjzwelctpwgbkdfpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928007.0624063-393-4697460710512/AnsiballZ_file.py
Dec 05 09:46:47 np0005546420.localdomain sudo[265001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:47 np0005546420.localdomain python3.9[265003]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:48 np0005546420.localdomain sudo[265001]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:48 np0005546420.localdomain sudo[265111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ppkzjumbddemtomgvpykwxuhgciqzxyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928008.3199425-432-76936321932206/AnsiballZ_lineinfile.py
Dec 05 09:46:48 np0005546420.localdomain sudo[265111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:48 np0005546420.localdomain python3.9[265113]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:48 np0005546420.localdomain sudo[265111]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:46:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:46:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:46:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:46:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:46:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:46:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:46:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:46:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:46:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:46:49 np0005546420.localdomain sudo[265221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cujvxuixuqwqppmlhpieibufrusrqrrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928009.072148-459-153808867237764/AnsiballZ_file.py
Dec 05 09:46:49 np0005546420.localdomain sudo[265221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:49 np0005546420.localdomain python3.9[265223]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:46:49 np0005546420.localdomain sudo[265221]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:50 np0005546420.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:ed:1b:d3 MACPROTO=0800 SRC=167.94.138.159 DST=38.102.83.241 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=59631 PROTO=TCP SPT=37325 DPT=9090 SEQ=4230835102 ACK=0 WINDOW=42340 RES=0x00 SYN URGP=0 OPT (020405B40402080A68F688DE000000000103030A) 
Dec 05 09:46:50 np0005546420.localdomain sudo[265331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxifdbxeoirkmmdilasamrzpdajpgblw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928009.871992-486-271176721731627/AnsiballZ_stat.py
Dec 05 09:46:50 np0005546420.localdomain sudo[265331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:46:50 np0005546420.localdomain systemd[1]: tmp-crun.xrDrMH.mount: Deactivated successfully.
Dec 05 09:46:50 np0005546420.localdomain podman[265334]: 2025-12-05 09:46:50.231215828 +0000 UTC m=+0.085151649 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 09:46:50 np0005546420.localdomain podman[265334]: 2025-12-05 09:46:50.265405967 +0000 UTC m=+0.119341798 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 09:46:50 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:46:50 np0005546420.localdomain python3.9[265333]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:46:50 np0005546420.localdomain sudo[265331]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:50 np0005546420.localdomain sudo[265461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-khjhjznknycgluhbnpwxusbpblgrurwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928010.6250482-513-82967116932370/AnsiballZ_stat.py
Dec 05 09:46:50 np0005546420.localdomain sudo[265461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:46:51 np0005546420.localdomain systemd[1]: tmp-crun.tC1DL4.mount: Deactivated successfully.
Dec 05 09:46:51 np0005546420.localdomain podman[265464]: 2025-12-05 09:46:51.007643482 +0000 UTC m=+0.088829274 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:46:51 np0005546420.localdomain podman[265464]: 2025-12-05 09:46:51.041666346 +0000 UTC m=+0.122852138 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:46:51 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:46:51 np0005546420.localdomain python3.9[265463]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:46:51 np0005546420.localdomain sudo[265461]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:51 np0005546420.localdomain sudo[265596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gyzvuglsqjfawihfjebwxwlcunghorno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928011.379214-540-134444710333719/AnsiballZ_command.py
Dec 05 09:46:51 np0005546420.localdomain sudo[265596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:51 np0005546420.localdomain python3.9[265598]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:46:51 np0005546420.localdomain sudo[265596]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:52 np0005546420.localdomain sudo[265707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aghbxlgvhygumhuaqfelowsuzntsfhtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928012.2110105-570-206742207381792/AnsiballZ_replace.py
Dec 05 09:46:52 np0005546420.localdomain sudo[265707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:52 np0005546420.localdomain python3.9[265709]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:52 np0005546420.localdomain sudo[265707]: pam_unix(sudo:session): session closed for user root
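[Editor's note] The replace task above strips the catch-all devnode ".*" entry from the blacklist section of /etc/multipath.conf, so devices are no longer blacklisted wholesale. The same edit as a sketch, using the task's own regexp:

    import re

    conf = 'blacklist {\n        devnode ".*"\n}\n'
    new = re.sub(r'^blacklist\s*{\n\s+devnode "\.\*"', 'blacklist {',
                 conf, flags=re.M)
    print(new)  # 'blacklist {\n}\n' -- the devnode line is gone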
Dec 05 09:46:53 np0005546420.localdomain sudo[265817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwqpnsxwuxynmwxazmecltolpcnvybdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928013.0734708-597-3595686931195/AnsiballZ_lineinfile.py
Dec 05 09:46:53 np0005546420.localdomain sudo[265817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:53 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42758 DF PROTO=TCP SPT=59894 DPT=9102 SEQ=1706899342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD8B990000000001030307) 
Dec 05 09:46:53 np0005546420.localdomain python3.9[265819]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:53 np0005546420.localdomain sudo[265817]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:53 np0005546420.localdomain sudo[265927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cntvopboesgsufitcmpfmolvctkciwgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928013.6612096-597-122176008426645/AnsiballZ_lineinfile.py
Dec 05 09:46:53 np0005546420.localdomain sudo[265927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:54 np0005546420.localdomain python3.9[265929]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:54 np0005546420.localdomain sudo[265927]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42759 DF PROTO=TCP SPT=59894 DPT=9102 SEQ=1706899342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD8F990000000001030307) 
Dec 05 09:46:54 np0005546420.localdomain sudo[266037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofngpqsdbdicmxdoptduwspxakikymph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928014.2556036-597-109946540071267/AnsiballZ_lineinfile.py
Dec 05 09:46:54 np0005546420.localdomain sudo[266037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:54 np0005546420.localdomain python3.9[266039]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:54 np0005546420.localdomain sudo[266037]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56512 DF PROTO=TCP SPT=43144 DPT=9102 SEQ=3980410648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD91DE0000000001030307) 
Dec 05 09:46:55 np0005546420.localdomain sudo[266147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xueogsbfhkjmhdprntbmzvlcskxrdajq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928014.8655388-597-45940390942626/AnsiballZ_lineinfile.py
Dec 05 09:46:55 np0005546420.localdomain sudo[266147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:55 np0005546420.localdomain python3.9[266149]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:55 np0005546420.localdomain sudo[266147]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:55 np0005546420.localdomain sudo[266257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xgednjemsdzuoifdntpspukmsjwelasf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928015.5662808-684-12781903980844/AnsiballZ_stat.py
Dec 05 09:46:55 np0005546420.localdomain sudo[266257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:56 np0005546420.localdomain python3.9[266259]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:46:56 np0005546420.localdomain sudo[266257]: pam_unix(sudo:session): session closed for user root
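The run above (a shell grep for a blacklist section, one replace that collapses the catch-all devnode blacklist, four lineinfile edits under the defaults section, and a final stat readback) is the usual shape of an EDPM multipath configuration role. A minimal YAML reconstruction of those tasks; the task names and the loop consolidation are assumptions (the journal shows four separate lineinfile invocations), while the module parameters are copied from the entries above:

    # Sketch reconstructed from the journal; names and the loop form are assumed.
    - name: Check whether multipath.conf already has a blacklist section
      ansible.builtin.shell: grep -q '^blacklist\s*{' /etc/multipath.conf   # logged as ansible.legacy.command with _uses_shell=True

    - name: Drop the catch-all devnode blacklist
      ansible.builtin.replace:
        path: /etc/multipath.conf
        regexp: '^blacklist\s*{\n[\s]+devnode "\.\*"'
        replace: 'blacklist {'

    - name: Pin defaults-section settings
      ansible.builtin.lineinfile:
        path: /etc/multipath.conf
        insertafter: '^defaults'
        firstmatch: true
        regexp: '^\s+{{ item.key }}'
        line: "        {{ item.key }} {{ item.value }}"
      loop:
        - { key: find_multipaths, value: 'yes' }
        - { key: recheck_wwid, value: 'yes' }
        - { key: skip_kpartx, value: 'yes' }
        - { key: user_friendly_names, value: 'no' }

The stat call at 09:46:56 is presumably the role reading the result back before deciding whether the multipathd container needs a restart.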
Dec 05 09:46:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:46:56 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42760 DF PROTO=TCP SPT=59894 DPT=9102 SEQ=1706899342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD979A0000000001030307) 
Dec 05 09:46:56 np0005546420.localdomain podman[266290]: 2025-12-05 09:46:56.543755608 +0000 UTC m=+0.112970123 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:46:56 np0005546420.localdomain podman[266290]: 2025-12-05 09:46:56.555362724 +0000 UTC m=+0.124577179 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:46:56 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:46:56 np0005546420.localdomain sudo[266386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dfstpcavkwkgtettdnurzfhfemwxjcyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928016.4421022-714-219801840086372/AnsiballZ_file.py
Dec 05 09:46:56 np0005546420.localdomain sudo[266386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:56 np0005546420.localdomain python3.9[266388]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:46:56 np0005546420.localdomain sudo[266386]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:57 np0005546420.localdomain sudo[266496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-agtcksnzuphgnyfutuhmaetqrlsroolp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928017.109119-738-202215805974905/AnsiballZ_stat.py
Dec 05 09:46:57 np0005546420.localdomain sudo[266496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:57 np0005546420.localdomain python3.9[266498]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:46:57 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53686 DF PROTO=TCP SPT=53512 DPT=9102 SEQ=1617023140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACD9BDA0000000001030307) 
Dec 05 09:46:57 np0005546420.localdomain sudo[266496]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:57 np0005546420.localdomain sudo[266553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hwjgtzwcdncehrjajldwtzahpyimwpbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928017.109119-738-202215805974905/AnsiballZ_file.py
Dec 05 09:46:57 np0005546420.localdomain sudo[266553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:57 np0005546420.localdomain python3.9[266555]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:46:57 np0005546420.localdomain sudo[266553]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:58 np0005546420.localdomain sudo[266663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wylewqohqgplawqkveluxofwxkuvxfit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928018.1228576-738-254444372040604/AnsiballZ_stat.py
Dec 05 09:46:58 np0005546420.localdomain sudo[266663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:58 np0005546420.localdomain python3.9[266665]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:46:58 np0005546420.localdomain sudo[266663]: pam_unix(sudo:session): session closed for user root
Dec 05 09:46:58 np0005546420.localdomain sudo[266720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-quaufvrvclxhdlqkakrdztahvbcmuyvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928018.1228576-738-254444372040604/AnsiballZ_file.py
Dec 05 09:46:58 np0005546420.localdomain sudo[266720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:59 np0005546420.localdomain python3.9[266722]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:46:59 np0005546420.localdomain sudo[266720]: pam_unix(sudo:session): session closed for user root
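Each ansible.legacy.stat followed by ansible.legacy.file against the same dest is the bookkeeping tail of a copy (or template) task: the stat examines the existing file, the file call enforces ownership, mode, and SELinux type. A plausible reconstruction of the directory task and the two script deployments above, with the src locations being assumptions:

    - name: Ensure the EDPM libexec directory is labelled for containers
      ansible.builtin.file:
        path: /var/local/libexec
        state: directory
        recurse: true
        setype: container_file_t

    - name: Install EDPM container helper scripts
      ansible.builtin.copy:
        src: "{{ item }}"                     # source location is an assumption
        dest: "/var/local/libexec/{{ item }}"
        owner: root
        group: root
        mode: "0700"
        setype: container_file_t
      loop:
        - edpm-container-shutdown
        - edpm-start-podman-container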
Dec 05 09:46:59 np0005546420.localdomain sudo[266830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-osaurbrdykvvsbritgavofytwjqtsoox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928019.274273-808-154371567555718/AnsiballZ_file.py
Dec 05 09:46:59 np0005546420.localdomain sudo[266830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:46:59 np0005546420.localdomain python3.9[266832]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:46:59 np0005546420.localdomain sudo[266830]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:00 np0005546420.localdomain sudo[266940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktfpxylbvliafqhpoutkamxgdcxmffuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928019.9482517-831-105398576320965/AnsiballZ_stat.py
Dec 05 09:47:00 np0005546420.localdomain sudo[266940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:00 np0005546420.localdomain python3.9[266942]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:47:00 np0005546420.localdomain sudo[266940]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42761 DF PROTO=TCP SPT=59894 DPT=9102 SEQ=1706899342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACDA7590000000001030307) 
Dec 05 09:47:00 np0005546420.localdomain sudo[266997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nidzakfqtgnuwfksjzcxdnyetkajmywr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928019.9482517-831-105398576320965/AnsiballZ_file.py
Dec 05 09:47:00 np0005546420.localdomain sudo[266997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:00 np0005546420.localdomain python3.9[266999]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:00 np0005546420.localdomain sudo[266997]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:01 np0005546420.localdomain sudo[267107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivjmklpespaalhdtrenxgrcehfnbjyer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928021.0785465-867-104650715176471/AnsiballZ_stat.py
Dec 05 09:47:01 np0005546420.localdomain sudo[267107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:01 np0005546420.localdomain python3.9[267109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:47:01 np0005546420.localdomain sudo[267107]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:01 np0005546420.localdomain sudo[267164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avrkqgzsqzdyigkxtztnevbvmvzqekrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928021.0785465-867-104650715176471/AnsiballZ_file.py
Dec 05 09:47:01 np0005546420.localdomain sudo[267164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:47:01 np0005546420.localdomain podman[267167]: 2025-12-05 09:47:01.937775526 +0000 UTC m=+0.083018006 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec 05 09:47:01 np0005546420.localdomain podman[267167]: 2025-12-05 09:47:01.951537268 +0000 UTC m=+0.096779758 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:47:01 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:47:02 np0005546420.localdomain python3.9[267166]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:02 np0005546420.localdomain sudo[267164]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:02 np0005546420.localdomain sudo[267294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qvhdodunybfjyfblpxdyadzllnbvaesb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928022.209087-903-162403948874480/AnsiballZ_systemd.py
Dec 05 09:47:02 np0005546420.localdomain sudo[267294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:47:02 np0005546420.localdomain podman[267297]: 2025-12-05 09:47:02.85987635 +0000 UTC m=+0.081831459 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:47:02 np0005546420.localdomain podman[267297]: 2025-12-05 09:47:02.870259458 +0000 UTC m=+0.092214597 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:47:02 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:47:03 np0005546420.localdomain python3.9[267296]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:03 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:47:03 np0005546420.localdomain systemd-sysv-generator[267344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:47:03 np0005546420.localdomain systemd-rc-local-generator[267340]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:47:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:47:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:03 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:03 np0005546420.localdomain sudo[267294]: pam_unix(sudo:session): session closed for user root
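The unit file, the preset under /etc/systemd/system-preset, and the systemd module call with daemon_reload=True account for the "Reloading." line and the warnings that follow: systemd re-emits the sysv-generator, rc.local, virt*d notify-reload, and MemoryLimit= complaints on every daemon reload, so they are pre-existing unit issues rather than errors introduced by this change. Reconstructed in task form, with src names taken from the logged _original_basename values:

    - name: Install the edpm-container-shutdown unit and preset
      ansible.builtin.copy:
        src: "{{ item.src }}"                 # assumed to mirror _original_basename
        dest: "{{ item.dest }}"
        owner: root
        group: root
        mode: "0644"
      loop:
        - src: edpm-container-shutdown-service
          dest: /etc/systemd/system/edpm-container-shutdown.service
        - src: 91-edpm-container-shutdown-preset
          dest: /etc/systemd/system-preset/91-edpm-container-shutdown.preset

    - name: Enable and start edpm-container-shutdown
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        state: started
        enabled: true
        daemon_reload: true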
Dec 05 09:47:03 np0005546420.localdomain sudo[267463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qgzgbpfynddigrdoahjgcmwmhrsznhqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928023.6625597-927-280614525116454/AnsiballZ_stat.py
Dec 05 09:47:03 np0005546420.localdomain sudo[267463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:47:04.107 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:47:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:47:04.109 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:47:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:47:04.109 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:47:04 np0005546420.localdomain python3.9[267465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:47:04 np0005546420.localdomain sudo[267463]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:04 np0005546420.localdomain sudo[267520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jxktvtwyocvxladidhhhkizxfgwygtyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928023.6625597-927-280614525116454/AnsiballZ_file.py
Dec 05 09:47:04 np0005546420.localdomain sudo[267520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:04 np0005546420.localdomain python3.9[267522]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:04 np0005546420.localdomain sudo[267520]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:05 np0005546420.localdomain sudo[267630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lgorjhylcrrbhiqqsqkpdlpjdopeplty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928024.7633593-963-44424361384726/AnsiballZ_stat.py
Dec 05 09:47:05 np0005546420.localdomain sudo[267630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:05 np0005546420.localdomain python3.9[267632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:47:05 np0005546420.localdomain sudo[267630]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:05 np0005546420.localdomain sudo[267687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmmkmwlwfhrrhyqlradweptlwkewkdln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928024.7633593-963-44424361384726/AnsiballZ_file.py
Dec 05 09:47:05 np0005546420.localdomain sudo[267687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:05 np0005546420.localdomain python3.9[267689]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:05 np0005546420.localdomain sudo[267687]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:06 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:06.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:06 np0005546420.localdomain sudo[267797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pyqkxxnymagrnswpgrcadpyomavjxnra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928025.863866-999-230161690974481/AnsiballZ_systemd.py
Dec 05 09:47:06 np0005546420.localdomain sudo[267797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:06 np0005546420.localdomain python3.9[267799]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:47:06 np0005546420.localdomain systemd-rc-local-generator[267823]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:47:06 np0005546420.localdomain systemd-sysv-generator[267827]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: Starting Create netns directory...
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 09:47:06 np0005546420.localdomain systemd[1]: Finished Create netns directory.
Dec 05 09:47:06 np0005546420.localdomain sudo[267797]: pam_unix(sudo:session): session closed for user root
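The same pattern repeats for netns-placeholder. Because the unit is a oneshot ("Starting Create netns directory..." is followed immediately by "Finished" and "Deactivated successfully"), state=started runs it once and returns. A sketch under the same assumptions as the previous one:

    - name: Install the netns-placeholder unit and preset
      ansible.builtin.copy:
        src: "{{ item.src }}"                 # assumed source names
        dest: "{{ item.dest }}"
        owner: root
        group: root
        mode: "0644"
      loop:
        - src: netns-placeholder-service
          dest: /etc/systemd/system/netns-placeholder.service
        - src: 91-netns-placeholder-preset
          dest: /etc/systemd/system-preset/91-netns-placeholder.preset

    - name: Enable and run netns-placeholder
      ansible.builtin.systemd:
        name: netns-placeholder
        state: started
        enabled: true
        daemon_reload: true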
Dec 05 09:47:07 np0005546420.localdomain sudo[267949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdtrxorzcgbkbaoqtqgcabeqiaagllmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928027.3189518-1029-141958918656555/AnsiballZ_file.py
Dec 05 09:47:07 np0005546420.localdomain sudo[267949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:47:07 np0005546420.localdomain podman[267952]: 2025-12-05 09:47:07.706394225 +0000 UTC m=+0.088446682 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 09:47:07 np0005546420.localdomain podman[267952]: 2025-12-05 09:47:07.800291944 +0000 UTC m=+0.182344401 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 09:47:07 np0005546420.localdomain python3.9[267951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:47:07 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:47:07 np0005546420.localdomain sudo[267949]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:08 np0005546420.localdomain sudo[268086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-munzwbfqdndtccfhnbwayvqjlrabmzcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928028.020583-1053-142727797099638/AnsiballZ_stat.py
Dec 05 09:47:08 np0005546420.localdomain sudo[268086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:08 np0005546420.localdomain python3.9[268088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:47:08 np0005546420.localdomain sudo[268086]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:08 np0005546420.localdomain sudo[268143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oujrgmvibjywldiuiolxquxhyanflkqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928028.020583-1053-142727797099638/AnsiballZ_file.py
Dec 05 09:47:08 np0005546420.localdomain sudo[268143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:08 np0005546420.localdomain python3.9[268145]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:47:08 np0005546420.localdomain sudo[268143]: pam_unix(sudo:session): session closed for user root
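The healthcheck scripts that the recurring podman health_status events execute are staged by tasks like these: one directory task for /var/lib/openstack/healthchecks plus a copy per service. Reconstructed for multipathd, with the src name taken from the logged _original_basename and everything else from the logged parameters:

    - name: Create the healthchecks directory
      ansible.builtin.file:
        path: /var/lib/openstack/healthchecks
        state: directory
        owner: zuul
        group: zuul
        mode: "0755"
        setype: container_file_t

    - name: Install the multipathd healthcheck script
      ansible.builtin.copy:
        src: healthcheck                      # assumed per-service source file
        dest: /var/lib/openstack/healthchecks/multipathd/
        owner: zuul
        group: zuul
        mode: "0700"
        setype: container_file_t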
Dec 05 09:47:08 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42762 DF PROTO=TCP SPT=59894 DPT=9102 SEQ=1706899342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACDC7D90000000001030307) 
Dec 05 09:47:09 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:09.052 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:09 np0005546420.localdomain sudo[268253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cwnnxguyyugarzeajdnaebweqzawfulf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928029.4560733-1095-43951094839417/AnsiballZ_file.py
Dec 05 09:47:09 np0005546420.localdomain sudo[268253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:09 np0005546420.localdomain python3.9[268255]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:47:09 np0005546420.localdomain sudo[268253]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:10.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:10.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 09:47:10 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:10.058 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 09:47:10 np0005546420.localdomain sudo[268363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nskavqnklrxinvoyhmkzcscdunelckfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928030.1449602-1119-243169070864427/AnsiballZ_stat.py
Dec 05 09:47:10 np0005546420.localdomain sudo[268363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:10 np0005546420.localdomain python3.9[268365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:47:10 np0005546420.localdomain sudo[268363]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:10 np0005546420.localdomain sudo[268420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lylenpguhlmgewgtyljjqexjnfsbmtof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928030.1449602-1119-243169070864427/AnsiballZ_file.py
Dec 05 09:47:10 np0005546420.localdomain sudo[268420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:11 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:11.054 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:11 np0005546420.localdomain python3.9[268422]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.1a5d5uj3 recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:11 np0005546420.localdomain sudo[268420]: pam_unix(sudo:session): session closed for user root
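The _original_basename of .1a5d5uj3 on the multipathd.json task is the temporary-file name Ansible generates when a copy task is given inline content rather than a src file, which suggests the JSON was rendered in the play. A sketch under that assumption; the variable name is hypothetical:

    - name: Relabel the kolla config_files tree
      ansible.builtin.file:
        path: /var/lib/kolla/config_files
        state: directory
        recurse: true
        setype: container_file_t

    - name: Write the multipathd kolla config
      ansible.builtin.copy:
        content: "{{ multipathd_kolla_config | to_nice_json }}"   # hypothetical variable
        dest: /var/lib/kolla/config_files/multipathd.json
        mode: "0600"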
Dec 05 09:47:11 np0005546420.localdomain sudo[268530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmwnqopdfjvmyrexbtpyeclpartezuyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928031.6297445-1155-261504316265077/AnsiballZ_file.py
Dec 05 09:47:11 np0005546420.localdomain sudo[268530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:12.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:12 np0005546420.localdomain python3.9[268532]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:12 np0005546420.localdomain sudo[268530]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:12 np0005546420.localdomain sudo[268533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:47:12 np0005546420.localdomain sudo[268533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:47:12 np0005546420.localdomain sudo[268533]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:12 np0005546420.localdomain sudo[268568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:47:12 np0005546420.localdomain sudo[268568]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:47:12 np0005546420.localdomain sudo[268676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdvkqujhkxxvycmjpgbbsiqcnrkqbjkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928032.3709676-1179-145281145677933/AnsiballZ_stat.py
Dec 05 09:47:12 np0005546420.localdomain sudo[268676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:12 np0005546420.localdomain sudo[268676]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:13 np0005546420.localdomain sudo[268760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vepjhawmgurzeaiodfayrujutidyxwqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928032.3709676-1179-145281145677933/AnsiballZ_file.py
Dec 05 09:47:13 np0005546420.localdomain sudo[268760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:13.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:13 np0005546420.localdomain sudo[268568]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:13 np0005546420.localdomain sudo[268760]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:13 np0005546420.localdomain sudo[268784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:47:13 np0005546420.localdomain sudo[268784]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:47:13 np0005546420.localdomain sudo[268784]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:14.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:14 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:14.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:47:14 np0005546420.localdomain sudo[268892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uuxbkpiozengsrsafflxhmdukpyfsech ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928033.6304126-1221-92319776403687/AnsiballZ_container_config_data.py
Dec 05 09:47:14 np0005546420.localdomain sudo[268892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:14 np0005546420.localdomain python3.9[268894]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 05 09:47:14 np0005546420.localdomain sudo[268892]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:14 np0005546420.localdomain sudo[269002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pylkbqrgcglsmqugqczrhsbxomwojzdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928034.5361214-1248-66435515146271/AnsiballZ_container_config_hash.py
Dec 05 09:47:14 np0005546420.localdomain sudo[269002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:15.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:15.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:15.042 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 09:47:15 np0005546420.localdomain python3.9[269004]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:47:15 np0005546420.localdomain sudo[269002]: pam_unix(sudo:session): session closed for user root
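container_config_data and container_config_hash are custom modules shipped with edpm-ansible rather than ansible-core builtins: the first collects the per-container startup JSON, the second hashes the config volumes under /var/lib/config-data (presumably so changed configs can trigger container restarts). Their invocations, copied from the journal into task form with assumed task names:

    - name: Load container startup configs for multipathd
      container_config_data:
        config_path: /var/lib/edpm-config/container-startup-config/multipathd
        config_pattern: "*.json"
        config_overrides: {}
        debug: false

    - name: Hash the config-data volumes
      container_config_hash:
        check_mode: false
        config_vol_prefix: /var/lib/config-data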
Dec 05 09:47:15 np0005546420.localdomain sudo[269112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxamppjrkwcorptsgqshdejjjnrdjxnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928035.5562832-1275-249751416961726/AnsiballZ_podman_container_info.py
Dec 05 09:47:15 np0005546420.localdomain sudo[269112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:16 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:16.060 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:16 np0005546420.localdomain python3.9[269114]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 09:47:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:47:16 np0005546420.localdomain podman[269142]: 2025-12-05 09:47:16.527291833 +0000 UTC m=+0.105093762 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute)
Dec 05 09:47:16 np0005546420.localdomain podman[269142]: 2025-12-05 09:47:16.568413433 +0000 UTC m=+0.146215322 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:47:16 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:47:16 np0005546420.localdomain sudo[269112]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.055 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.056 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.057 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.077 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.078 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.078 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.078 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.079 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:47:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:47:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:47:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:47:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:47:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:47:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16744 "" "Go-http-client/1.1"
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.547 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.740 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.741 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12789MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.741 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.742 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.827 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.828 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.875 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.918 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.918 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.943 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.972 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 09:47:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:17.993 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:47:18 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:18.424 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:47:18 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:18.431 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:47:18 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:18.448 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:47:18 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:18.450 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:47:18 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:18.451 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:47:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:47:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:47:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:47:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:47:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:47:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:47:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:47:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:47:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:47:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:47:20 np0005546420.localdomain sudo[269312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wghzklclhgggbpnczcubcupxdhtxtyyr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764928039.902249-1314-210581631043402/AnsiballZ_edpm_container_manage.py
Dec 05 09:47:20 np0005546420.localdomain sudo[269312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:47:20 np0005546420.localdomain podman[269315]: 2025-12-05 09:47:20.409998306 +0000 UTC m=+0.065530350 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 09:47:20 np0005546420.localdomain podman[269315]: 2025-12-05 09:47:20.439935944 +0000 UTC m=+0.095467968 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:47:20 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:47:20 np0005546420.localdomain python3[269314]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:47:20 np0005546420.localdomain python3[269314]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7",
                                                                    "Digest": "sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:11:02.031267563Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 249482216,
                                                                    "VirtualSize": 249482216,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:8c448567789503f6c5be645a12473dfc27734872532d528b6ee764c214f9f2f3"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:24.212273596Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:11:01.523582443Z",
                                                                              "created_by": "/bin/sh -c dnf -y install device-mapper-multipath iscsi-initiator-utils && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:11:03.162365736Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 05 09:47:20 np0005546420.localdomain sudo[269312]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:47:21 np0005546420.localdomain sudo[269502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iizfrfpeanukyfeoplyjwxrnamgncaeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928041.1756005-1338-274264675653674/AnsiballZ_stat.py
Dec 05 09:47:21 np0005546420.localdomain sudo[269502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:21 np0005546420.localdomain systemd[1]: tmp-crun.tk2QaY.mount: Deactivated successfully.
Dec 05 09:47:21 np0005546420.localdomain podman[269504]: 2025-12-05 09:47:21.506737023 +0000 UTC m=+0.080079965 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:47:21 np0005546420.localdomain podman[269504]: 2025-12-05 09:47:21.546878024 +0000 UTC m=+0.120220946 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:47:21 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
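Each healthcheck cycle above follows the same shape: systemd starts a transient "podman healthcheck run <id>" unit, podman emits a health_status event and an exec_died event for the probe process, and the transient unit deactivates. A sketch of driving one cycle by hand and reading the stored result; the inspect field is .State.Health.Status on recent podman, while older releases exposed it as .State.Healthcheck.Status:

    import subprocess

    cid = "cea1ff364cc5"  # container ID prefix from the log
    subprocess.run(["podman", "healthcheck", "run", cid], check=True)
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", cid],
        check=True, capture_output=True, text=True).stdout.strip()
    print(status)  # "healthy" matches health_status= in the events above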
Dec 05 09:47:21 np0005546420.localdomain python3.9[269510]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:47:21 np0005546420.localdomain sudo[269502]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:22 np0005546420.localdomain sudo[269636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-quqohhhywxtavumsmaqlfuevwcnorxqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928041.9195602-1365-36281508621665/AnsiballZ_file.py
Dec 05 09:47:22 np0005546420.localdomain sudo[269636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:22 np0005546420.localdomain python3.9[269638]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:22 np0005546420.localdomain sudo[269636]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:22 np0005546420.localdomain sudo[269691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cjqpvevbjkjvfqkkeocrmwyptpbnugfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928041.9195602-1365-36281508621665/AnsiballZ_stat.py
Dec 05 09:47:22 np0005546420.localdomain sudo[269691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:22 np0005546420.localdomain python3.9[269693]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:47:22 np0005546420.localdomain sudo[269691]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:23 np0005546420.localdomain sudo[269800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-absrufycgrefksjbbybnyjjvkkbxnnus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928042.8857741-1365-157450635449930/AnsiballZ_copy.py
Dec 05 09:47:23 np0005546420.localdomain sudo[269800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:23 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30597 DF PROTO=TCP SPT=53858 DPT=9102 SEQ=2038520506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE00C80000000001030307) 
Dec 05 09:47:23 np0005546420.localdomain python3.9[269802]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928042.8857741-1365-157450635449930/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:23 np0005546420.localdomain sudo[269800]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:23 np0005546420.localdomain sudo[269855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqojohbcvallhuvrwgeqlvnrdtzpokah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928042.8857741-1365-157450635449930/AnsiballZ_systemd.py
Dec 05 09:47:23 np0005546420.localdomain sudo[269855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:24 np0005546420.localdomain python3.9[269857]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:24 np0005546420.localdomain sudo[269855]: pam_unix(sudo:session): session closed for user root
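The two tasks above are the usual unit-deployment pair: ansible-copy installs /etc/systemd/system/edpm_multipathd.service, then ansible-systemd starts and enables it. A hedged ad-hoc equivalent (the play drives daemon reloads in a separate task):

    import subprocess

    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "--now", "edpm_multipathd.service"],
                   check=True)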
Dec 05 09:47:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30598 DF PROTO=TCP SPT=53858 DPT=9102 SEQ=2038520506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE04D90000000001030307) 
Dec 05 09:47:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42763 DF PROTO=TCP SPT=59894 DPT=9102 SEQ=1706899342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE07D90000000001030307) 
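The kernel DROPPING lines are netfilter LOG output: a firewall rule with log prefix "DROPPING: " is recording inbound SYNs to TCP/9102 on br-ex before they are dropped. The KEY=VALUE layout (IN=, SRC=, DST=, PROTO=, SPT=, DPT=, ...) is fixed, so the entries are easy to pick apart; a small parsing sketch over a shortened copy of one line above:

    # Split a netfilter LOG line into its KEY=VALUE fields; flag tokens
    # without '=' (DF, SYN, ...) are skipped.
    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "SRC=192.168.122.10 DST=192.168.122.107 PROTO=TCP "
            "SPT=59894 DPT=9102")
    fields = dict(tok.split("=", 1)
                  for tok in line.removeprefix("DROPPING: ").split()
                  if "=" in tok)
    print(fields["SRC"], "->", fields["DST"], "dport", fields["DPT"])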
Dec 05 09:47:25 np0005546420.localdomain python3.9[269967]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:47:26 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30599 DF PROTO=TCP SPT=53858 DPT=9102 SEQ=2038520506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE0CD90000000001030307) 
Dec 05 09:47:26 np0005546420.localdomain sudo[270075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlaosbwoluwmrilbpdbfowyyploptccp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928046.269142-1467-37463817710042/AnsiballZ_file.py
Dec 05 09:47:26 np0005546420.localdomain sudo[270075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:26 np0005546420.localdomain python3.9[270077]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:26 np0005546420.localdomain sudo[270075]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:27 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56513 DF PROTO=TCP SPT=43144 DPT=9102 SEQ=3980410648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE0FD90000000001030307) 
Dec 05 09:47:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:47:27 np0005546420.localdomain podman[270132]: 2025-12-05 09:47:27.523871731 +0000 UTC m=+0.088053390 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 05 09:47:27 np0005546420.localdomain podman[270132]: 2025-12-05 09:47:27.569349865 +0000 UTC m=+0.133531494 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:47:27 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:47:27 np0005546420.localdomain sudo[270204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grwtachisucdsturuobgjlszhhmrqese ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928047.3701212-1503-277207388466703/AnsiballZ_file.py
Dec 05 09:47:27 np0005546420.localdomain sudo[270204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:27 np0005546420.localdomain python3.9[270206]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 09:47:27 np0005546420.localdomain sudo[270204]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:28 np0005546420.localdomain sudo[270314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxqavtokkcngpurqdxamgzsigadyrmch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928048.0967834-1527-179086468697438/AnsiballZ_modprobe.py
Dec 05 09:47:28 np0005546420.localdomain sudo[270314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:28 np0005546420.localdomain python3.9[270316]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 05 09:47:28 np0005546420.localdomain sudo[270314]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:29 np0005546420.localdomain sudo[270424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-juqlqlerupxhugpcagnzgiadvarvujwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928048.8457942-1551-111587205557983/AnsiballZ_stat.py
Dec 05 09:47:29 np0005546420.localdomain sudo[270424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:29 np0005546420.localdomain python3.9[270426]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:47:29 np0005546420.localdomain sudo[270424]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:29 np0005546420.localdomain sudo[270481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmwmyvcgfovpyyhobwpueopmtkbffknd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928048.8457942-1551-111587205557983/AnsiballZ_file.py
Dec 05 09:47:29 np0005546420.localdomain sudo[270481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:29 np0005546420.localdomain python3.9[270483]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:29 np0005546420.localdomain sudo[270481]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:30 np0005546420.localdomain sudo[270591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmflmuthkofgakvfhwyxssihemephkpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928050.0947692-1590-256881144222063/AnsiballZ_lineinfile.py
Dec 05 09:47:30 np0005546420.localdomain sudo[270591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30600 DF PROTO=TCP SPT=53858 DPT=9102 SEQ=2038520506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE1C990000000001030307) 
Dec 05 09:47:30 np0005546420.localdomain python3.9[270593]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:30 np0005546420.localdomain sudo[270591]: pam_unix(sudo:session): session closed for user root
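Taken together, the last few tasks make the nvme-fabrics module both live and persistent: modprobe loads it now, /etc/modules-load.d/nvme-fabrics.conf loads it at boot, and lineinfile keeps the legacy /etc/modules list in sync. The same three steps outside Ansible, as a sketch (run as root):

    import pathlib
    import subprocess

    subprocess.run(["modprobe", "nvme-fabrics"], check=True)       # load now
    pathlib.Path("/etc/modules-load.d/nvme-fabrics.conf").write_text(
        "nvme-fabrics\n")                                          # load at boot
    modules = pathlib.Path("/etc/modules")
    present = modules.exists() and "nvme-fabrics" in modules.read_text().split()
    if not present:
        with modules.open("a") as f:                               # legacy list
            f.write("nvme-fabrics\n")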
Dec 05 09:47:31 np0005546420.localdomain sudo[270701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xppftqjofpndaqjfufskxpyjhvzpcmbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928050.897095-1617-226305282612978/AnsiballZ_dnf.py
Dec 05 09:47:31 np0005546420.localdomain sudo[270701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:31 np0005546420.localdomain python3.9[270703]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 09:47:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:47:32 np0005546420.localdomain systemd[1]: tmp-crun.U274LQ.mount: Deactivated successfully.
Dec 05 09:47:32 np0005546420.localdomain podman[270706]: 2025-12-05 09:47:32.530825984 +0000 UTC m=+0.109150117 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter)
Dec 05 09:47:32 np0005546420.localdomain podman[270706]: 2025-12-05 09:47:32.54440916 +0000 UTC m=+0.122733343 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Dec 05 09:47:32 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:47:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:47:33 np0005546420.localdomain podman[270726]: 2025-12-05 09:47:33.50123853 +0000 UTC m=+0.076043122 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:47:33 np0005546420.localdomain podman[270726]: 2025-12-05 09:47:33.511208375 +0000 UTC m=+0.086013027 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:47:33 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:47:34 np0005546420.localdomain sudo[270701]: pam_unix(sudo:session): session closed for user root
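The dnf task that just finished installs nvme-cli with state=present, i.e. a no-op when the package is already on the host. An ad-hoc sketch of the same idempotent install:

    import subprocess

    if subprocess.run(["rpm", "-q", "nvme-cli"],
                      capture_output=True).returncode != 0:
        subprocess.run(["dnf", "-y", "install", "nvme-cli"], check=True)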
Dec 05 09:47:35 np0005546420.localdomain python3.9[270856]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 09:47:36 np0005546420.localdomain sudo[270968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ughptszetzxkkbimflnvhdbmskxkwfsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928056.6701076-1669-57723173570691/AnsiballZ_file.py
Dec 05 09:47:36 np0005546420.localdomain sudo[270968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:37 np0005546420.localdomain python3.9[270970]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:37 np0005546420.localdomain sudo[270968]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:37 np0005546420.localdomain sudo[271078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmgwukspdvhldsorbsszfqwfebjhgoxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928057.6848805-1702-194943073396492/AnsiballZ_systemd_service.py
Dec 05 09:47:37 np0005546420.localdomain sudo[271078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: tmp-crun.fBbVvp.mount: Deactivated successfully.
Dec 05 09:47:38 np0005546420.localdomain podman[271081]: 2025-12-05 09:47:38.064452192 +0000 UTC m=+0.092912489 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 09:47:38 np0005546420.localdomain podman[271081]: 2025-12-05 09:47:38.132568719 +0000 UTC m=+0.161029006 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:47:38 np0005546420.localdomain python3.9[271080]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:47:38 np0005546420.localdomain systemd-rc-local-generator[271125]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:47:38 np0005546420.localdomain systemd-sysv-generator[271135]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:47:38 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
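The reload warnings above are version skew rather than breakage: the libvirt units declare Type=notify-reload, which this host's systemd (v252 on RHEL 9.2; the setting arrived in v253) cannot parse and therefore ignores, and insights-client still uses the deprecated MemoryLimit= directive. A sketch of quieting the latter with a drop-in; the 256M figure is illustrative, not taken from the unit file:

    import pathlib
    import subprocess

    d = pathlib.Path("/etc/systemd/system/insights-client.service.d")
    d.mkdir(parents=True, exist_ok=True)
    (d / "override.conf").write_text(
        "[Service]\n"
        "MemoryLimit=\n"    # empty assignment clears the deprecated setting
        "MemoryMax=256M\n"  # illustrative value; use the unit's real limit
    )
    subprocess.run(["systemctl", "daemon-reload"], check=True)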
Dec 05 09:47:38 np0005546420.localdomain sudo[271078]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:39 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30601 DF PROTO=TCP SPT=53858 DPT=9102 SEQ=2038520506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE3DD90000000001030307) 
Dec 05 09:47:39 np0005546420.localdomain python3.9[271248]: ansible-ansible.builtin.service_facts Invoked
Dec 05 09:47:39 np0005546420.localdomain network[271265]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 09:47:39 np0005546420.localdomain network[271266]: 'network-scripts' will be removed from distribution in near future.
Dec 05 09:47:39 np0005546420.localdomain network[271267]: It is advised to switch to 'NetworkManager' instead for network management.
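The deprecation notice comes from the SysV 'network' service whose compatibility unit was generated during the reload above. The switch it recommends, as a sketch (assumes NetworkManager already has working connection profiles; on a remote host, confirm connectivity out-of-band before disabling the legacy service):

    import subprocess

    subprocess.run(["systemctl", "enable", "--now", "NetworkManager"],
                   check=True)
    subprocess.run(["systemctl", "disable", "--now", "network"], check=True)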
Dec 05 09:47:41 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:47:45 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:47:45.007 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:47:45 np0005546420.localdomain sudo[271499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbekalzsieavuhnyccrmneyvqjvgoyyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928065.068671-1759-173623379480092/AnsiballZ_systemd_service.py
Dec 05 09:47:45 np0005546420.localdomain sudo[271499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:45 np0005546420.localdomain python3.9[271501]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:45 np0005546420.localdomain sudo[271499]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:46 np0005546420.localdomain sudo[271610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfawtpizijenjglbgatnibagkdycsbbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928065.8011365-1759-19541579251423/AnsiballZ_systemd_service.py
Dec 05 09:47:46 np0005546420.localdomain sudo[271610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:46 np0005546420.localdomain python3.9[271612]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:46 np0005546420.localdomain sudo[271610]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:46 np0005546420.localdomain sudo[271721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aboyaqyakrgtoeqdpfckkrzfdudnbujm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928066.5598364-1759-254161739557660/AnsiballZ_systemd_service.py
Dec 05 09:47:46 np0005546420.localdomain sudo[271721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:47:46 np0005546420.localdomain podman[271724]: 2025-12-05 09:47:46.93270636 +0000 UTC m=+0.081966943 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 05 09:47:46 np0005546420.localdomain podman[271724]: 2025-12-05 09:47:46.947507135 +0000 UTC m=+0.096767738 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:47:46 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:47:47 np0005546420.localdomain python3.9[271723]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:47 np0005546420.localdomain sudo[271721]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:47:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:47:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:47:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:47:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:47:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16747 "" "Go-http-client/1.1"
Dec 05 09:47:47 np0005546420.localdomain sudo[271851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gcdulrtgvimdgzarubviwueougmzwinx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928067.2860734-1759-21958170277959/AnsiballZ_systemd_service.py
Dec 05 09:47:47 np0005546420.localdomain sudo[271851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:47 np0005546420.localdomain python3.9[271853]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:47 np0005546420.localdomain sudo[271851]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:48 np0005546420.localdomain sudo[271962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-atbgeqnltovqejejivvtggykelhxgcyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928068.055462-1759-235727260853000/AnsiballZ_systemd_service.py
Dec 05 09:47:48 np0005546420.localdomain sudo[271962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:48 np0005546420.localdomain python3.9[271964]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:48 np0005546420.localdomain sudo[271962]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:47:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:47:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:47:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:47:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:47:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:47:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:47:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:47:48 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 09:47:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:47:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:47:48 np0005546420.localdomain openstack_network_exporter[242579]: 
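The openstack_network_exporter errors mean its appctl calls found no control socket files at the paths it checks: ovn-northd does not run on a compute node at all, and the dpif-netdev queries only apply to a userspace datapath. A quick check for which sockets actually exist on the host, using the paths its config_data above mounts into the container:

    import glob

    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")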
Dec 05 09:47:49 np0005546420.localdomain sudo[272073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxbnzeawhkstkmdsqgimbluywfakuers ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928068.7256587-1759-210684624637597/AnsiballZ_systemd_service.py
Dec 05 09:47:49 np0005546420.localdomain sudo[272073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:49 np0005546420.localdomain python3.9[272075]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:49 np0005546420.localdomain sudo[272073]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:49 np0005546420.localdomain sudo[272184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-smgoqlphcmbhcmoltnrfcvqxhfdunuxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928069.497089-1759-40636424108660/AnsiballZ_systemd_service.py
Dec 05 09:47:49 np0005546420.localdomain sudo[272184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:50 np0005546420.localdomain python3.9[272186]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:50 np0005546420.localdomain sudo[272184]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:50 np0005546420.localdomain sudo[272295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhuevqlqxicpixvfpbaugmqgmyhaacbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928070.2084463-1759-81754974172875/AnsiballZ_systemd_service.py
Dec 05 09:47:50 np0005546420.localdomain sudo[272295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:47:50 np0005546420.localdomain podman[272298]: 2025-12-05 09:47:50.615186646 +0000 UTC m=+0.087568815 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:47:50 np0005546420.localdomain podman[272298]: 2025-12-05 09:47:50.647406763 +0000 UTC m=+0.119788902 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 05 09:47:50 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:47:50 np0005546420.localdomain python3.9[272297]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:47:50 np0005546420.localdomain sudo[272295]: pam_unix(sudo:session): session closed for user root
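That completes the shutdown pass over the legacy TripleO nova units: each task above calls ansible-systemd with state=stopped, enabled=False. The same pass as one loop, for reference:

    import subprocess

    units = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]
    for unit in units:
        # 'disable --now' stops and disables in one call
        subprocess.run(["systemctl", "disable", "--now", unit], check=True)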
Dec 05 09:47:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:47:52 np0005546420.localdomain podman[272334]: 2025-12-05 09:47:52.517402843 +0000 UTC m=+0.090095123 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:47:52 np0005546420.localdomain podman[272334]: 2025-12-05 09:47:52.55320455 +0000 UTC m=+0.125896850 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:47:52 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:47:53 np0005546420.localdomain sudo[272447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymucvlozfnzjquqnmlrztnvnnqlumdrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928072.8109097-1936-53876914962528/AnsiballZ_file.py
Dec 05 09:47:53 np0005546420.localdomain sudo[272447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:53 np0005546420.localdomain python3.9[272449]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:53 np0005546420.localdomain sudo[272447]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:53 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58778 DF PROTO=TCP SPT=32962 DPT=9102 SEQ=518669569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE75F90000000001030307) 
Dec 05 09:47:53 np0005546420.localdomain sudo[272557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxvdladqsazacaawgrdscqfqynyeruqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928073.4417818-1936-187932047906860/AnsiballZ_file.py
Dec 05 09:47:53 np0005546420.localdomain sudo[272557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:53 np0005546420.localdomain python3.9[272559]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:53 np0005546420.localdomain sudo[272557]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:54 np0005546420.localdomain sudo[272667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-curjzfdooihtwdwpyvldhjolatmpdpxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928074.0345547-1936-30247445931514/AnsiballZ_file.py
Dec 05 09:47:54 np0005546420.localdomain sudo[272667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58779 DF PROTO=TCP SPT=32962 DPT=9102 SEQ=518669569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE7A190000000001030307) 
Dec 05 09:47:54 np0005546420.localdomain python3.9[272669]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:54 np0005546420.localdomain sudo[272667]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:54 np0005546420.localdomain sudo[272777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btnqjmyfgfhqdptprbuqggycqqrgohlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928074.6321344-1936-202698896468484/AnsiballZ_file.py
Dec 05 09:47:54 np0005546420.localdomain sudo[272777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:55 np0005546420.localdomain python3.9[272779]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:55 np0005546420.localdomain sudo[272777]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:55 np0005546420.localdomain sudo[272887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-almnbedcngppottkkhbzkzxwqrxnpalr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928075.203281-1936-233489112764931/AnsiballZ_file.py
Dec 05 09:47:55 np0005546420.localdomain sudo[272887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30602 DF PROTO=TCP SPT=53858 DPT=9102 SEQ=2038520506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE7DD90000000001030307) 
Dec 05 09:47:55 np0005546420.localdomain python3.9[272889]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:55 np0005546420.localdomain sudo[272887]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:56 np0005546420.localdomain sudo[272997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vekwpvxezgjvlbfhzsmqefgsjlanjlbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928075.8094385-1936-97369198253597/AnsiballZ_file.py
Dec 05 09:47:56 np0005546420.localdomain sudo[272997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:56 np0005546420.localdomain python3.9[272999]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:56 np0005546420.localdomain sudo[272997]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:56 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58780 DF PROTO=TCP SPT=32962 DPT=9102 SEQ=518669569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE821A0000000001030307) 
Dec 05 09:47:56 np0005546420.localdomain sudo[273107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovenigbzajhydwdkyhewhdxipuwbwqci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928076.435213-1936-22794305310192/AnsiballZ_file.py
Dec 05 09:47:56 np0005546420.localdomain sudo[273107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:56 np0005546420.localdomain python3.9[273109]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:56 np0005546420.localdomain sudo[273107]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:57 np0005546420.localdomain sudo[273217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sniuitaapurgkjwswtextfgodkdxhfdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928077.0326016-1936-243638335208161/AnsiballZ_file.py
Dec 05 09:47:57 np0005546420.localdomain sudo[273217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:57 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42764 DF PROTO=TCP SPT=59894 DPT=9102 SEQ=1706899342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE85DA0000000001030307) 
Dec 05 09:47:57 np0005546420.localdomain python3.9[273219]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:57 np0005546420.localdomain sudo[273217]: pam_unix(sudo:session): session closed for user root
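The eight sudo/AnsiballZ_file pairs above remove the legacy TripleO nova unit files from /usr/lib/systemd/system one by one with `ansible.builtin.file state=absent`; the same list is repeated for /etc/systemd/system just below. The net effect, sketched in plain Python:

    from pathlib import Path

    UNITS = [
        "tripleo_nova_compute", "tripleo_nova_migration_target",
        "tripleo_nova_api_cron", "tripleo_nova_api", "tripleo_nova_conductor",
        "tripleo_nova_metadata", "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy",
    ]

    for base in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        for unit in UNITS:
            # state=absent semantics: deleting a file that is already
            # gone is not an error.
            (Path(base) / f"{unit}.service").unlink(missing_ok=True)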
Dec 05 09:47:58 np0005546420.localdomain sudo[273327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scbnyfxcuefupaamdkucomrylhyhnble ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928077.9735487-2107-130424658037013/AnsiballZ_file.py
Dec 05 09:47:58 np0005546420.localdomain sudo[273327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:47:58 np0005546420.localdomain podman[273330]: 2025-12-05 09:47:58.324883333 +0000 UTC m=+0.092026192 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 09:47:58 np0005546420.localdomain podman[273330]: 2025-12-05 09:47:58.341369809 +0000 UTC m=+0.108512618 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 09:47:58 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:47:58 np0005546420.localdomain python3.9[273329]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:58 np0005546420.localdomain sudo[273327]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:58 np0005546420.localdomain sudo[273455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgqbehgmhdokwuuiausxrhgzcselqmtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928078.541065-2107-274924736713485/AnsiballZ_file.py
Dec 05 09:47:58 np0005546420.localdomain sudo[273455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:59 np0005546420.localdomain python3.9[273457]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:59 np0005546420.localdomain sudo[273455]: pam_unix(sudo:session): session closed for user root
Dec 05 09:47:59 np0005546420.localdomain sudo[273565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-juwwsgskaxsitkprxfywpztyljkpezou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928079.1636772-2107-163313247619046/AnsiballZ_file.py
Dec 05 09:47:59 np0005546420.localdomain sudo[273565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:47:59 np0005546420.localdomain python3.9[273567]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:47:59 np0005546420.localdomain sudo[273565]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 np0005546420.localdomain sudo[273675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jhsxssdkcreomtxdtgbiuhsnmmeuqizj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928079.778919-2107-261729977568897/AnsiballZ_file.py
Dec 05 09:48:00 np0005546420.localdomain sudo[273675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:00 np0005546420.localdomain python3.9[273677]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:00 np0005546420.localdomain sudo[273675]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58781 DF PROTO=TCP SPT=32962 DPT=9102 SEQ=518669569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACE91DA0000000001030307) 
Dec 05 09:48:00 np0005546420.localdomain sudo[273785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oqnugtbvaroofrelfzzxlsmjnceirdat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928080.3955898-2107-258129710618256/AnsiballZ_file.py
Dec 05 09:48:00 np0005546420.localdomain sudo[273785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:00 np0005546420.localdomain python3.9[273787]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:00 np0005546420.localdomain sudo[273785]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:01 np0005546420.localdomain sudo[273895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ufhllnbvlcyvlrqokfjrnaosyomkgkod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928080.9819257-2107-41958104072166/AnsiballZ_file.py
Dec 05 09:48:01 np0005546420.localdomain sudo[273895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:01 np0005546420.localdomain python3.9[273897]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:01 np0005546420.localdomain sudo[273895]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:01 np0005546420.localdomain sudo[274005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zlkocnihwivswjvxdxauouhtgyqplaqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928081.517522-2107-63190546284420/AnsiballZ_file.py
Dec 05 09:48:01 np0005546420.localdomain sudo[274005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:01 np0005546420.localdomain python3.9[274007]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:01 np0005546420.localdomain sudo[274005]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:02 np0005546420.localdomain sudo[274115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hyqjwcdoanquvsohsrhsqxxdqpyfdvhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928082.012318-2107-32599156753416/AnsiballZ_file.py
Dec 05 09:48:02 np0005546420.localdomain sudo[274115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:02 np0005546420.localdomain python3.9[274117]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:02 np0005546420.localdomain sudo[274115]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:03 np0005546420.localdomain sudo[274225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bqpnnvzxvttsimheuwfwiehuaciilgfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928083.169651-2281-160927953823639/AnsiballZ_command.py
Dec 05 09:48:03 np0005546420.localdomain sudo[274225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:48:03 np0005546420.localdomain podman[274227]: 2025-12-05 09:48:03.486724385 +0000 UTC m=+0.069038558 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 09:48:03 np0005546420.localdomain podman[274227]: 2025-12-05 09:48:03.502330982 +0000 UTC m=+0.084645185 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.33.7)
Dec 05 09:48:03 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:48:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:48:03 np0005546420.localdomain python3.9[274228]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:03 np0005546420.localdomain sudo[274225]: pam_unix(sudo:session): session closed for user root
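The `_raw_params` shell above is a guard: disable certmonger only if it is currently active, then mask it only when no local override unit exists under /etc/systemd/system. The same conditional in Python (assumes root and systemctl on PATH):

    import subprocess
    from pathlib import Path

    def quiesce_certmonger():
        # Act only when the service is currently active.
        active = subprocess.run(
            ["systemctl", "is-active", "certmonger.service"]
        ).returncode == 0
        if active:
            subprocess.run(
                ["systemctl", "disable", "--now", "certmonger.service"],
                check=True)
            # Mask only if no local unit file overrides it.
            if not Path("/etc/systemd/system/certmonger.service").exists():
                subprocess.run(["systemctl", "mask", "certmonger.service"],
                               check=True)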
Dec 05 09:48:03 np0005546420.localdomain podman[274249]: 2025-12-05 09:48:03.653873717 +0000 UTC m=+0.078931880 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:48:03 np0005546420.localdomain podman[274249]: 2025-12-05 09:48:03.666493435 +0000 UTC m=+0.091551588 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:48:03 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:48:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:48:04.108 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:48:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:48:04.108 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:48:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:48:04.108 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
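The acquiring/acquired/released triple is oslo.concurrency's standard debug trace around a named lock; here the metadata agent's ProcessMonitor serializes its periodic child-process liveness check. A stdlib analogue of the pattern (illustrative only, not the oslo implementation):

    import threading

    _locks = {}

    def synchronized(name):
        # One shared lock per name, like lockutils' internal lock registry.
        lock = _locks.setdefault(name, threading.Lock())
        def deco(fn):
            def wrapper(*args, **kwargs):
                with lock:  # "acquired" ... "released" bracket this region
                    return fn(*args, **kwargs)
            return wrapper
        return deco

    @synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # liveness checks on spawned helpers would run here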
Dec 05 09:48:04 np0005546420.localdomain python3.9[274380]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 09:48:04 np0005546420.localdomain sudo[274488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dotssyebkzowrtmudpkidjxjxdyxogty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928084.72322-2335-170442136510252/AnsiballZ_systemd_service.py
Dec 05 09:48:04 np0005546420.localdomain sudo[274488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:05 np0005546420.localdomain python3.9[274490]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 09:48:05 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:48:05 np0005546420.localdomain systemd-rc-local-generator[274513]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:48:05 np0005546420.localdomain systemd-sysv-generator[274516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:48:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:48:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:48:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:48:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:48:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:48:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:48:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:48:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:48:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:48:05 np0005546420.localdomain sudo[274488]: pam_unix(sudo:session): session closed for user root
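`daemon_reload=True` maps to `systemctl daemon-reload`, which re-runs the unit generators and re-parses every unit file; that is what produces the burst of warnings after "Reloading.". The "Failed to parse service type ... notify-reload" messages are expected on a systemd older than v253 (where Type=notify-reload was introduced) and are harmless: the libvirt modular daemon units fall back to the default type. MemoryLimit= is likewise a deprecated alias for MemoryMax=. The task itself reduces to (root assumed):

    import subprocess

    # Equivalent of ansible.builtin.systemd_service with daemon_reload=True.
    subprocess.run(["systemctl", "daemon-reload"], check=True)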
Dec 05 09:48:06 np0005546420.localdomain sudo[274634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btrcwibimwbaxsjifpfkdywvqzmchfrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928085.9802072-2359-97815186216248/AnsiballZ_command.py
Dec 05 09:48:06 np0005546420.localdomain sudo[274634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:06 np0005546420.localdomain python3.9[274636]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:06 np0005546420.localdomain sudo[274634]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:06 np0005546420.localdomain sudo[274745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttwlwaaxocmbnjgjzwjrhdrgonwcycmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928086.5766292-2359-238515976899960/AnsiballZ_command.py
Dec 05 09:48:06 np0005546420.localdomain sudo[274745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:07 np0005546420.localdomain python3.9[274747]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:07 np0005546420.localdomain sudo[274745]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:07 np0005546420.localdomain sudo[274856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-czbzpsfhqjqktooqcaufdliswfxvqvrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928087.212744-2359-102988368845627/AnsiballZ_command.py
Dec 05 09:48:07 np0005546420.localdomain sudo[274856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:07 np0005546420.localdomain python3.9[274858]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:07 np0005546420.localdomain sudo[274856]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:08 np0005546420.localdomain sudo[274967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkjyxaztfghgdnjgiqrqgdktsryofylx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928087.80651-2359-14838778422226/AnsiballZ_command.py
Dec 05 09:48:08 np0005546420.localdomain sudo[274967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:08 np0005546420.localdomain python3.9[274969]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:48:08 np0005546420.localdomain sudo[274967]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:08 np0005546420.localdomain podman[274971]: 2025-12-05 09:48:08.520372606 +0000 UTC m=+0.220070427 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 09:48:08 np0005546420.localdomain podman[274971]: 2025-12-05 09:48:08.69275265 +0000 UTC m=+0.392450441 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:48:08 np0005546420.localdomain sudo[275103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nahzxvogctyenhmbbqbqzsruntaoothj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928088.417292-2359-72031758230194/AnsiballZ_command.py
Dec 05 09:48:08 np0005546420.localdomain sudo[275103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:08 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:48:08 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58782 DF PROTO=TCP SPT=32962 DPT=9102 SEQ=518669569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACEB1D90000000001030307) 
Dec 05 09:48:08 np0005546420.localdomain python3.9[275105]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:08 np0005546420.localdomain sudo[275103]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:09 np0005546420.localdomain sudo[275214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kovyuodnvqdckrdydshuoitsqvrqzmrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928089.0632498-2359-114270307254311/AnsiballZ_command.py
Dec 05 09:48:09 np0005546420.localdomain sudo[275214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:09 np0005546420.localdomain python3.9[275216]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:09 np0005546420.localdomain sudo[275214]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:09 np0005546420.localdomain sudo[275325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rikbkovphkkdxdhqvbiqnmvolivaavaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928089.6761546-2359-86105636833159/AnsiballZ_command.py
Dec 05 09:48:09 np0005546420.localdomain sudo[275325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:10 np0005546420.localdomain python3.9[275327]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:10 np0005546420.localdomain sudo[275325]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:10 np0005546420.localdomain sudo[275436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gwuwsjdqjlewzyxrclbsvrevbtqecadl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928090.326412-2359-8727013597089/AnsiballZ_command.py
Dec 05 09:48:10 np0005546420.localdomain sudo[275436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:10 np0005546420.localdomain python3.9[275438]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:48:10 np0005546420.localdomain sudo[275436]: pam_unix(sudo:session): session closed for user root
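With the unit files deleted and the daemon reloaded, each `systemctl reset-failed <unit>` clears any lingering failed-state entry so the removed tripleo_nova_* names no longer appear in `systemctl --failed`. The loop the playbook performs one task at a time, sketched:

    import subprocess

    for unit in ("tripleo_nova_compute", "tripleo_nova_migration_target",
                 "tripleo_nova_api_cron", "tripleo_nova_api",
                 "tripleo_nova_conductor", "tripleo_nova_metadata",
                 "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy"):
        # check=True mirrors the ansible command task failing on a
        # non-zero return code.
        subprocess.run(["/usr/bin/systemctl", "reset-failed",
                        f"{unit}.service"], check=True)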
Dec 05 09:48:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:12.141 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:48:12 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:12.142 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.946 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:48:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:48:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
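Every "Skip pollster" line above means the compute agent's discovery step returned no resources this cycle (no instances are running on this host at this point), so the pollster is skipped; the doubled space appears to be an empty qualifier interpolated into the message. The skip pattern, with hypothetical names:

    def poll_and_notify(pollsters, discovered):
        # Only pollsters whose discovery returned resources run this cycle.
        for name in pollsters:
            resources = discovered.get(name, [])
            if not resources:
                print(f"Skip pollster {name}, no resources found this cycle")
                continue
            print(f"Polling {name} for {len(resources)} resource(s)")

    poll_and_notify(["cpu", "memory.usage"], {"cpu": []})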
Dec 05 09:48:13 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:13.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:48:13 np0005546420.localdomain sudo[275547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nrrvdqogiwuhsxkqeavsqjcgcqrnutsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928093.0272608-2566-253898270864036/AnsiballZ_file.py
Dec 05 09:48:13 np0005546420.localdomain sudo[275547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:13 np0005546420.localdomain python3.9[275549]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:13 np0005546420.localdomain sudo[275547]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:13 np0005546420.localdomain sudo[275590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:48:13 np0005546420.localdomain sudo[275590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:48:13 np0005546420.localdomain sudo[275590]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:13 np0005546420.localdomain sudo[275639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:48:13 np0005546420.localdomain sudo[275639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:48:13 np0005546420.localdomain sudo[275693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sihrdcstdkbvcssxzzvtkfbdozpbqvsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928093.6622243-2566-116174447029113/AnsiballZ_file.py
Dec 05 09:48:13 np0005546420.localdomain sudo[275693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:14 np0005546420.localdomain python3.9[275695]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:14 np0005546420.localdomain sudo[275693]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:14 np0005546420.localdomain sudo[275862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpdfvnwlwblstedfllddruewesoqhxuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928094.2213557-2566-180881359483620/AnsiballZ_file.py
Dec 05 09:48:14 np0005546420.localdomain sudo[275862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:14 np0005546420.localdomain python3.9[275875]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:14 np0005546420.localdomain sudo[275862]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:14 np0005546420.localdomain podman[275876]: 2025-12-05 09:48:14.689260095 +0000 UTC m=+0.229308020 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., version=7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-type=git)
Dec 05 09:48:14 np0005546420.localdomain podman[275911]: 2025-12-05 09:48:14.866126466 +0000 UTC m=+0.063379434 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, RELEASE=main, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Dec 05 09:48:15 np0005546420.localdomain podman[275876]: 2025-12-05 09:48:15.02448137 +0000 UTC m=+0.564529315 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, RELEASE=main)
Dec 05 09:48:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:15.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:48:15 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:15.041 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
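The oslo_service.periodic_task DEBUG lines throughout this section are the compute manager's timer loop: every registered task is invoked on its spacing, and tasks such as _reclaim_queued_deletes return immediately when their config knob disables them (here reclaim_instance_interval <= 0). A minimal sketch of the registration pattern, with an invented task name and 60s spacing:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_something(self, context):
            # real tasks guard on config first, e.g.
            # if CONF.reclaim_instance_interval <= 0: return
            print("periodic task ran")

    Manager().run_periodic_tasks(context=None)  # normally driven by a timer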
Dec 05 09:48:15 np0005546420.localdomain sudo[276032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zconqmcqxldkvicbilurnmhiwvokgvhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928094.9827216-2632-240287820582138/AnsiballZ_file.py
Dec 05 09:48:15 np0005546420.localdomain sudo[276032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:15 np0005546420.localdomain sudo[275639]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:15 np0005546420.localdomain python3.9[276043]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:15 np0005546420.localdomain sudo[276032]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:15 np0005546420.localdomain sudo[276123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:48:15 np0005546420.localdomain sudo[276123]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:48:15 np0005546420.localdomain sudo[276123]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:15 np0005546420.localdomain sudo[276159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:48:15 np0005546420.localdomain sudo[276159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:48:15 np0005546420.localdomain sudo[276194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sbfectnejohbrqgeluydhwiqwbfbwfyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928095.572668-2632-113479716523709/AnsiballZ_file.py
Dec 05 09:48:15 np0005546420.localdomain sudo[276194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:15 np0005546420.localdomain python3.9[276197]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:15 np0005546420.localdomain sudo[276194]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:16 np0005546420.localdomain sudo[276333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdvumwoafzyfflteegwdxwnbzwkjknyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928096.0925043-2632-250049822033907/AnsiballZ_file.py
Dec 05 09:48:16 np0005546420.localdomain sudo[276333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:16 np0005546420.localdomain sudo[276159]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:16 np0005546420.localdomain python3.9[276337]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:16 np0005546420.localdomain sudo[276333]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:16 np0005546420.localdomain sudo[276445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-imrxqproqpdcbeynbsjiviadpcmqrgkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928096.62828-2632-174405788263869/AnsiballZ_file.py
Dec 05 09:48:16 np0005546420.localdomain sudo[276445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:16 np0005546420.localdomain sudo[276448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:48:16 np0005546420.localdomain sudo[276448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:48:16 np0005546420.localdomain sudo[276448]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:48:17 np0005546420.localdomain python3.9[276447]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:17 np0005546420.localdomain sudo[276445]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.040 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.041 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.061 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.061 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.062 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.062 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.063 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
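With RBD-backed storage, the libvirt driver sizes disk from the Ceph cluster rather than the local filesystem, hence this ceph df subprocess during the resource audit. A sketch of the same query, parsing the JSON it returns (the pool name "vms" is an assumption for illustration):

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    pools = json.loads(raw)["pools"]
    vms = next(p for p in pools if p["name"] == "vms")
    print("max_avail bytes for the pool:", vms["stats"]["max_avail"])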
Dec 05 09:48:17 np0005546420.localdomain podman[276466]: 2025-12-05 09:48:17.083056759 +0000 UTC m=+0.078606630 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 09:48:17 np0005546420.localdomain podman[276466]: 2025-12-05 09:48:17.096426729 +0000 UTC m=+0.091976660 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 05 09:48:17 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
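Each healthcheck triple in this journal (Started /usr/bin/podman healthcheck run &lt;id&gt;, the container health_status event, then &lt;id&gt;.service: Deactivated successfully) is a transient systemd unit firing a single check, equivalent to invoking the CLI directly:

    # One-shot equivalent of the transient healthcheck unit; the container ID
    # is the ceilometer_agent_compute container from the lines above.
    import subprocess

    cid = "94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110"
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")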
Dec 05 09:48:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:48:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:48:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:48:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:48:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:48:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16753 "" "Go-http-client/1.1"
Dec 05 09:48:17 np0005546420.localdomain sudo[276613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpbqhsxyhsogauhvgiqvtzztfqjdfurk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928097.1921847-2632-20830032585929/AnsiballZ_file.py
Dec 05 09:48:17 np0005546420.localdomain sudo[276613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.543 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:48:17 np0005546420.localdomain python3.9[276615]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:17 np0005546420.localdomain sudo[276613]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.730 230124 WARNING nova.virt.libvirt.driver [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.732 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12821MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.732 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.732 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.811 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.811 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:48:17 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:17.825 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:48:17 np0005546420.localdomain sudo[276728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dgculxopqfytlzyrelhlwbybwqozyzfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928097.7454805-2632-108729790566365/AnsiballZ_file.py
Dec 05 09:48:17 np0005546420.localdomain sudo[276728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:18 np0005546420.localdomain python3.9[276747]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:18 np0005546420.localdomain sudo[276728]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:18 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:18.305 230124 DEBUG oslo_concurrency.processutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:48:18 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:18.310 230124 DEBUG nova.compute.provider_tree [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:48:18 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:18.324 230124 DEBUG nova.scheduler.client.report [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
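The inventory dict reported to placement follows its capacity model: the schedulable amount per resource class is (total - reserved) * allocation_ratio, which is why 8 physical vCPUs can back 128 scheduled VCPUs at the 16.0 ratio. Worked directly on the numbers in the log line:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 41, "reserved": 0, "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        cap = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, "capacity:", cap)
    # VCPU capacity: 128, MEMORY_MB capacity: 15226, DISK_GB capacity: 41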
Dec 05 09:48:18 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:18.325 230124 DEBUG nova.compute.resource_tracker [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:48:18 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:18.325 230124 DEBUG oslo_concurrency.lockutils [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:48:18 np0005546420.localdomain sudo[276857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zentbkuntiktpawuaftpavxaubpwuhar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928098.3094234-2632-207711539812820/AnsiballZ_file.py
Dec 05 09:48:18 np0005546420.localdomain sudo[276857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:18 np0005546420.localdomain python3.9[276859]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:18 np0005546420.localdomain sudo[276857]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:48:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:48:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:48:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:48:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:48:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:48:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:48:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:48:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:48:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:48:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:20.325 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:48:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:20.327 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:48:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:20.327 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:48:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:20.343 230124 DEBUG nova.compute.manager [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:48:20 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:20.343 230124 DEBUG oslo_service.periodic_task [None req-4ca884b4-0891-4785-be55-f0d8c801633c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:48:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:48:21 np0005546420.localdomain podman[276877]: 2025-12-05 09:48:21.483415629 +0000 UTC m=+0.065265711 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 05 09:48:21 np0005546420.localdomain podman[276877]: 2025-12-05 09:48:21.512300074 +0000 UTC m=+0.094150076 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:48:21 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:48:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:48:23 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16072 DF PROTO=TCP SPT=41624 DPT=9102 SEQ=3105712715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACEEB280000000001030307) 
Dec 05 09:48:23 np0005546420.localdomain podman[276895]: 2025-12-05 09:48:23.490863061 +0000 UTC m=+0.070056078 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:48:23 np0005546420.localdomain podman[276895]: 2025-12-05 09:48:23.501444745 +0000 UTC m=+0.080637792 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:48:23 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:48:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16073 DF PROTO=TCP SPT=41624 DPT=9102 SEQ=3105712715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACEEF190000000001030307) 
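The kernel DROPPING entries are netfilter LOG-target output (the "DROPPING: " prefix is set by the logging rule). The same SYN from 192.168.122.10 to port 9102 recurs with an unchanged SEQ and an incrementing IP ID, i.e. TCP retransmissions of a connection attempt that a drop rule keeps discarding. A hypothetical helper, not part of the host tooling, to pull the interesting fields out of such lines:

    import re

    PAT = re.compile(r"SRC=(?P<src>\S+) DST=(?P<dst>\S+).*?"
                     r"PROTO=(?P<proto>\S+) SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)")

    line = ("DROPPING: IN=br-ex OUT= SRC=192.168.122.10 DST=192.168.122.107 "
            "LEN=60 TTL=62 ID=16073 DF PROTO=TCP SPT=41624 DPT=9102")
    print(PAT.search(line).groupdict())
    # {'src': '192.168.122.10', 'dst': '192.168.122.107', 'proto': 'TCP',
    #  'spt': '41624', 'dpt': '9102'}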
Dec 05 09:48:24 np0005546420.localdomain sudo[277008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phmqfbsmfhtlkmgduvmqnzhbodxntxfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928104.6479812-2958-168495106952160/AnsiballZ_getent.py
Dec 05 09:48:24 np0005546420.localdomain sudo[277008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58783 DF PROTO=TCP SPT=32962 DPT=9102 SEQ=518669569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACEF1DA0000000001030307) 
Dec 05 09:48:25 np0005546420.localdomain python3.9[277010]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 05 09:48:25 np0005546420.localdomain sudo[277008]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:26 np0005546420.localdomain sshd[277029]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:48:26 np0005546420.localdomain sshd[277029]: Accepted publickey for zuul from 192.168.122.30 port 54302 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:48:26 np0005546420.localdomain systemd-logind[762]: New session 59 of user zuul.
Dec 05 09:48:26 np0005546420.localdomain systemd[1]: Started Session 59 of User zuul.
Dec 05 09:48:26 np0005546420.localdomain sshd[277029]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:48:26 np0005546420.localdomain sshd[277032]: Received disconnect from 192.168.122.30 port 54302:11: disconnected by user
Dec 05 09:48:26 np0005546420.localdomain sshd[277032]: Disconnected from user zuul 192.168.122.30 port 54302
Dec 05 09:48:26 np0005546420.localdomain sshd[277029]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:48:26 np0005546420.localdomain systemd[1]: session-59.scope: Deactivated successfully.
Dec 05 09:48:26 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16074 DF PROTO=TCP SPT=41624 DPT=9102 SEQ=3105712715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACEF7190000000001030307) 
Dec 05 09:48:26 np0005546420.localdomain systemd-logind[762]: Session 59 logged out. Waiting for processes to exit.
Dec 05 09:48:26 np0005546420.localdomain systemd-logind[762]: Removed session 59.
Dec 05 09:48:27 np0005546420.localdomain python3.9[277140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:48:27 np0005546420.localdomain python3.9[277226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928106.6670935-3038-39125601851279/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
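This stat/copy pair is Ansible's idempotent file delivery: the controller renders config.json.j2, hashes it, and rewrites the destination only when the SHA-1 differs. A sketch of that comparison, using the checksum reported in the log line (paths local for simplicity; Ansible compares a controller-side checksum against the remote stat result):

    import hashlib
    import os

    def sha1sum(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "b51012bfb0ca26296dcf3793a2f284446fb1395e"  # from the log
    dest = "/var/lib/openstack/config/nova/config.json"
    changed = not os.path.exists(dest) or sha1sum(dest) != expected
    print("changed" if changed else "ok")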
Dec 05 09:48:27 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30603 DF PROTO=TCP SPT=53858 DPT=9102 SEQ=2038520506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACEFBD90000000001030307) 
Dec 05 09:48:27 np0005546420.localdomain python3.9[277334]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:48:28 np0005546420.localdomain python3.9[277389]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:48:28 np0005546420.localdomain podman[277390]: 2025-12-05 09:48:28.488124617 +0000 UTC m=+0.066373584 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:48:28 np0005546420.localdomain podman[277390]: 2025-12-05 09:48:28.531623421 +0000 UTC m=+0.109872388 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 09:48:28 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:48:28 np0005546420.localdomain python3.9[277518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:48:29 np0005546420.localdomain python3.9[277604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928108.5274196-3038-217413472488696/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:29 np0005546420.localdomain python3.9[277712]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:48:30 np0005546420.localdomain python3.9[277798]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928109.5586002-3038-25813007213861/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=a84d6f6effa9a5ffb33218dbf52341ee4c9a75da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16075 DF PROTO=TCP SPT=41624 DPT=9102 SEQ=3105712715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACF06D90000000001030307) 
Dec 05 09:48:30 np0005546420.localdomain python3.9[277906]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:48:31 np0005546420.localdomain python3.9[277992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928110.4563801-3038-78760014985168/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:31 np0005546420.localdomain python3.9[278100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:48:32 np0005546420.localdomain python3.9[278186]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928111.4201384-3038-70130663871943/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:33 np0005546420.localdomain sudo[278294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kyhyyrhgckrvgirbpzvpupsnxggbizcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928113.1861825-3287-93032334799492/AnsiballZ_file.py
Dec 05 09:48:33 np0005546420.localdomain sudo[278294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:33 np0005546420.localdomain python3.9[278296]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:33 np0005546420.localdomain sudo[278294]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:34 np0005546420.localdomain sudo[278404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-texycbnvlgcpquklaleihiydnpncjwjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928113.8938692-3311-33105016823188/AnsiballZ_copy.py
Dec 05 09:48:34 np0005546420.localdomain sudo[278404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:48:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:48:34 np0005546420.localdomain podman[278407]: 2025-12-05 09:48:34.251809687 +0000 UTC m=+0.070310086 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git)
Dec 05 09:48:34 np0005546420.localdomain podman[278407]: 2025-12-05 09:48:34.266344452 +0000 UTC m=+0.084844841 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.buildah.version=1.33.7)
Dec 05 09:48:34 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:48:34 np0005546420.localdomain podman[278408]: 2025-12-05 09:48:34.310457875 +0000 UTC m=+0.123979072 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:48:34 np0005546420.localdomain podman[278408]: 2025-12-05 09:48:34.315869121 +0000 UTC m=+0.129390338 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:48:34 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
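The health_status/exec_died pairs above, each followed by "Deactivated successfully", are one complete podman healthcheck cycle: a transient systemd unit named after the container ID runs the check inside the container, the exec process exits, and systemd tears the unit down. The resulting state can be read directly rather than grepped from the journal; a minimal Python sketch, using the container name from the events above (the helper name is ours):

    import json
    import subprocess

    def container_health(name: str) -> str:
        """Query a container's health via `podman inspect` (JSON array output)."""
        raw = subprocess.run(
            ["podman", "inspect", name],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(raw)[0]["State"]
        # State.Health is only present when the container defines a healthcheck.
        return state.get("Health", {}).get("Status", "none")

    print(container_health("podman_exporter"))  # e.g. "healthy"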
Dec 05 09:48:34 np0005546420.localdomain python3.9[278406]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:34 np0005546420.localdomain sudo[278404]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:34 np0005546420.localdomain sudo[278557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-osbbtivazlvebojjazeaqhkjlpkrgyhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928114.5910804-3336-83806649620895/AnsiballZ_stat.py
Dec 05 09:48:34 np0005546420.localdomain sudo[278557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:34 np0005546420.localdomain python3.9[278559]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:48:35 np0005546420.localdomain sudo[278557]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:35 np0005546420.localdomain sudo[278669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okgmelytrjebzxksynoogqnznsymbnql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928115.3252585-3362-184601471146291/AnsiballZ_file.py
Dec 05 09:48:35 np0005546420.localdomain sudo[278669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:35 np0005546420.localdomain python3.9[278671]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:35 np0005546420.localdomain sudo[278669]: pam_unix(sudo:session): session closed for user root
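The stat-then-file pair above is Ansible's converge pattern on /var/lib/nova/compute_id: read the current state, then enforce owner nova, group nova, mode 0400 only where they differ. Outside Ansible the enforcement step is a few lines of Python; a sketch using the path and ownership from the invocation above (requires root and an existing nova user):

    import grp
    import os
    import pwd

    PATH = "/var/lib/nova/compute_id"
    uid = pwd.getpwnam("nova").pw_uid
    gid = grp.getgrnam("nova").gr_gid

    st = os.stat(PATH)
    if (st.st_uid, st.st_gid) != (uid, gid):
        os.chown(PATH, uid, gid)
    if (st.st_mode & 0o7777) != 0o400:
        os.chmod(PATH, 0o400)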
Dec 05 09:48:36 np0005546420.localdomain python3.9[278779]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:48:37 np0005546420.localdomain python3.9[278889]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:48:37 np0005546420.localdomain python3.9[278944]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:38 np0005546420.localdomain python3.9[279052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 09:48:38 np0005546420.localdomain python3.9[279107]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 09:48:38 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16076 DF PROTO=TCP SPT=41624 DPT=9102 SEQ=3105712715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACF27D90000000001030307) 
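The kernel DROPPING line above is a netfilter LOG rule on br-ex recording a dropped TCP SYN from 192.168.122.10 to port 9102 on this host. Its KEY=VALUE layout parses mechanically; a short Python sketch, with the sample abridged from the entry above and the parser name ours:

    import re

    SAMPLE = ("DROPPING: IN=br-ex OUT= SRC=192.168.122.10 DST=192.168.122.107 "
              "LEN=60 TTL=62 PROTO=TCP SPT=41624 DPT=9102")

    def parse_nflog(line: str) -> dict:
        # KEY=VALUE tokens; \S* keeps empty values such as OUT=.
        return dict(re.findall(r"(\w+)=(\S*)", line))

    fields = parse_nflog(SAMPLE)
    print(f"{fields['SRC']} -> {fields['DST']} dport {fields['DPT']}")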
Dec 05 09:48:39 np0005546420.localdomain sudo[279215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xbwkwcfhfwqbtsgttqjlimlbgbiketjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928119.0234263-3491-24864027126810/AnsiballZ_container_config_data.py
Dec 05 09:48:39 np0005546420.localdomain sudo[279215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:48:39 np0005546420.localdomain systemd[1]: tmp-crun.qB1Pq8.mount: Deactivated successfully.
Dec 05 09:48:39 np0005546420.localdomain podman[279218]: 2025-12-05 09:48:39.41341812 +0000 UTC m=+0.097963294 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:48:39 np0005546420.localdomain podman[279218]: 2025-12-05 09:48:39.45352016 +0000 UTC m=+0.138065354 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 05 09:48:39 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:48:39 np0005546420.localdomain python3.9[279217]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 05 09:48:39 np0005546420.localdomain sudo[279215]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:40 np0005546420.localdomain sudo[279350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vywgomtljuvhjnekbmeesljoaqfsilpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928119.9139013-3518-42747632384094/AnsiballZ_container_config_hash.py
Dec 05 09:48:40 np0005546420.localdomain sudo[279350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:40 np0005546420.localdomain python3.9[279352]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:48:40 np0005546420.localdomain sudo[279350]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:41 np0005546420.localdomain sudo[279460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbdefhjsrfraoysjhsktwexwaobufxpg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764928120.7758555-3548-11147659671881/AnsiballZ_edpm_container_manage.py
Dec 05 09:48:41 np0005546420.localdomain sudo[279460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:41 np0005546420.localdomain python3[279462]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:48:41 np0005546420.localdomain python3[279462]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
                                                                    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:31:10.62653219Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1211779450,
                                                                    "VirtualSize": 1211779450,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
                                                                              "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:53.072482982Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:02.761216507Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:03.785234187Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:17.194997182Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:24.14458279Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:30.048641643Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:09.707360362Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.208898452Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624465805Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624514176Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:18.661822382Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 05 09:48:41 np0005546420.localdomain sudo[279460]: pam_unix(sudo:session): session closed for user root
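The indented block above is `podman image inspect` output for the nova_compute image, pretty-printed by the edpm_container_manage module's PODMAN-CONTAINER-DEBUG logging and rendered by journald as message continuation lines. The same fields can be pulled straight from podman rather than from the journal; a sketch assuming the image is already present locally:

    import json
    import subprocess

    IMAGE = "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"

    # `podman image inspect` emits a JSON array with one object per image.
    raw = subprocess.run(
        ["podman", "image", "inspect", IMAGE],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(raw)[0]

    print(info["Digest"])                 # sha256:d6189c79...
    print(info["Config"]["User"])         # nova
    print(len(info["RootFS"]["Layers"]))  # 5 layers in the listing above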
Dec 05 09:48:42 np0005546420.localdomain sudo[279631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sghrnqtsgbbraczfgykzwowgandnfgfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928122.0338204-3572-40369906576519/AnsiballZ_stat.py
Dec 05 09:48:42 np0005546420.localdomain sudo[279631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:42 np0005546420.localdomain python3.9[279633]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:48:42 np0005546420.localdomain sudo[279631]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:43 np0005546420.localdomain sudo[279743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpkynwjtmvywuxnlbklcgjwdzaemuvtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928123.092976-3608-179309441779540/AnsiballZ_container_config_data.py
Dec 05 09:48:43 np0005546420.localdomain sudo[279743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:43 np0005546420.localdomain python3.9[279745]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 05 09:48:43 np0005546420.localdomain sudo[279743]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:44 np0005546420.localdomain sudo[279853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktarxlvxlmyohgtcxoexeygjogdkmgfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928123.895326-3635-13173897177161/AnsiballZ_container_config_hash.py
Dec 05 09:48:44 np0005546420.localdomain sudo[279853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:44 np0005546420.localdomain python3.9[279855]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 09:48:44 np0005546420.localdomain sudo[279853]: pam_unix(sudo:session): session closed for user root
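The ansible-container_config_hash invocation above scans the config volumes under /var/lib/config-data and records per-config hashes that feed change detection for the managed containers. The module's exact algorithm is not shown in this log; purely as an illustration of the technique, fingerprinting a config tree can look like the following (the function name and the choice of sha256 are our assumptions, not the module's code):

    import hashlib
    from pathlib import Path

    def config_tree_hash(root: str) -> str:
        """Toy fingerprint of a config directory: feed relative paths and file
        contents to sha256 in sorted order, so any edit, addition, or removal
        changes the digest. Illustrative only."""
        h = hashlib.sha256()
        base = Path(root)
        for path in sorted(base.rglob("*")):
            if path.is_file():
                h.update(str(path.relative_to(base)).encode())
                h.update(path.read_bytes())
        return h.hexdigest()

    print(config_tree_hash("/var/lib/config-data"))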
Dec 05 09:48:45 np0005546420.localdomain sudo[279963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dkfefdzruqurjnulchfbrdpwqrteatfn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764928124.809088-3665-82592898619177/AnsiballZ_edpm_container_manage.py
Dec 05 09:48:45 np0005546420.localdomain sudo[279963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:45 np0005546420.localdomain python3[279965]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 09:48:45 np0005546420.localdomain python3[279965]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
                                                                    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:31:10.62653219Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1211779450,
                                                                    "VirtualSize": 1211779450,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
                                                                              "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:53.072482982Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:02.761216507Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:03.785234187Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:17.194997182Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:24.14458279Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:30.048641643Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:09.707360362Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.208898452Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624465805Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624514176Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:18.661822382Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 05 09:48:45 np0005546420.localdomain sudo[279963]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:46 np0005546420.localdomain sudo[280134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nymdwsitnorlqbltnqvuwwutjhnqlhma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928125.9937541-3689-48773032434507/AnsiballZ_stat.py
Dec 05 09:48:46 np0005546420.localdomain sudo[280134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:46 np0005546420.localdomain python3.9[280136]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:48:46 np0005546420.localdomain sudo[280134]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:47 np0005546420.localdomain sudo[280246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvshsvcsarkrbfgcllxqwpjcmabvbmkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928126.7590008-3716-242655445939315/AnsiballZ_file.py
Dec 05 09:48:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:48:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:48:47 np0005546420.localdomain sudo[280246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:48:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:48:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:48:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:48:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16750 "" "Go-http-client/1.1"
Dec 05 09:48:47 np0005546420.localdomain podman[280249]: 2025-12-05 09:48:47.311263024 +0000 UTC m=+0.095657473 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:48:47 np0005546420.localdomain podman[280249]: 2025-12-05 09:48:47.351496988 +0000 UTC m=+0.135891467 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 09:48:47 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:48:47 np0005546420.localdomain python3.9[280248]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:47 np0005546420.localdomain sudo[280246]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:47 np0005546420.localdomain sudo[280375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntisxwuyxmotyjefnzonnonmvjpzrjnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928127.4787407-3716-181848652333435/AnsiballZ_copy.py
Dec 05 09:48:47 np0005546420.localdomain sudo[280375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:48 np0005546420.localdomain python3.9[280377]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928127.4787407-3716-181848652333435/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:48:48 np0005546420.localdomain sudo[280375]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:48 np0005546420.localdomain sudo[280430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-diobydcvwjvwtapnmzzjupolkjqnqqyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928127.4787407-3716-181848652333435/AnsiballZ_systemd.py
Dec 05 09:48:48 np0005546420.localdomain sudo[280430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:48 np0005546420.localdomain python3.9[280432]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:48:48 np0005546420.localdomain sudo[280430]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:48:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:48:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:48:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:48:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:48:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:48:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:48:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:48:48 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 09:48:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:48:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:48:48 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 09:48:50 np0005546420.localdomain python3.9[280542]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:48:51 np0005546420.localdomain python3.9[280650]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:48:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:48:52 np0005546420.localdomain systemd[1]: tmp-crun.kcYG8i.mount: Deactivated successfully.
Dec 05 09:48:52 np0005546420.localdomain podman[280759]: 2025-12-05 09:48:52.501100053 +0000 UTC m=+0.077943030 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:48:52 np0005546420.localdomain podman[280759]: 2025-12-05 09:48:52.530896667 +0000 UTC m=+0.107739644 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent)
Dec 05 09:48:52 np0005546420.localdomain python3.9[280758]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 09:48:52 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:48:53 np0005546420.localdomain sudo[280884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-epwskoravmlnilydqlmetpdcsymbjevo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928132.898683-3884-212194220639468/AnsiballZ_podman_container.py
Dec 05 09:48:53 np0005546420.localdomain sudo[280884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:53 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35942 DF PROTO=TCP SPT=51348 DPT=9102 SEQ=3984928611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACF60590000000001030307) 
Dec 05 09:48:53 np0005546420.localdomain python3.9[280886]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 05 09:48:53 np0005546420.localdomain sudo[280884]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:53 np0005546420.localdomain systemd-journald[48245]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 119.8 (399 of 333 items), suggesting rotation.
Dec 05 09:48:53 np0005546420.localdomain systemd-journald[48245]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 05 09:48:53 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:48:53 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:48:54 np0005546420.localdomain sudo[281017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjykqspouvedpenrbtqfxgtvtzrmnowz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928134.0615063-3908-211381816228584/AnsiballZ_systemd.py
Dec 05 09:48:54 np0005546420.localdomain sudo[281017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:48:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:48:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35943 DF PROTO=TCP SPT=51348 DPT=9102 SEQ=3984928611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACF645A0000000001030307) 
Dec 05 09:48:54 np0005546420.localdomain podman[281020]: 2025-12-05 09:48:54.467134867 +0000 UTC m=+0.092708733 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:48:54 np0005546420.localdomain podman[281020]: 2025-12-05 09:48:54.479394212 +0000 UTC m=+0.104968148 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:48:54 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
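[Annotation] The Started / health_status / exec_died / Deactivated sequence above is one complete podman health-check cycle: a systemd timer starts a transient unit whose whole job is the invocation named in its description. A minimal sketch of that invocation (container ID taken from the log itself):

    import subprocess

    # Equivalent of the transient unit "/usr/bin/podman healthcheck run <id>"
    # that systemd started above; podman executes the container's configured
    # healthcheck test and journals the health_status/exec_died events.
    cid = "cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a"
    subprocess.run(["podman", "healthcheck", "run", cid], check=False)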
Dec 05 09:48:54 np0005546420.localdomain python3.9[281019]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
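[Annotation] Stripped of the daemon_reload/enabled/masked bookkeeping visible in the invocation, the ansible.builtin.systemd call above reduces to a plain service restart; a hedged simplification:

    import subprocess

    # What state=restarted amounts to on the managed node (simplified; the
    # real module also inspects unit state before acting).
    subprocess.run(["systemctl", "restart", "edpm_nova_compute.service"],
                   check=True)

The Stopping/Starting/Started messages that follow are systemd carrying out exactly that restart.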
Dec 05 09:48:54 np0005546420.localdomain systemd[1]: Stopping nova_compute container...
Dec 05 09:48:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16077 DF PROTO=TCP SPT=41624 DPT=9102 SEQ=3105712715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACF67D90000000001030307) 
Dec 05 09:48:56 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35944 DF PROTO=TCP SPT=51348 DPT=9102 SEQ=3984928611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACF6C590000000001030307) 
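[Annotation] The recurring kernel DROPPING entries come from a netfilter LOG rule on br-ex (the "DROPPING: " prefix is presumably that rule's log-prefix). Note that SPT=51348 keeps retransmitting the same SYN (SEQ=3984928611) toward port 9102, so 192.168.122.10 repeatedly fails to reach that port on this host. A throwaway parser for these lines, assuming only the key=value layout visible above:

    import re

    FIELD = re.compile(r"(\w+)=(\S+)")

    def parse_drop(line: str) -> dict:
        # Keep only the packet metadata after the log prefix; keys with
        # empty values (e.g. "OUT=") are simply skipped by the regex.
        _, _, rest = line.partition("DROPPING:")
        return dict(FIELD.findall(rest))

    sample = ("DROPPING: IN=br-ex OUT= SRC=192.168.122.10 "
              "DST=192.168.122.107 PROTO=TCP SPT=51348 DPT=9102")
    f = parse_drop(sample)
    print(f["SRC"], "->", f["DST"], "dport", f["DPT"])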
Dec 05 09:48:56 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:56.629 230124 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Dec 05 09:48:56 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:56.631 230124 DEBUG oslo_concurrency.lockutils [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 09:48:56 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:56.632 230124 DEBUG oslo_concurrency.lockutils [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 09:48:56 np0005546420.localdomain nova_compute[230120]: 2025-12-05 09:48:56.633 230124 DEBUG oslo_concurrency.lockutils [None req-d0ea1ff6-d79a-44e9-bbe4-909aebd5c1ae - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 09:48:57 np0005546420.localdomain virtqemud[229316]: End of file while reading data: Input/output error
Dec 05 09:48:57 np0005546420.localdomain systemd[1]: libpod-2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86.scope: Deactivated successfully.
Dec 05 09:48:57 np0005546420.localdomain systemd[1]: libpod-2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86.scope: Consumed 17.273s CPU time.
Dec 05 09:48:57 np0005546420.localdomain podman[281045]: 2025-12-05 09:48:57.222563106 +0000 UTC m=+2.491568183 container died 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 09:48:57 np0005546420.localdomain systemd[1]: tmp-crun.E6SGSD.mount: Deactivated successfully.
Dec 05 09:48:57 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58784 DF PROTO=TCP SPT=32962 DPT=9102 SEQ=518669569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACF6FDA0000000001030307) 
Dec 05 09:48:57 np0005546420.localdomain podman[281045]: 2025-12-05 09:48:57.418207753 +0000 UTC m=+2.687212810 container cleanup 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:48:57 np0005546420.localdomain podman[281045]: nova_compute
Dec 05 09:48:57 np0005546420.localdomain podman[281080]: error opening file `/run/crun/2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86/status`: No such file or directory
Dec 05 09:48:57 np0005546420.localdomain podman[281069]: 2025-12-05 09:48:57.52642263 +0000 UTC m=+0.074025361 container cleanup 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 09:48:57 np0005546420.localdomain podman[281069]: nova_compute
Dec 05 09:48:57 np0005546420.localdomain systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 05 09:48:57 np0005546420.localdomain systemd[1]: Stopped nova_compute container.
Dec 05 09:48:57 np0005546420.localdomain systemd[1]: Starting nova_compute container...
Dec 05 09:48:57 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:48:57 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:57 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:57 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:57 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 09:48:57 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0babf0ef187f4f775d0ef7e95650edb998bf40e2f12d7c6c8772957d851517/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
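[Annotation] The xfs notices above are informational: 0x7fffffff is the 32-bit time_t limit, i.e. the inode timestamp format on these volumes runs out in January 2038. Quick check:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch -- the limit the kernel reports.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00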
Dec 05 09:48:57 np0005546420.localdomain podman[281083]: 2025-12-05 09:48:57.69021896 +0000 UTC m=+0.125641191 container init 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute)
Dec 05 09:48:57 np0005546420.localdomain podman[281083]: 2025-12-05 09:48:57.698193615 +0000 UTC m=+0.133615846 container start 2058f7a4b1327c6cb4780f32e7a8a4e4b9fd08f08dbcfce2a55be21e7ea43b86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=nova_compute)
Dec 05 09:48:57 np0005546420.localdomain podman[281083]: nova_compute
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: + sudo -E kolla_set_configs
Dec 05 09:48:57 np0005546420.localdomain systemd[1]: Started nova_compute container.
Dec 05 09:48:57 np0005546420.localdomain sudo[281017]: pam_unix(sudo:session): session closed for user root
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Validating config file
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying service configuration files
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Deleting /etc/ceph
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Creating directory /etc/ceph
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/ceph
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Writing out command to execute
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: ++ cat /run_command
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: + CMD=nova-compute
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: + ARGS=
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: + sudo kolla_copy_cacerts
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: + [[ ! -n '' ]]
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: + . kolla_extend_start
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: Running command: 'nova-compute'
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: + echo 'Running command: '\''nova-compute'\'''
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: + umask 0022
Dec 05 09:48:57 np0005546420.localdomain nova_compute[281099]: + exec nova-compute
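[Annotation] Everything between kolla_set_configs and the exec above is driven by the config.json loaded at the start: for each config_files entry the tool deletes the destination, copies the source in, and fixes ownership/mode, then writes the container command to /run_command for kolla_start to cat and exec. A simplified sketch of that loop (the single entry shown is illustrative, not the node's actual config.json):

    import os
    import shutil

    config = {
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},
        ],
    }

    for entry in config["config_files"]:
        dest = entry["dest"]
        if os.path.exists(dest):
            print(f"Deleting {dest}")              # mirrors the INFO lines above
            os.remove(dest)
        print(f"Copying {entry['source']} to {dest}")
        shutil.copy(entry["source"], dest)
        print(f"Setting permission for {dest}")
        os.chmod(dest, int(entry["perm"], 8))

    with open("/run_command", "w") as f:           # later read by kolla_start
        f.write(config["command"])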
Dec 05 09:48:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:48:59 np0005546420.localdomain podman[281129]: 2025-12-05 09:48:59.510413793 +0000 UTC m=+0.083337496 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 09:48:59 np0005546420.localdomain podman[281129]: 2025-12-05 09:48:59.523179345 +0000 UTC m=+0.096103058 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 09:48:59 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:48:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:48:59.658 281103 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 09:48:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:48:59.658 281103 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 09:48:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:48:59.658 281103 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 09:48:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:48:59.659 281103 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 05 09:48:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:48:59.838 281103 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:48:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:48:59.861 281103 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:48:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:48:59.862 281103 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
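[Annotation] The failed grep is a capability probe, not an error: the iSCSI connector (os-brick, by the look of the processutils calls) greps the iscsiadm binary for the node.session.scan option to decide whether manual session scanning is supported, and rc=1 means the literal was absent. That is expected here, since kolla_set_configs replaced /usr/sbin/iscsiadm (which /sbin points at on EL9) with the run-on-host wrapper a couple of seconds earlier. Standalone equivalent:

    import subprocess

    # grep -F exits 0 when the literal occurs in the file, 1 when it does
    # not (the case logged above), so manual scan support is disabled.
    rc = subprocess.call(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        stdout=subprocess.DEVNULL,
    )
    print("manual iSCSI scan supported:", rc == 0)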
Dec 05 09:49:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35945 DF PROTO=TCP SPT=51348 DPT=9102 SEQ=3984928611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACF7C190000000001030307) 
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.581 281103 INFO nova.virt.driver [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.705 281103 INFO nova.compute.provider_config [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.719 281103 DEBUG oslo_concurrency.lockutils [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.719 281103 DEBUG oslo_concurrency.lockutils [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.719 281103 DEBUG oslo_concurrency.lockutils [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
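[Annotation] From "Full set of CONF" onward, oslo.service has nova-compute dump every registered option at DEBUG through oslo.config's log_opt_values(); the long run of cfg.py:2602 lines that follows is one option per line. Minimal reproduction of the mechanism (nova registers hundreds of options; one suffices here):

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.BoolOpt("debug", default=False)])
    CONF(args=[])                          # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)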
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.719 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.720 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.720 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.720 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.720 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.720 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.720 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.720 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.721 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.721 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.721 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.721 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.721 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.722 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.722 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.722 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.722 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.722 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.722 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.722 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] console_host                   = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.723 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.723 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.723 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.723 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.723 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.723 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.723 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.724 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.724 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.724 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.724 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.724 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.724 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.725 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.725 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.725 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.725 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.725 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.725 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] host                           = np0005546420.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.725 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.726 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.726 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.726 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.726 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.726 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.726 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.726 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.727 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.727 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.727 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.727 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.727 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.727 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.727 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.728 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.728 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.728 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.728 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.728 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.728 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.728 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.729 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.729 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.729 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.729 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.729 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.729 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.729 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.729 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.730 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.730 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.730 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.730 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.730 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.730 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.730 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.731 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.731 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.731 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.731 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.731 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] my_block_storage_ip            = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.731 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] my_ip                          = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.731 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.732 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.732 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.732 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.732 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.732 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.732 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.732 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.733 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.733 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.733 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.733 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.733 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.733 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.733 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.733 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.734 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.734 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.734 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.734 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.734 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.734 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.734 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.734 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.735 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.735 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.735 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.735 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.735 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.735 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.735 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.735 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.736 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.736 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.736 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.736 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.736 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.736 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.736 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.736 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.737 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.737 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.737 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.737 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.737 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.737 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.737 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.737 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.738 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.738 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.738 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.738 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.738 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.738 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.738 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.738 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.739 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.739 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.739 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.739 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.739 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.739 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.739 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.739 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.740 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.740 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.740 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.740 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.740 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.740 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.740 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.741 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.741 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.741 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.741 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.741 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.741 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.741 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.742 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.742 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.742 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.742 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.742 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.742 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.742 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.742 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.743 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.743 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.743 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.743 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.743 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.743 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.743 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.744 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.744 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.744 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.744 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.744 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.744 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.744 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.745 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.745 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.745 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.745 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.745 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.745 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.745 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.745 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.746 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.746 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.746 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.746 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.746 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.746 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.746 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.747 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.747 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.747 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.747 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.747 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.747 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.747 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.748 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.748 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.748 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.748 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.748 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.748 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.748 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.748 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.749 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.749 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.749 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.749 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.749 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.749 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.749 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.750 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.750 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.750 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.750 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.750 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.750 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.750 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.750 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.751 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.751 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.751 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.751 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.751 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.751 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.751 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.751 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.752 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.752 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.752 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.752 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.752 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.752 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.753 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.753 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.753 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.753 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.753 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.753 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.753 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.753 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.754 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.754 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.754 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.754 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.754 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.754 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.754 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.754 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.755 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.755 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.755 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.755 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.755 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.755 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.756 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.756 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.756 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.756 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.756 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.756 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.756 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.757 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.757 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.757 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.757 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.757 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.757 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.757 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.758 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.758 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.758 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.758 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.758 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.758 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.758 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.759 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.759 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.759 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.759 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.759 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.759 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.759 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.760 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.760 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.760 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.760 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.760 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.760 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.760 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.760 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.761 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.761 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.761 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.761 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.761 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.761 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.761 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.762 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.762 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.762 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.762 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.762 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.762 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.763 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.763 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.763 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.763 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.763 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.763 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.763 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.764 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.764 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.764 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.764 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.764 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.764 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.764 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.765 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.765 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.765 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.765 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.765 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.765 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.765 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.765 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.766 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.766 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.766 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.766 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.766 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.766 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.767 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.767 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.767 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.767 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.767 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.767 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.767 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.768 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.768 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.768 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.768 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.768 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.768 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.768 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.768 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.769 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.769 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.769 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.769 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.769 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.770 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.770 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.770 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.770 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.770 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.770 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.770 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.771 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.771 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.771 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.771 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.771 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.771 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.771 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.772 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.772 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.772 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.772 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.772 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.772 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.772 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.773 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.773 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.773 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.773 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.773 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.773 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.773 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.774 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.774 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.774 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.774 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.774 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.774 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.774 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.775 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.775 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.775 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.775 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.775 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.775 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.775 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.776 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.776 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.776 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.776 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.776 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.776 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.776 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.777 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.777 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.777 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.777 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.777 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.777 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.777 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.777 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.778 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.778 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.778 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.778 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.778 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.778 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.778 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.779 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.779 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.779 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.779 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.779 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.779 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.779 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.780 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.780 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.780 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.780 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.780 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.780 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.780 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.781 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.781 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.781 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.781 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.781 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.781 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.781 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.782 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.782 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.782 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.782 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.782 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.782 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.782 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.783 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.783 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.783 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.783 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.783 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.783 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.783 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.784 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.784 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.784 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.784 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.784 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.784 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.784 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.785 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.785 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.785 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.785 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.785 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.785 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.785 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.786 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.786 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.786 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.786 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.786 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.786 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.786 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.787 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.787 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.787 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.787 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.788 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.788 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.789 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.789 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.789 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.789 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.789 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.789 281103 WARNING oslo_config.cfg [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: and ``live_migration_inbound_addr`` respectively.
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: ).  Its value may be silently ignored in the future.
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.790 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
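[annotation] The WARNING block above is emitted by oslo.config itself, not by nova code: the option is registered with deprecated_for_removal=True, and the library logs the notice whenever the deprecated option is still set and read back (reading each value is exactly what log_opt_values() does). A minimal sketch of the mechanism, assuming an illustrative nova.conf path; the registration shown is a simplification of nova's actual option definition:

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt(
            'live_migration_uri',
            deprecated_for_removal=True,
            deprecated_reason='live_migration_uri is deprecated for removal '
                              'in favor of live_migration_scheme and '
                              'live_migration_inbound_addr.',
            help='Override the libvirt live migration target URI.'),
    ], group='libvirt')

    # Supplying a value in a config file and then reading it back triggers
    # the "Deprecated: Option ... is deprecated for removal" WARNING above.
    CONF(['--config-file', 'nova.conf'])
    print(CONF.libvirt.live_migration_uri)

The replacement the message names is the pair already visible in this dump: live_migration_scheme together with live_migration_inbound_addr.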
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.790 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.790 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.790 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.790 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.790 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.791 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.791 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.791 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.791 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.791 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.791 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.791 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.792 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.792 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.792 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.792 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.792 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.792 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.rbd_secret_uuid        = 79feddb1-4bfc-557f-83b9-0d57c9f66c1b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.792 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.793 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.793 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.793 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.793 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.793 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.793 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.793 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.794 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.794 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.794 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.794 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.794 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.794 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.794 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.795 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.795 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.795 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.795 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.795 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.795 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.795 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.796 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.796 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.796 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.796 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.796 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.796 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.797 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.797 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.797 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.797 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.797 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
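[annotation] Read as a group, the libvirt.* values above describe a KVM host whose ephemeral disks live in Ceph RBD (images_type = rbd, pool vms, client user openstack) and whose live migrations run over SSH as the nova user. A sketch that round-trips the implied nova.conf fragment through oslo.config, assuming an illustrative file name and a representative subset of the options:

    # Sketch: the [libvirt] fragment implied by the values logged above,
    # written out and read back with oslo.config. File name is illustrative.
    from oslo_config import cfg

    NOVA_CONF = """
    [libvirt]
    virt_type = kvm
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_glance_store_name = default_backend
    rbd_user = openstack
    rbd_secret_uuid = 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
    live_migration_permit_auto_converge = True
    live_migration_permit_post_copy = True
    volume_use_multipath = True
    """

    with open('nova-libvirt-sketch.conf', 'w') as f:
        f.write(NOVA_CONF)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('virt_type'),
        cfg.StrOpt('images_type'),
        cfg.StrOpt('images_rbd_pool'),
        cfg.StrOpt('rbd_user'),
        cfg.StrOpt('rbd_secret_uuid'),
        cfg.BoolOpt('volume_use_multipath'),
    ], group='libvirt')
    CONF(['--config-file', 'nova-libvirt-sketch.conf'])

    assert CONF.libvirt.images_type == 'rbd'
    assert CONF.libvirt.volume_use_multipath is True

oslo.config silently ignores file entries for options the process never registered, which is why a shared nova.conf can carry settings for several services without breaking any one of them.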
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.797 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.797 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.798 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.798 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.798 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.798 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.798 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.798 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.798 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.799 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.799 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.799 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.799 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.799 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.799 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.799 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.800 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.800 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.800 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.800 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.800 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.800 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.800 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.801 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.801 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.801 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.801 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.801 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
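[annotation] neutron.metadata_proxy_shared_secret prints as **** above (as does placement.password further down): ConfigOpts.log_opt_values(), the method producing every line of this dump, masks any option registered with secret=True. A minimal sketch of that masking, with a placeholder secret value:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('demo')

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        # 's3cr3t' is a placeholder; the real value comes from nova.conf.
        cfg.StrOpt('metadata_proxy_shared_secret', secret=True, default='s3cr3t'),
        cfg.StrOpt('ovs_bridge', default='br-int'),
    ], group='neutron')
    CONF([])  # defaults only; no config file needed for the demo

    CONF.log_opt_values(LOG, logging.DEBUG)
    # Logs:
    #   neutron.metadata_proxy_shared_secret = ****
    #   neutron.ovs_bridge                   = br-int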
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.801 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.801 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.802 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.802 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.802 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.802 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.802 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.802 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.802 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.803 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.803 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.803 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.803 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.803 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.803 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.803 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.804 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.804 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.804 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.804 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.804 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.804 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.804 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.805 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.805 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.805 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.805 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.805 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.805 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.805 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.806 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.806 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.806 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.806 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.806 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.806 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.806 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.807 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.807 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.807 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.807 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.807 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.807 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.807 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.807 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.808 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.808 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.808 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.808 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.808 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.808 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.808 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.809 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.809 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.809 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.809 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.809 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.809 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
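[annotation] The quota defaults above (instances = 10, cores = 20, ram = 51200 MiB) bound each project independently; whichever limit is exhausted first wins. A flavor with 2 vCPUs and 5120 MiB hits all three limits at exactly 10 instances, while a 4-vCPU flavor hits the 20-core cap at 5. A quick check of that arithmetic:

    # Per-project quota arithmetic implied by the values logged above.
    QUOTA = {'instances': 10, 'cores': 20, 'ram': 51200}  # ram in MiB

    def max_instances(vcpus: int, ram_mib: int) -> int:
        """How many instances of a flavor fit before some quota binds."""
        return min(QUOTA['instances'],
                   QUOTA['cores'] // vcpus,
                   QUOTA['ram'] // ram_mib)

    print(max_instances(2, 5120))  # 10 -> all three limits bind together
    print(max_instances(4, 8192))  # 5  -> cores is the binding limit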
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.809 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.810 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.810 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.810 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.810 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.810 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.810 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.811 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.811 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.811 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.811 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.811 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.811 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.812 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.812 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.812 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.812 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.812 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.812 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.813 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.813 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.813 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.813 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.813 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.813 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.813 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.814 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.814 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.814 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.814 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.814 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.814 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.814 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.815 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.815 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.815 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.815 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
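[annotation] The scheduler.* and filter_scheduler.* settings above configure nova's filter-then-weigh pipeline: enabled_filters prune hosts that cannot take the instance, the surviving hosts are scored as a multiplier-scaled sum of weigher outputs (io_ops_weight_multiplier = -1.0 penalizes busy hosts), and host_subset_size = 1 means the single top-scoring host is chosen with no random tie-breaking. A generic sketch of that selection scheme; the host data and weigher functions are illustrative, and nova additionally normalizes each weigher across hosts before applying its multiplier:

    # Sketch of the filter-then-weigh selection the options above configure.
    hosts = [
        {'name': 'np0005546420', 'free_ram': 32768, 'free_vcpus': 12, 'io_ops': 2},
        {'name': 'np0005546421', 'free_ram': 16384, 'free_vcpus': 20, 'io_ops': 7},
    ]

    def passes_filters(host, req_ram=4096, req_vcpus=2, max_io_ops=8):
        # Stand-in for ComputeFilter, ComputeCapabilitiesFilter, etc.;
        # max_io_ops mirrors filter_scheduler.max_io_ops_per_host above.
        return (host['free_ram'] >= req_ram and
                host['free_vcpus'] >= req_vcpus and
                host['io_ops'] < max_io_ops)

    MULTIPLIERS = {'ram': 1.0, 'cpu': 1.0, 'io_ops': -1.0}  # from the dump above

    def weigh(host):
        return (MULTIPLIERS['ram'] * host['free_ram'] +
                MULTIPLIERS['cpu'] * host['free_vcpus'] +
                MULTIPLIERS['io_ops'] * host['io_ops'])

    candidates = sorted((h for h in hosts if passes_filters(h)),
                        key=weigh, reverse=True)
    best = candidates[:1]  # host_subset_size = 1
    print([h['name'] for h in best])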
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.815 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.815 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.815 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.816 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.816 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.816 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.816 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.816 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.816 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.817 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.817 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.817 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.817 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.817 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.817 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.817 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.818 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.818 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.818 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.818 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.818 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.818 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.818 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.819 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.819 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.819 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.819 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.819 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.819 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.819 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.820 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.820 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.820 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.820 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.820 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.820 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.820 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.821 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.821 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.821 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.821 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.821 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.821 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.821 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.822 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.822 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.822 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.822 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.822 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.822 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.823 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.823 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.823 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.823 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.823 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.823 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.823 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.823 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.824 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.824 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.824 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.824 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.824 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.824 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.824 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.825 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.825 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.825 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.825 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.825 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.825 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.825 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.826 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.826 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.826 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.826 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.826 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.826 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.827 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.827 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.827 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.827 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.827 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.827 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.827 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.827 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.828 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.828 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.828 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.828 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.828 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.828 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.828 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.829 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.829 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.829 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.829 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.829 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.829 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.829 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.829 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.830 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.830 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.830 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.830 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.830 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.830 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.830 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.831 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.831 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.831 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.831 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.831 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.831 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.831 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.832 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.832 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.832 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.832 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.832 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.832 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.833 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.833 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.833 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.833 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.833 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.833 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.833 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.833 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.834 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.834 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.834 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.834 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.834 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.834 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.834 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.835 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.835 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.835 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.835 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.835 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.835 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.835 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.836 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.836 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.836 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.836 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.836 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.836 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.836 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.837 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.837 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.837 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.837 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.837 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.837 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.837 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.838 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.838 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.838 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.838 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.838 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.838 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.838 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.839 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.839 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.839 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.839 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.839 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.839 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.839 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.840 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.840 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.840 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.840 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.840 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.840 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.840 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.841 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.841 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.841 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.841 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.841 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.841 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.841 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.842 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.842 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.842 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.842 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.842 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.842 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.842 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.843 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.843 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.843 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.843 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.843 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.843 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.843 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.843 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.844 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.844 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.844 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.844 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.844 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.844 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.845 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.845 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.845 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.845 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.845 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.845 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.845 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.846 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.846 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.846 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.846 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.846 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.846 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.846 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.847 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.847 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.847 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.847 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.847 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.847 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.847 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.848 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.848 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.848 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.848 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.848 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.848 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.848 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.849 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.849 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.849 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.849 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.849 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.849 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.849 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.850 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.850 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.850 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.850 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.850 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.850 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.850 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.850 281103 DEBUG oslo_service.service [None req-ad8486fb-86b2-4b84-9233-11e425ae8558 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.852 281103 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.867 281103 INFO nova.virt.node [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Determined node identity 2850b2c4-8d07-40ab-9d82-672172ca70fc from /var/lib/nova/compute_id
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.868 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.869 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.869 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.869 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.880 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd001352a00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.883 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd001352a00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.884 281103 INFO nova.virt.libvirt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Connection event '1' reason 'None'
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.889 281103 INFO nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Libvirt host capabilities <capabilities>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <host>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <uuid>38a014e5-f211-4fa1-8868-c362af7c3bc6</uuid>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <cpu>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <arch>x86_64</arch>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model>EPYC-Rome-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <vendor>AMD</vendor>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <microcode version='16777317'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <signature family='23' model='49' stepping='0'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='x2apic'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='tsc-deadline'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='osxsave'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='hypervisor'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='tsc_adjust'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='spec-ctrl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='stibp'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='arch-capabilities'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='ssbd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='cmp_legacy'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='topoext'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='virt-ssbd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='lbrv'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='tsc-scale'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='vmcb-clean'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='pause-filter'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='pfthreshold'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='svme-addr-chk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='rdctl-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='skip-l1dfl-vmentry'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='mds-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature name='pschange-mc-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <pages unit='KiB' size='4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <pages unit='KiB' size='2048'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <pages unit='KiB' size='1048576'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </cpu>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <power_management>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <suspend_mem/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <suspend_disk/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <suspend_hybrid/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </power_management>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <iommu support='no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <migration_features>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <live/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <uri_transports>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <uri_transport>tcp</uri_transport>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <uri_transport>rdma</uri_transport>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </uri_transports>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </migration_features>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <topology>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <cells num='1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <cell id='0'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:           <memory unit='KiB'>16116612</memory>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:           <pages unit='KiB' size='4'>4029153</pages>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:           <pages unit='KiB' size='2048'>0</pages>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:           <distances>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:             <sibling id='0' value='10'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:           </distances>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:           <cpus num='8'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:           </cpus>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         </cell>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </cells>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </topology>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <cache>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </cache>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <secmodel>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model>selinux</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <doi>0</doi>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </secmodel>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <secmodel>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model>dac</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <doi>0</doi>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </secmodel>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   </host>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <guest>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <os_type>hvm</os_type>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <arch name='i686'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <wordsize>32</wordsize>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <domain type='qemu'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <domain type='kvm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </arch>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <features>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <pae/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <nonpae/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <acpi default='on' toggle='yes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <apic default='on' toggle='no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <cpuselection/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <deviceboot/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <disksnapshot default='on' toggle='no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <externalSnapshot/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </features>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   </guest>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <guest>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <os_type>hvm</os_type>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <arch name='x86_64'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <wordsize>64</wordsize>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <domain type='qemu'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <domain type='kvm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </arch>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <features>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <acpi default='on' toggle='yes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <apic default='on' toggle='no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <cpuselection/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <deviceboot/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <disksnapshot default='on' toggle='no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <externalSnapshot/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </features>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   </guest>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: </capabilities>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.893 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.896 281103 DEBUG nova.virt.libvirt.volume.mount [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.897 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: <domainCapabilities>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <domain>kvm</domain>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <arch>i686</arch>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <vcpu max='240'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <iothreads supported='yes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <os supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <enum name='firmware'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <loader supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>rom</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>pflash</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='readonly'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>yes</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>no</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='secure'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>no</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </loader>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   </os>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <cpu>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>on</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>off</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <mode name='maximum' supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='maximumMigratable'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>on</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>off</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <mode name='host-model' supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <vendor>AMD</vendor>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='x2apic'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='stibp'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='ssbd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='succor'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='ibrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='lbrv'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <mode name='custom' supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Dhyana-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Genoa'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='auto-ibrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='auto-ibrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-128'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-256'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-512'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-noTSX'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='KnightsMill'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4fmaps'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4vnniw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512er'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512pf'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='KnightsMill-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4fmaps'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4vnniw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512er'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512pf'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G5'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tbm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tbm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SierraForest'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ne-convert'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cmpccxadd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SierraForest-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ne-convert'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cmpccxadd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='athlon'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='athlon-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='core2duo'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='core2duo-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='coreduo'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='coreduo-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='n270'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='n270-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='phenom'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='phenom-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   </cpu>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <memoryBacking supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <enum name='sourceType'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <value>file</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <value>anonymous</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <value>memfd</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   </memoryBacking>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <devices>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <disk supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='diskDevice'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>disk</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>cdrom</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>floppy</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>lun</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='bus'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>ide</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>fdc</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>scsi</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>sata</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>virtio-transitional</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>virtio-non-transitional</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </disk>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <graphics supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>vnc</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>egl-headless</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>dbus</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </graphics>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <video supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='modelType'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>vga</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>cirrus</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>none</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>bochs</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>ramfb</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </video>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <hostdev supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='mode'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>subsystem</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='startupPolicy'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>default</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>mandatory</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>requisite</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>optional</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='subsysType'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>pci</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>scsi</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='capsType'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='pciBackend'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </hostdev>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <rng supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>virtio-transitional</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>virtio-non-transitional</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>random</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>egd</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>builtin</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </rng>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <filesystem supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='driverType'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>path</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>handle</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>virtiofs</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </filesystem>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <tpm supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>tpm-tis</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>tpm-crb</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>emulator</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>external</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='backendVersion'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>2.0</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </tpm>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <redirdev supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='bus'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </redirdev>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <channel supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>pty</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>unix</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </channel>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <crypto supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='model'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>qemu</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>builtin</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </crypto>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <interface supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='backendType'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>default</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>passt</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </interface>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <panic supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>isa</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>hyperv</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </panic>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <console supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>null</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>vc</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>pty</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>dev</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>file</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>pipe</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>stdio</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>udp</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>tcp</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>unix</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>qemu-vdagent</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>dbus</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </console>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   </devices>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <features>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <gic supported='no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <vmcoreinfo supported='yes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <genid supported='yes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <backingStoreInput supported='yes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <backup supported='yes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <async-teardown supported='yes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <ps2 supported='yes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <sev supported='no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <sgx supported='no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <hyperv supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='features'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>relaxed</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>vapic</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>spinlocks</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>vpindex</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>runtime</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>synic</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>stimer</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>reset</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>vendor_id</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>frequencies</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>reenlightenment</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>tlbflush</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>ipi</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>avic</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>emsr_bitmap</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>xmm_input</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <defaults>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <spinlocks>4095</spinlocks>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <stimer_direct>on</stimer_direct>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </defaults>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </hyperv>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <launchSecurity supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='sectype'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>tdx</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </launchSecurity>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   </features>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: </domainCapabilities>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.902 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]: <domainCapabilities>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <domain>kvm</domain>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <arch>i686</arch>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <vcpu max='1024'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <iothreads supported='yes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <os supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <enum name='firmware'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <loader supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>rom</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>pflash</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='readonly'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>yes</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>no</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='secure'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>no</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </loader>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   </os>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:   <cpu>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>on</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>off</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <mode name='maximum' supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <enum name='maximumMigratable'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>on</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <value>off</value>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <mode name='host-model' supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <vendor>AMD</vendor>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='x2apic'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='stibp'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='ssbd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='succor'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='ibrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='lbrv'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:     <mode name='custom' supported='yes'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Dhyana-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Genoa'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='auto-ibrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='auto-ibrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-128'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-256'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-512'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-noTSX'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='KnightsMill'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4fmaps'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4vnniw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512er'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512pf'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='KnightsMill-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4fmaps'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4vnniw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512er'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512pf'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G5'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tbm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tbm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SierraForest'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ne-convert'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cmpccxadd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='SierraForest-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ifma'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ne-convert'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni-int8'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cmpccxadd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v1'>
Dec 05 09:49:00 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='athlon'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='athlon-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='core2duo'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='core2duo-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='coreduo'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='coreduo-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='n270'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='n270-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='phenom'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='phenom-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </cpu>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <memoryBacking supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <enum name='sourceType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>file</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>anonymous</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>memfd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </memoryBacking>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <devices>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <disk supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='diskDevice'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>disk</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>cdrom</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>floppy</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>lun</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='bus'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>fdc</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>scsi</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>sata</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-non-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </disk>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <graphics supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vnc</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>egl-headless</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>dbus</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </graphics>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <video supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='modelType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vga</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>cirrus</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>none</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>bochs</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>ramfb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </video>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <hostdev supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='mode'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>subsystem</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='startupPolicy'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>default</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>mandatory</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>requisite</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>optional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='subsysType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pci</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>scsi</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='capsType'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='pciBackend'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </hostdev>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <rng supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-non-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>random</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>egd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>builtin</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </rng>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <filesystem supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='driverType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>path</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>handle</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtiofs</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </filesystem>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <tpm supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tpm-tis</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tpm-crb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>emulator</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>external</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendVersion'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>2.0</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </tpm>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <redirdev supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='bus'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </redirdev>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <channel supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pty</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>unix</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </channel>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <crypto supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>qemu</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>builtin</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </crypto>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <interface supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>default</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>passt</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </interface>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <panic supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>isa</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>hyperv</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </panic>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <console supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>null</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vc</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pty</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>dev</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>file</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pipe</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>stdio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>udp</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tcp</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>unix</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>qemu-vdagent</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>dbus</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </console>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </devices>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <features>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <gic supported='no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <vmcoreinfo supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <genid supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <backingStoreInput supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <backup supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <async-teardown supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <ps2 supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <sev supported='no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <sgx supported='no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <hyperv supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='features'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>relaxed</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vapic</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>spinlocks</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vpindex</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>runtime</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>synic</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>stimer</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>reset</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vendor_id</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>frequencies</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>reenlightenment</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tlbflush</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>ipi</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>avic</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>emsr_bitmap</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>xmm_input</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <defaults>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <spinlocks>4095</spinlocks>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <stimer_direct>on</stimer_direct>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </defaults>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </hyperv>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <launchSecurity supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='sectype'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tdx</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </launchSecurity>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </features>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: </domainCapabilities>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
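[editor's note] The multi-line record ending above is the domainCapabilities XML that Nova's libvirt driver fetches in _get_domain_capabilities (host.py:1037) and echoes at DEBUG level. For reproducing the same document outside Nova, libvirt-python exposes it via virConnect.getDomainCapabilities. A minimal sketch follows; the connection URI and emulator path are assumptions, the path taken from the <path>/usr/libexec/qemu-kvm</path> element logged further down.

    # Minimal sketch: fetch the same domainCapabilities XML that
    # nova_compute logs above, using libvirt-python. URI and emulator
    # path are assumptions based on this log, not values read from Nova.
    import libvirt

    conn = libvirt.open('qemu:///system')  # local system hypervisor connection
    caps_xml = conn.getDomainCapabilities(
        emulatorbin='/usr/libexec/qemu-kvm',  # hypothetical; matches the logged <path>
        arch='x86_64',
        machine='pc',        # or 'q35'; each machine type yields its own document
        virttype='kvm',
    )
    print(caps_xml)          # same XML shape as the log records in this section
    conn.close()

The equivalent CLI check is `virsh domcapabilities --arch x86_64 --machine pc --virttype kvm`, which prints the same document.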
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.941 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
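[editor's note] The DEBUG line above shows Nova requesting one capabilities document per machine type ('pc' and 'q35'). Most of each document is the <mode name='custom'> list that follows: every named CPU model carries usable='yes'/'no', and each usable='no' model is paired with a sibling <blockers model='...'> element naming the features this host lacks. A sketch of walking that structure with the standard library, using only element and attribute names visible in this log; caps_xml is assumed to hold one such document:

    # Sketch: summarize usable CPU models and their blockers from a
    # domainCapabilities document like the ones logged in this section.
    import xml.etree.ElementTree as ET

    def summarize_cpu_models(caps_xml: str) -> None:
        root = ET.fromstring(caps_xml)
        custom = root.find("./cpu/mode[@name='custom']")
        if custom is None:
            return
        for model in custom.findall('model'):
            name = model.text
            if model.get('usable') == 'yes':
                print(f"usable:  {name}")
            else:
                # usable='no' models have a <blockers model='...'> sibling
                blockers = custom.find(f"blockers[@model='{name}']")
                missing = ([f.get('name') for f in blockers.findall('feature')]
                           if blockers is not None else [])
                print(f"blocked: {name} (missing: {', '.join(missing)})")

On this host, for example, EPYC-Rome-v4 would print as usable while EPYC-Rome-v1 through -v3 are blocked on the single missing feature 'xsaves', matching the entries logged below.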
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:00.945 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: <domainCapabilities>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <domain>kvm</domain>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <arch>x86_64</arch>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <vcpu max='240'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <iothreads supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <os supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <enum name='firmware'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <loader supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>rom</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pflash</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='readonly'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>yes</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>no</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='secure'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>no</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </loader>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </os>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <cpu>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>on</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>off</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <mode name='maximum' supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='maximumMigratable'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>on</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>off</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <mode name='host-model' supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <vendor>AMD</vendor>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='x2apic'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='stibp'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='ssbd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='succor'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='ibrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='lbrv'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <mode name='custom' supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Dhyana-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Genoa'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='auto-ibrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='auto-ibrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-128'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-256'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-512'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-noTSX'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='KnightsMill'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4fmaps'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4vnniw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512er'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512pf'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='KnightsMill-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4fmaps'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4vnniw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512er'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512pf'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G5'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tbm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tbm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SierraForest'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ne-convert'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cmpccxadd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SierraForest-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ne-convert'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cmpccxadd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='athlon'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='athlon-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='core2duo'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='core2duo-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='coreduo'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='coreduo-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='n270'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='n270-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='phenom'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='phenom-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </cpu>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <memoryBacking supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <enum name='sourceType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>file</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>anonymous</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>memfd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </memoryBacking>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <devices>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <disk supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='diskDevice'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>disk</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>cdrom</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>floppy</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>lun</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='bus'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>ide</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>fdc</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>scsi</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>sata</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-non-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </disk>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <graphics supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vnc</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>egl-headless</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>dbus</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </graphics>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <video supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='modelType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vga</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>cirrus</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>none</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>bochs</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>ramfb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </video>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <hostdev supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='mode'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>subsystem</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='startupPolicy'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>default</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>mandatory</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>requisite</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>optional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='subsysType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pci</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>scsi</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='capsType'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='pciBackend'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </hostdev>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <rng supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-non-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>random</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>egd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>builtin</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </rng>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <filesystem supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='driverType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>path</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>handle</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtiofs</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </filesystem>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <tpm supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tpm-tis</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tpm-crb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>emulator</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>external</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendVersion'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>2.0</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </tpm>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <redirdev supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='bus'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </redirdev>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <channel supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pty</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>unix</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </channel>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <crypto supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>qemu</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>builtin</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </crypto>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <interface supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>default</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>passt</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </interface>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <panic supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>isa</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>hyperv</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </panic>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <console supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>null</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vc</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pty</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>dev</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>file</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pipe</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>stdio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>udp</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tcp</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>unix</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>qemu-vdagent</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>dbus</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </console>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </devices>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <features>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <gic supported='no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <vmcoreinfo supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <genid supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <backingStoreInput supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <backup supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <async-teardown supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <ps2 supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <sev supported='no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <sgx supported='no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <hyperv supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='features'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>relaxed</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vapic</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>spinlocks</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vpindex</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>runtime</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>synic</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>stimer</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>reset</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vendor_id</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>frequencies</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>reenlightenment</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tlbflush</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>ipi</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>avic</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>emsr_bitmap</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>xmm_input</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <defaults>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <spinlocks>4095</spinlocks>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <stimer_direct>on</stimer_direct>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </defaults>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </hyperv>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <launchSecurity supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='sectype'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tdx</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </launchSecurity>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </features>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: </domainCapabilities>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
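The `_get_domain_capabilities` call referenced above obtains this XML through libvirt's domain-capabilities API. As a minimal sketch (not nova's actual code; the connection URI and parameter values are assumptions read off the log above), the same document can be fetched with libvirt-python:

    import libvirt

    # Connect to the local libvirt daemon (URI assumed; nova wraps this in its Host class).
    conn = libvirt.open('qemu:///system')

    # Request domain capabilities matching the dump above:
    # emulator /usr/libexec/qemu-kvm, arch x86_64, machine q35, virt type kvm.
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator binary, as in <path>
        'x86_64',                 # guest architecture, as in <arch>
        'q35',                    # machine type alias for pc-q35-rhel*
        'kvm',                    # virtualization type, as in <domain>
        0,                        # flags
    )
    print(caps_xml)  # a <domainCapabilities> document like the one logged here
    conn.close()

Equivalently, from a shell on the host, `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm` returns the same XML.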
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.008 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: <domainCapabilities>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <domain>kvm</domain>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <arch>x86_64</arch>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <vcpu max='1024'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <iothreads supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <os supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <enum name='firmware'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>efi</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <loader supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>rom</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pflash</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='readonly'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>yes</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>no</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='secure'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>yes</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>no</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </loader>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </os>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <cpu>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <mode name='host-passthrough' supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='hostPassthroughMigratable'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>on</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>off</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <mode name='maximum' supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='maximumMigratable'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>on</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>off</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <mode name='host-model' supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <vendor>AMD</vendor>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='x2apic'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='hypervisor'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='stibp'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='ssbd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='overflow-recov'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='succor'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='ibrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='lbrv'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='tsc-scale'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='pause-filter'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='pfthreshold'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <feature policy='disable' name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <mode name='custom' supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-noTSX'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Broadwell-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Cooperlake-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Denverton-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Dhyana-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Genoa'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='auto-ibrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='auto-ibrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Milan-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amd-psfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='no-nested-data-bp'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='null-sel-clr-base'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='stibp-always-on'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-Rome-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='EPYC-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='GraniteRapids-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-128'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-256'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx10-512'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='prefetchiti'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-noTSX'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Haswell-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v5'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v6'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Icelake-Server-v7'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='IvyBridge-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='KnightsMill'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4fmaps'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4vnniw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512er'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512pf'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='KnightsMill-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4fmaps'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-4vnniw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512er'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512pf'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G4-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G5'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tbm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Opteron_G5-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fma4'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tbm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xop'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SapphireRapids-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='amx-tile'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-bf16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-fp16'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512-vpopcntdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bitalg'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vbmi2'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrc'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fzrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='la57'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='taa-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='tsx-ldtrk'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xfd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SierraForest'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ne-convert'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cmpccxadd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='SierraForest-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ifma'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-ne-convert'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx-vnni-int8'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='bus-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cmpccxadd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fbsdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='fsrs'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ibrs-all'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mcdt-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pbrsb-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='psdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='sbdr-ssdp-no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='serialize'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vaes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='vpclmulqdq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Client-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='hle'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='rtm'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Skylake-Server-v5'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512bw'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512cd'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512dq'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512f'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='avx512vl'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='invpcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pcid'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='pku'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='mpx'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v2'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v3'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='core-capability'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='split-lock-detect'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='Snowridge-v4'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='cldemote'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='erms'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='gfni'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdir64b'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='movdiri'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='xsaves'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='athlon'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='athlon-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='core2duo'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='core2duo-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='coreduo'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='coreduo-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='n270'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='n270-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='ss'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='phenom'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <blockers model='phenom-v1'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnow'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <feature name='3dnowext'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </blockers>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </mode>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </cpu>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <memoryBacking supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <enum name='sourceType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>file</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>anonymous</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <value>memfd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </memoryBacking>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <devices>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <disk supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='diskDevice'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>disk</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>cdrom</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>floppy</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>lun</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='bus'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>fdc</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>scsi</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>sata</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-non-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </disk>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <graphics supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vnc</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>egl-headless</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>dbus</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </graphics>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <video supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='modelType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vga</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>cirrus</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>none</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>bochs</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>ramfb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </video>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <hostdev supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='mode'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>subsystem</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='startupPolicy'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>default</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>mandatory</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>requisite</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>optional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='subsysType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pci</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>scsi</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='capsType'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='pciBackend'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </hostdev>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <rng supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtio-non-transitional</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>random</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>egd</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>builtin</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </rng>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <filesystem supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='driverType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>path</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>handle</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>virtiofs</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </filesystem>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <tpm supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tpm-tis</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tpm-crb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>emulator</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>external</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendVersion'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>2.0</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </tpm>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <redirdev supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='bus'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>usb</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </redirdev>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <channel supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pty</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>unix</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </channel>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <crypto supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>qemu</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendModel'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>builtin</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </crypto>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <interface supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='backendType'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>default</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>passt</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </interface>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <panic supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='model'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>isa</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>hyperv</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </panic>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <console supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='type'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>null</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vc</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pty</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>dev</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>file</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>pipe</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>stdio</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>udp</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tcp</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>unix</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>qemu-vdagent</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>dbus</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </console>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </devices>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   <features>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <gic supported='no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <vmcoreinfo supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <genid supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <backingStoreInput supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <backup supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <async-teardown supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <ps2 supported='yes'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <sev supported='no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <sgx supported='no'/>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <hyperv supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='features'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>relaxed</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vapic</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>spinlocks</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vpindex</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>runtime</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>synic</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>stimer</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>reset</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>vendor_id</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>frequencies</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>reenlightenment</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tlbflush</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>ipi</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>avic</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>emsr_bitmap</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>xmm_input</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <defaults>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <spinlocks>4095</spinlocks>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <stimer_direct>on</stimer_direct>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <tlbflush_direct>off</tlbflush_direct>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <tlbflush_extended>off</tlbflush_extended>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </defaults>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </hyperv>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     <launchSecurity supported='yes'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       <enum name='sectype'>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:         <value>tdx</value>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:       </enum>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:     </launchSecurity>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:   </features>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: </domainCapabilities>
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.070 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.071 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.071 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.071 281103 INFO nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Secure Boot support detected
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.075 281103 INFO nova.virt.libvirt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.076 281103 INFO nova.virt.libvirt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.090 281103 DEBUG nova.virt.libvirt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.117 281103 INFO nova.virt.node [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Determined node identity 2850b2c4-8d07-40ab-9d82-672172ca70fc from /var/lib/nova/compute_id
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.142 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Verified node 2850b2c4-8d07-40ab-9d82-672172ca70fc matches my host np0005546420.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
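[annotation] nova.virt.node reads a stable node UUID from /var/lib/nova/compute_id, and _check_for_host_rename then verifies that node 2850b2c4-... still maps to this hostname. A sketch of the read-or-create pattern behind the persisted identity (the path is from the log; the create branch is an assumption about first boot):

    import os
    import uuid

    COMPUTE_ID_FILE = '/var/lib/nova/compute_id'

    def get_local_node_uuid():
        # Reuse the persisted identity if present...
        if os.path.exists(COMPUTE_ID_FILE):
            with open(COMPUTE_ID_FILE) as f:
                return f.read().strip()
        # ...otherwise mint one and persist it (assumed first-boot behaviour).
        node_id = str(uuid.uuid4())
        with open(COMPUTE_ID_FILE, 'w') as f:
            f.write(node_id)
        return node_id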
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.176 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.249 281103 DEBUG oslo_concurrency.lockutils [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.250 281103 DEBUG oslo_concurrency.lockutils [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.250 281103 DEBUG oslo_concurrency.lockutils [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
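[annotation] The acquiring/acquired/released triplet with waited/held timings is oslo.concurrency's standard instrumentation around a named lock; the resource tracker serializes everything on "compute_resources". In application code the same pattern is typically the synchronized decorator (lock name from the log; the body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Everything in here runs under the "compute_resources" lock;
        # oslo emits the acquire/wait/held DEBUG lines seen above.
        pass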
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.251 281103 DEBUG nova.compute.resource_tracker [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.251 281103 DEBUG oslo_concurrency.processutils [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:49:01 np0005546420.localdomain sudo[281262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcwrjxmrfzdzklvofupgavczpqegpooz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764928141.1107016-3935-23223615661427/AnsiballZ_podman_container.py
Dec 05 09:49:01 np0005546420.localdomain sudo[281262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 09:49:01 np0005546420.localdomain python3.9[281281]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.747 281103 DEBUG oslo_concurrency.processutils [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
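[annotation] With the RBD image backend, free-disk accounting shells out to ceph df in JSON form, as this pair of processutils lines shows (spawned at 09:49:01.251, returned 0 in 0.496s). A sketch of issuing the same command and extracting per-pool stats from the reply (the pool name 'vms' is an assumption; the CLI flags are exactly those logged):

    import json
    import subprocess

    def ceph_pool_stats(pool='vms'):
        out = subprocess.check_output([
            'ceph', 'df', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
        ])
        df = json.loads(out)
        # Each 'pools' entry carries per-pool usage under 'stats'.
        for entry in df.get('pools', []):
            if entry['name'] == pool:
                return entry['stats']
        return None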
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.920 281103 WARNING nova.virt.libvirt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.922 281103 DEBUG nova.compute.resource_tracker [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12791MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.922 281103 DEBUG oslo_concurrency.lockutils [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:49:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:01.922 281103 DEBUG oslo_concurrency.lockutils [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:49:02 np0005546420.localdomain systemd[1]: Started libpod-conmon-aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86.scope.
Dec 05 09:49:02 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:49:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:02 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.041 281103 DEBUG nova.compute.resource_tracker [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.042 281103 DEBUG nova.compute.resource_tracker [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:49:02 np0005546420.localdomain podman[281310]: 2025-12-05 09:49:02.042079728 +0000 UTC m=+0.186040198 container init aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 09:49:02 np0005546420.localdomain podman[281310]: 2025-12-05 09:49:02.054339545 +0000 UTC m=+0.198300005 container start aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 09:49:02 np0005546420.localdomain python3.9[281281]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.096 281103 DEBUG nova.scheduler.client.report [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.114 281103 DEBUG nova.scheduler.client.report [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.115 281103 DEBUG nova.compute.provider_tree [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
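[annotation] The inventory pushed into the ProviderTree is a plain dict keyed by placement resource class. A sketch that rebuilds exactly the structure logged above from the host numbers (the allocation ratios and the 512 MB memory reservation are the configured values visible in the log):

    def build_inventory(vcpus=8, ram_mb=15738, disk_gb=41, reserved_ram_mb=512):
        return {
            'VCPU': {'total': vcpus, 'reserved': 0, 'min_unit': 1,
                     'max_unit': vcpus, 'step_size': 1,
                     'allocation_ratio': 16.0},
            'MEMORY_MB': {'total': ram_mb, 'reserved': reserved_ram_mb,
                          'min_unit': 1, 'max_unit': ram_mb,
                          'step_size': 1, 'allocation_ratio': 1.0},
            'DISK_GB': {'total': disk_gb, 'reserved': 0, 'min_unit': 1,
                        'max_unit': disk_gb, 'step_size': 1,
                        'allocation_ratio': 1.0},
        }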
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Applying nova statedir ownership
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673
Dec 05 09:49:02 np0005546420.localdomain nova_compute_init[281330]: INFO:nova_statedir:Nova statedir ownership complete
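[annotation] The nova_compute_init lines are nova_statedir_ownership.py walking /var/lib/nova: each path is checked, chowned to the in-container nova uid/gid 42436 when it differs, and directories are relabelled with the container SELinux context, with /var/lib/nova/compute_id excluded via NOVA_STATEDIR_OWNERSHIP_SKIP. A condensed sketch of that walk (chcon here stands in for however the real script applies the context):

    import os
    import subprocess

    TARGET_UID = TARGET_GID = 42436
    SKIP = {'/var/lib/nova/compute_id'}   # from NOVA_STATEDIR_OWNERSHIP_SKIP
    CONTEXT = 'system_u:object_r:container_file_t:s0'

    def fix_statedir(statedir='/var/lib/nova'):
        for dirpath, dirnames, filenames in os.walk(statedir):
            paths = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
            for path in paths:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # "Changing ownership of ... from 1000:1000 to 42436:42436"
                    os.lchown(path, TARGET_UID, TARGET_GID)
            # "Setting selinux context of <dir> to ..." (directories only here)
            subprocess.run(['chcon', CONTEXT, dirpath], check=False)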
Dec 05 09:49:02 np0005546420.localdomain systemd[1]: libpod-aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86.scope: Deactivated successfully.
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.131 281103 DEBUG nova.scheduler.client.report [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 09:49:02 np0005546420.localdomain podman[281331]: 2025-12-05 09:49:02.133024833 +0000 UTC m=+0.059358625 container died aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm)
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.158 281103 DEBUG nova.scheduler.client.report [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.179 281103 DEBUG oslo_concurrency.processutils [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:49:02 np0005546420.localdomain podman[281343]: 2025-12-05 09:49:02.243540279 +0000 UTC m=+0.101611734 container cleanup aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 05 09:49:02 np0005546420.localdomain systemd[1]: libpod-conmon-aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86.scope: Deactivated successfully.
Dec 05 09:49:02 np0005546420.localdomain sudo[281262]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.628 281103 DEBUG oslo_concurrency.processutils [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.637 281103 DEBUG nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.638 281103 INFO nova.virt.libvirt.host [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] kernel doesn't support AMD SEV
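[annotation] The SEV probe is literally a read of a kvm_amd module parameter; here the file holds "N" (this is a KVM guest on an Intel host), so nova reports no AMD SEV support. A sketch of the same check:

    def kernel_supports_amd_sev(path='/sys/module/kvm_amd/parameters/sev'):
        try:
            with open(path) as f:
                # "1"/"Y" when SEV is enabled, "0"/"N" otherwise.
                return f.read().strip() in ('1', 'Y', 'y')
        except FileNotFoundError:
            # kvm_amd not loaded at all -> no SEV.
            return False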
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.640 281103 DEBUG nova.compute.provider_tree [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.640 281103 DEBUG nova.virt.libvirt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.665 281103 DEBUG nova.scheduler.client.report [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.697 281103 DEBUG nova.compute.resource_tracker [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.698 281103 DEBUG oslo_concurrency.lockutils [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.698 281103 DEBUG nova.service [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.723 281103 DEBUG nova.service [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 05 09:49:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:02.724 281103 DEBUG nova.servicegroup.drivers.db [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] DB_Driver: join new ServiceGroup member np0005546420.localdomain to the compute group, service = <Service: host=np0005546420.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 05 09:49:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0e8da57bc6a9d99cca22f68324e2779eeb1b50a532dd39067fc53d8e0f9f160f-merged.mount: Deactivated successfully.
Dec 05 09:49:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa0880b84cb9b18903504e1c85c74c12ba913aee991b6b99026565ae46f45e86-userdata-shm.mount: Deactivated successfully.
Dec 05 09:49:02 np0005546420.localdomain sshd[263047]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:49:02 np0005546420.localdomain systemd[1]: session-58.scope: Deactivated successfully.
Dec 05 09:49:02 np0005546420.localdomain systemd[1]: session-58.scope: Consumed 1min 31.339s CPU time.
Dec 05 09:49:02 np0005546420.localdomain systemd-logind[762]: Session 58 logged out. Waiting for processes to exit.
Dec 05 09:49:02 np0005546420.localdomain systemd-logind[762]: Removed session 58.
Dec 05 09:49:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:49:04.109 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:49:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:49:04.110 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:49:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:49:04.110 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:49:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:49:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:49:04 np0005546420.localdomain podman[281404]: 2025-12-05 09:49:04.523998037 +0000 UTC m=+0.095572648 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec 05 09:49:04 np0005546420.localdomain podman[281404]: 2025-12-05 09:49:04.53648886 +0000 UTC m=+0.108063461 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 09:49:04 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:49:04 np0005546420.localdomain podman[281405]: 2025-12-05 09:49:04.621652188 +0000 UTC m=+0.190767263 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:49:04 np0005546420.localdomain podman[281405]: 2025-12-05 09:49:04.630461119 +0000 UTC m=+0.199576214 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:49:04 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
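[annotation] Each "Started /usr/bin/podman healthcheck run <id>" line is a systemd-driven transient unit executing that container's configured healthcheck command; the health_status=healthy field on the subsequent podman event is the outcome. The same check can be driven by hand (container name taken from the log):

    import subprocess

    def container_healthy(container='podman_exporter'):
        # `podman healthcheck run` executes the container's configured test;
        # exit status 0 means healthy (the health_status=healthy event above).
        result = subprocess.run(['podman', 'healthcheck', 'run', container])
        return result.returncode == 0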
Dec 05 09:49:08 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35946 DF PROTO=TCP SPT=51348 DPT=9102 SEQ=3984928611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACF9BD90000000001030307) 
Dec 05 09:49:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:49:09 np0005546420.localdomain podman[281448]: 2025-12-05 09:49:09.826718847 +0000 UTC m=+0.079811084 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller)
Dec 05 09:49:09 np0005546420.localdomain podman[281448]: 2025-12-05 09:49:09.866363095 +0000 UTC m=+0.119455312 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:49:09 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:49:17 np0005546420.localdomain sudo[281473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:49:17 np0005546420.localdomain sudo[281473]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:49:17 np0005546420.localdomain sudo[281473]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:49:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:49:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:49:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:49:17 np0005546420.localdomain sudo[281491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:49:17 np0005546420.localdomain sudo[281491]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:49:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:49:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16742 "" "Go-http-client/1.1"
Dec 05 09:49:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:49:17 np0005546420.localdomain systemd[1]: tmp-crun.w3aBnZ.mount: Deactivated successfully.
Dec 05 09:49:17 np0005546420.localdomain podman[281509]: 2025-12-05 09:49:17.501025034 +0000 UTC m=+0.087137948 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Dec 05 09:49:17 np0005546420.localdomain podman[281509]: 2025-12-05 09:49:17.513265211 +0000 UTC m=+0.099378065 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:49:17 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:49:17 np0005546420.localdomain sudo[281491]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:18 np0005546420.localdomain sudo[281559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:49:18 np0005546420.localdomain sudo[281559]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:49:18 np0005546420.localdomain sudo[281559]: pam_unix(sudo:session): session closed for user root
Dec 05 09:49:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:49:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:49:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:49:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:49:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:49:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:49:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:49:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:49:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:49:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:49:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:49:23 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31372 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3644770264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACFD5890000000001030307) 
Dec 05 09:49:23 np0005546420.localdomain podman[281577]: 2025-12-05 09:49:23.491268532 +0000 UTC m=+0.070294011 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:49:23 np0005546420.localdomain podman[281577]: 2025-12-05 09:49:23.499442013 +0000 UTC m=+0.078467432 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 05 09:49:23 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:49:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31373 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3644770264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACFD99A0000000001030307) 
Dec 05 09:49:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35947 DF PROTO=TCP SPT=51348 DPT=9102 SEQ=3984928611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACFDBD90000000001030307) 
Dec 05 09:49:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:49:25 np0005546420.localdomain podman[281595]: 2025-12-05 09:49:25.490999074 +0000 UTC m=+0.068719902 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:49:25 np0005546420.localdomain podman[281595]: 2025-12-05 09:49:25.499416272 +0000 UTC m=+0.077137050 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:49:25 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:49:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:49:26.412 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 09:49:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:49:26.413 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 09:49:26 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31374 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3644770264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACFE19A0000000001030307) 
Dec 05 09:49:27 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16078 DF PROTO=TCP SPT=41624 DPT=9102 SEQ=3105712715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACFE5D90000000001030307) 
Dec 05 09:49:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:49:29.415 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 09:49:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:49:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31375 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3644770264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2ACFF1590000000001030307) 
Dec 05 09:49:30 np0005546420.localdomain systemd[1]: tmp-crun.fMqWdU.mount: Deactivated successfully.
Dec 05 09:49:30 np0005546420.localdomain podman[281617]: 2025-12-05 09:49:30.626468985 +0000 UTC m=+0.213421819 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:49:30 np0005546420.localdomain podman[281617]: 2025-12-05 09:49:30.665548656 +0000 UTC m=+0.252501490 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 09:49:30 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:49:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:49:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:49:35 np0005546420.localdomain systemd[1]: tmp-crun.3vXIWt.mount: Deactivated successfully.
Dec 05 09:49:35 np0005546420.localdomain podman[281635]: 2025-12-05 09:49:35.510763169 +0000 UTC m=+0.089409309 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, config_id=edpm, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=)
Dec 05 09:49:35 np0005546420.localdomain podman[281635]: 2025-12-05 09:49:35.525640706 +0000 UTC m=+0.104286846 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 09:49:35 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:49:35 np0005546420.localdomain podman[281636]: 2025-12-05 09:49:35.602513958 +0000 UTC m=+0.178864668 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:49:35 np0005546420.localdomain podman[281636]: 2025-12-05 09:49:35.636634527 +0000 UTC m=+0.212985177 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:49:35 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:49:38 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31376 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3644770264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD011D90000000001030307) 
Dec 05 09:49:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:49:40 np0005546420.localdomain systemd[1]: tmp-crun.ivEQ98.mount: Deactivated successfully.
Dec 05 09:49:40 np0005546420.localdomain podman[281679]: 2025-12-05 09:49:40.499647536 +0000 UTC m=+0.083096725 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 09:49:40 np0005546420.localdomain podman[281679]: 2025-12-05 09:49:40.539413718 +0000 UTC m=+0.122862857 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 09:49:40 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:49:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:46.726 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:46.893 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:49:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:49:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:49:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:49:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:49:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16742 "" "Go-http-client/1.1"
Dec 05 09:49:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:49:48 np0005546420.localdomain podman[281704]: 2025-12-05 09:49:48.479565037 +0000 UTC m=+0.058120476 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 09:49:48 np0005546420.localdomain podman[281704]: 2025-12-05 09:49:48.517332758 +0000 UTC m=+0.095888207 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 05 09:49:48 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:49:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:49:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:49:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:49:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:49:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:49:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:49:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:49:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:49:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:49:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:49:53 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64341 DF PROTO=TCP SPT=56820 DPT=9102 SEQ=231358354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD04AB80000000001030307) 
Dec 05 09:49:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:49:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64342 DF PROTO=TCP SPT=56820 DPT=9102 SEQ=231358354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD04EDA0000000001030307) 
Dec 05 09:49:54 np0005546420.localdomain podman[281723]: 2025-12-05 09:49:54.50509697 +0000 UTC m=+0.082924238 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:49:54 np0005546420.localdomain systemd[1]: tmp-crun.0f5PYL.mount: Deactivated successfully.
Dec 05 09:49:54 np0005546420.localdomain podman[281723]: 2025-12-05 09:49:54.534796473 +0000 UTC m=+0.112623721 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 09:49:54 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:49:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31377 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3644770264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD051DA0000000001030307) 
Dec 05 09:49:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:49:56 np0005546420.localdomain podman[281741]: 2025-12-05 09:49:56.484774886 +0000 UTC m=+0.064140783 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:49:56 np0005546420.localdomain podman[281741]: 2025-12-05 09:49:56.494438742 +0000 UTC m=+0.073804719 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:49:56 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:49:56 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64343 DF PROTO=TCP SPT=56820 DPT=9102 SEQ=231358354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD056DA0000000001030307) 
Dec 05 09:49:57 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35948 DF PROTO=TCP SPT=51348 DPT=9102 SEQ=3984928611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD059DA0000000001030307) 
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.874 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.874 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.889 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.890 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.891 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.891 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.892 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.892 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.892 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.893 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.894 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.911 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.911 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.912 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.912 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:49:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:49:59.913 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:50:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:00.371 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:50:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64344 DF PROTO=TCP SPT=56820 DPT=9102 SEQ=231358354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD066990000000001030307) 
Dec 05 09:50:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:00.561 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:50:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:00.563 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12829MB free_disk=41.837059020996094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:50:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:00.563 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:50:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:00.564 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:50:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:00.640 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:50:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:00.640 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:50:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:00.661 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:50:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:01.139 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:50:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:01.145 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:50:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:01.161 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:50:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:01.164 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:50:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:50:01.165 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:50:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:50:01 np0005546420.localdomain systemd[1]: tmp-crun.4JasSe.mount: Deactivated successfully.
Dec 05 09:50:01 np0005546420.localdomain podman[281808]: 2025-12-05 09:50:01.507896195 +0000 UTC m=+0.087892052 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible)
Dec 05 09:50:01 np0005546420.localdomain podman[281808]: 2025-12-05 09:50:01.542610841 +0000 UTC m=+0.122606748 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 09:50:01 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:50:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:50:04.110 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:50:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:50:04.110 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:50:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:50:04.111 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:50:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:50:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:50:06 np0005546420.localdomain podman[281827]: 2025-12-05 09:50:06.500627449 +0000 UTC m=+0.077026297 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64)
Dec 05 09:50:06 np0005546420.localdomain podman[281827]: 2025-12-05 09:50:06.514221447 +0000 UTC m=+0.090620285 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 09:50:06 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:50:06 np0005546420.localdomain systemd[1]: tmp-crun.MZ4pty.mount: Deactivated successfully.
Dec 05 09:50:06 np0005546420.localdomain podman[281828]: 2025-12-05 09:50:06.552863235 +0000 UTC m=+0.124780676 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:50:06 np0005546420.localdomain podman[281828]: 2025-12-05 09:50:06.585311793 +0000 UTC m=+0.157229244 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:50:06 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
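Each healthcheck cycle in this log has the same four-line shape: systemd starts a transient "podman healthcheck run <container-id>" unit, podman emits a container health_status event (with the full label set and edpm config_data inlined), the healthcheck exec session dies, and the transient unit deactivates. A hypothetical helper for reducing those verbose event lines to what usually matters, the container name and its verdict:

    import re

    # Hypothetical parser for the podman "container health_status" journal
    # lines above; relies on podman printing "name=<x>, health_status=<y>"
    # inside the parenthesised attribute list, as these lines do.
    EVENT_RE = re.compile(
        r"container health_status [0-9a-f]{64} "
        r"\(.*?name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
    )

    def parse_health_event(line):
        m = EVENT_RE.search(line)
        return (m.group("name"), m.group("status")) if m else None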
Dec 05 09:50:09 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64345 DF PROTO=TCP SPT=56820 DPT=9102 SEQ=231358354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD087DB0000000001030307) 
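The kernel DROPPING lines are netfilter LOG output ("DROPPING: " is the rule's log prefix): a TCP SYN from 192.168.122.10 to port 9102 on 192.168.122.107, arriving on br-ex, is being dropped by the host firewall. A small sketch, assuming this key=value layout, that extracts the fields worth correlating:

    import re

    # Sketch: parse a netfilter "DROPPING:" line into a dict. Flag
    # tokens such as DF and SYN carry no '=' and are ignored here.
    KV_RE = re.compile(r"(\w+)=(\S+)")

    def parse_drop(line):
        fields = dict(KV_RE.findall(line))
        return {
            "src": fields.get("SRC"),
            "dst": fields.get("DST"),
            "sport": int(fields["SPT"]),
            "dport": int(fields["DPT"]),
            "seq": int(fields["SEQ"]),
        }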
Dec 05 09:50:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:50:11 np0005546420.localdomain podman[281870]: 2025-12-05 09:50:11.5576224 +0000 UTC m=+0.136648540 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 09:50:11 np0005546420.localdomain podman[281870]: 2025-12-05 09:50:11.623410622 +0000 UTC m=+0.202436762 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 05 09:50:11 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:50:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:50:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
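The burst of ceilometer DEBUG lines above is a single polling cycle: for each compute pollster (cpu, memory.usage, disk.device.*, network.*) the polling manager's discovery returned no resources, so poll_and_notify skips the meter and moves on; on a compute node with no running instances this whole block is expected noise. A hypothetical summary pass over a saved journal file:

    import re
    from collections import Counter

    # Hypothetical: tally which pollsters ceilometer skipped in a journal
    # dump (the message has two spaces before "resources", hence \s+).
    SKIP_RE = re.compile(r"Skip pollster ([\w.]+), no\s+resources found")

    def skipped_pollsters(journal_path):
        counts = Counter()
        with open(journal_path) as fh:
            for line in fh:
                m = SKIP_RE.search(line)
                if m:
                    counts[m.group(1)] += 1
        return counts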
Dec 05 09:50:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:50:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:50:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:50:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:50:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:50:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16744 "" "Go-http-client/1.1"
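The podman[240363] lines are the API service's access log: a Go client (by the Go-http-client user agent, most likely the prometheus-podman-exporter running above, whose CONTAINER_HOST points at unix:///run/podman/podman.sock) lists all containers and pulls one-shot stats every 30 seconds. The same libpod endpoints can be exercised with nothing but the standard library; a sketch, assuming the default rootful socket path /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr["Names"], ctr["State"])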
Dec 05 09:50:18 np0005546420.localdomain sudo[281895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:50:18 np0005546420.localdomain sudo[281895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:50:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:50:18 np0005546420.localdomain sudo[281895]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:18 np0005546420.localdomain sudo[281921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:50:18 np0005546420.localdomain podman[281912]: 2025-12-05 09:50:18.753480407 +0000 UTC m=+0.090032898 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 09:50:18 np0005546420.localdomain sudo[281921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:50:18 np0005546420.localdomain podman[281912]: 2025-12-05 09:50:18.787599865 +0000 UTC m=+0.124152156 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 09:50:18 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:50:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:50:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:50:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:50:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:50:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:50:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:50:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:50:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:50:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:50:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
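This error block repeats on every collection interval: openstack-network-exporter makes ovs-appctl-style calls, which locate their target daemon through control socket files (e.g. ovsdb-server.<pid>.ctl, ovn-northd.<pid>.ctl) in the daemon's run directory. ovn-northd does not run on a compute node at all, and no ovsdb-server socket is visible under the directories mounted into the exporter, so the PID lookup fails before any RPC is attempted; the dpif-netdev/pmd-* calls fail separately, most likely because this host has no userspace (netdev) datapath to query. A quick way to check what the exporter can see, assuming the host-side run directories from its config_data above:

    import glob

    # Assumption: these are the host paths bind-mounted into the
    # exporter as /run/openvswitch and /run/ovn (see config_data above).
    for rundir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        ctls = glob.glob(rundir + "/*.ctl")
        print(rundir, "->", ctls or "no control sockets")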
Dec 05 09:50:19 np0005546420.localdomain sudo[281921]: pam_unix(sudo:session): session closed for user root
Dec 05 09:50:20 np0005546420.localdomain sudo[281985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:50:20 np0005546420.localdomain sudo[281985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:50:20 np0005546420.localdomain sudo[281985]: pam_unix(sudo:session): session closed for user root
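The ceph-admin sudo trail is cephadm's periodic host inspection: the orchestrator drops a hash-named copy of the cephadm binary under /var/lib/ceph/<fsid>/ and runs it with "--timeout 895 gather-facts", bracketed by small probes such as "which python3" and "ls /etc/sysctl.d". Run by hand, gather-facts prints a JSON document of host facts; a sketch, assuming a cephadm binary on PATH rather than the orchestrator's copied one:

    import json
    import subprocess

    # Assumption: 'cephadm' is installed on PATH; the orchestrator itself
    # runs a copied, hash-suffixed binary, as the sudo log above shows.
    out = subprocess.run(
        ["sudo", "cephadm", "gather-facts"],
        capture_output=True, text=True, timeout=895, check=True,
    )
    facts = json.loads(out.stdout)
    print(sorted(facts)[:10])  # peek at the available fact keys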
Dec 05 09:50:23 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30441 DF PROTO=TCP SPT=45326 DPT=9102 SEQ=1991022629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD0BFE90000000001030307) 
Dec 05 09:50:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30442 DF PROTO=TCP SPT=45326 DPT=9102 SEQ=1991022629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD0C3D90000000001030307) 
Dec 05 09:50:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:50:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64346 DF PROTO=TCP SPT=56820 DPT=9102 SEQ=231358354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD0C7D90000000001030307) 
Dec 05 09:50:25 np0005546420.localdomain podman[282003]: 2025-12-05 09:50:25.492333221 +0000 UTC m=+0.068857837 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:50:25 np0005546420.localdomain podman[282003]: 2025-12-05 09:50:25.523638153 +0000 UTC m=+0.100162809 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:50:25 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:50:26 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30443 DF PROTO=TCP SPT=45326 DPT=9102 SEQ=1991022629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD0CBD90000000001030307) 
Dec 05 09:50:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:50:27 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31378 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3644770264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD0CFDA0000000001030307) 
Dec 05 09:50:27 np0005546420.localdomain podman[282022]: 2025-12-05 09:50:27.511104106 +0000 UTC m=+0.086034154 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:50:27 np0005546420.localdomain podman[282022]: 2025-12-05 09:50:27.547309259 +0000 UTC m=+0.122239297 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:50:27 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
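The node_exporter config_data documents its shape on an EDPM node: host networking with port 9100 published, exporter self-metrics disabled, a dozen collectors switched off, and the systemd collector restricted by --collector.systemd.unit-include to the edpm_*, ovs*/openvswitch, virt* and rsyslog units. A minimal local probe, assuming the 9100:9100 mapping from config_data, that pulls just those unit-state series:

    import urllib.request

    # Scrape node_exporter on the published host port and keep only the
    # systemd unit-state series allowed by the unit-include filter.
    with urllib.request.urlopen("http://127.0.0.1:9100/metrics", timeout=5) as r:
        for line in r.read().decode().splitlines():
            if line.startswith("node_systemd_unit_state"):
                print(line)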
Dec 05 09:50:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30444 DF PROTO=TCP SPT=45326 DPT=9102 SEQ=1991022629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD0DB990000000001030307) 
Dec 05 09:50:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:50:32 np0005546420.localdomain podman[282045]: 2025-12-05 09:50:32.542679426 +0000 UTC m=+0.115609904 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 09:50:32 np0005546420.localdomain podman[282045]: 2025-12-05 09:50:32.554643224 +0000 UTC m=+0.127573692 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:50:32 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:50:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:50:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:50:37 np0005546420.localdomain podman[282066]: 2025-12-05 09:50:37.510781494 +0000 UTC m=+0.084625582 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 09:50:37 np0005546420.localdomain podman[282066]: 2025-12-05 09:50:37.527501507 +0000 UTC m=+0.101345565 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 09:50:37 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:50:37 np0005546420.localdomain podman[282067]: 2025-12-05 09:50:37.570186119 +0000 UTC m=+0.140174829 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:50:37 np0005546420.localdomain podman[282067]: 2025-12-05 09:50:37.605259897 +0000 UTC m=+0.175248607 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:50:37 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:50:38 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30445 DF PROTO=TCP SPT=45326 DPT=9102 SEQ=1991022629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD0FBD90000000001030307) 
Dec 05 09:50:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:50:42 np0005546420.localdomain podman[282105]: 2025-12-05 09:50:42.499233467 +0000 UTC m=+0.073897453 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 05 09:50:42 np0005546420.localdomain podman[282105]: 2025-12-05 09:50:42.616494439 +0000 UTC m=+0.191158405 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec 05 09:50:42 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:50:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:50:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:50:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:50:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:50:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:50:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16746 "" "Go-http-client/1.1"
Dec 05 09:50:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:50:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:50:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:50:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:50:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:50:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:50:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:50:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:50:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:50:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:50:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:50:49 np0005546420.localdomain podman[282131]: 2025-12-05 09:50:49.512008116 +0000 UTC m=+0.090296035 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 09:50:49 np0005546420.localdomain podman[282131]: 2025-12-05 09:50:49.521127826 +0000 UTC m=+0.099415765 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:50:49 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:50:53 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29004 DF PROTO=TCP SPT=57728 DPT=9102 SEQ=4231890149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD135190000000001030307) 
Dec 05 09:50:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29005 DF PROTO=TCP SPT=57728 DPT=9102 SEQ=4231890149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD139190000000001030307) 
Dec 05 09:50:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30446 DF PROTO=TCP SPT=45326 DPT=9102 SEQ=1991022629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD13BDA0000000001030307) 
Dec 05 09:50:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:50:56 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29006 DF PROTO=TCP SPT=57728 DPT=9102 SEQ=4231890149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD141190000000001030307) 
Dec 05 09:50:56 np0005546420.localdomain podman[282151]: 2025-12-05 09:50:56.51511223 +0000 UTC m=+0.090578825 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 09:50:56 np0005546420.localdomain podman[282151]: 2025-12-05 09:50:56.551510119 +0000 UTC m=+0.126976734 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:50:56 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:50:57 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64347 DF PROTO=TCP SPT=56820 DPT=9102 SEQ=231358354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD145D90000000001030307) 
Dec 05 09:50:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:50:58 np0005546420.localdomain podman[282169]: 2025-12-05 09:50:58.479073222 +0000 UTC m=+0.058871130 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:50:58 np0005546420.localdomain podman[282169]: 2025-12-05 09:50:58.514696376 +0000 UTC m=+0.094494364 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:50:58 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:51:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29007 DF PROTO=TCP SPT=57728 DPT=9102 SEQ=4231890149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD150DA0000000001030307) 
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.158 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.173 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.173 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.173 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.192 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.192 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.193 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.193 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.887 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.888 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.888 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.888 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:51:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:01.888 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:51:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:02.307 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:51:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:02.488 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:51:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:02.489 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12834MB free_disk=41.83700180053711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:51:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:02.490 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:51:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:02.490 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:51:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:02.560 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:51:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:02.561 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:51:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:02.581 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:51:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:03.047 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:51:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:03.052 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:51:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:03.067 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:51:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:03.068 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:51:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:51:03.068 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:51:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:51:03 np0005546420.localdomain podman[282237]: 2025-12-05 09:51:03.503264303 +0000 UTC m=+0.082843236 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Dec 05 09:51:03 np0005546420.localdomain podman[282237]: 2025-12-05 09:51:03.513804177 +0000 UTC m=+0.093383150 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 09:51:03 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:51:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:51:04.111 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:51:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:51:04.111 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:51:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:51:04.111 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:51:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:51:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:51:08 np0005546420.localdomain podman[282258]: 2025-12-05 09:51:08.475777098 +0000 UTC m=+0.050647218 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:51:08 np0005546420.localdomain podman[282258]: 2025-12-05 09:51:08.482202266 +0000 UTC m=+0.057072416 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:51:08 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:51:08 np0005546420.localdomain podman[282257]: 2025-12-05 09:51:08.540387363 +0000 UTC m=+0.117866453 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 09:51:08 np0005546420.localdomain podman[282257]: 2025-12-05 09:51:08.553362622 +0000 UTC m=+0.130841752 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 09:51:08 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:51:08 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29008 DF PROTO=TCP SPT=57728 DPT=9102 SEQ=4231890149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD171D90000000001030307) 
Dec 05 09:51:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:51:13 np0005546420.localdomain podman[282300]: 2025-12-05 09:51:13.491301876 +0000 UTC m=+0.072498887 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 09:51:13 np0005546420.localdomain podman[282300]: 2025-12-05 09:51:13.536442628 +0000 UTC m=+0.117639699 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 09:51:13 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:51:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:51:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:51:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:51:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:51:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:51:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16745 "" "Go-http-client/1.1"
Dec 05 09:51:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:51:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:51:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:51:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:51:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:51:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:51:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:51:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:51:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:51:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:51:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:51:20 np0005546420.localdomain podman[282325]: 2025-12-05 09:51:20.487123044 +0000 UTC m=+0.072006312 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 09:51:20 np0005546420.localdomain podman[282325]: 2025-12-05 09:51:20.497541425 +0000 UTC m=+0.082424763 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible)
Dec 05 09:51:20 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:51:20 np0005546420.localdomain sudo[282344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:51:20 np0005546420.localdomain sudo[282344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:51:20 np0005546420.localdomain sudo[282344]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:20 np0005546420.localdomain sudo[282362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:51:20 np0005546420.localdomain sudo[282362]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:51:21 np0005546420.localdomain sudo[282362]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:21 np0005546420.localdomain sudo[282413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:51:21 np0005546420.localdomain sudo[282413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:51:21 np0005546420.localdomain sudo[282413]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:21 np0005546420.localdomain sudo[282431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 09:51:21 np0005546420.localdomain sudo[282431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:51:22 np0005546420.localdomain sudo[282431]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:23 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37116 DF PROTO=TCP SPT=59010 DPT=9102 SEQ=1459906025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD1AA490000000001030307) 
Dec 05 09:51:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37117 DF PROTO=TCP SPT=59010 DPT=9102 SEQ=1459906025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD1AE5A0000000001030307) 
Dec 05 09:51:24 np0005546420.localdomain sudo[282467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:51:24 np0005546420.localdomain sudo[282467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:51:24 np0005546420.localdomain sudo[282467]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29009 DF PROTO=TCP SPT=57728 DPT=9102 SEQ=4231890149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD1B1DA0000000001030307) 
Dec 05 09:51:26 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37118 DF PROTO=TCP SPT=59010 DPT=9102 SEQ=1459906025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD1B6590000000001030307) 
Dec 05 09:51:27 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30447 DF PROTO=TCP SPT=45326 DPT=9102 SEQ=1991022629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD1B9DB0000000001030307) 
Dec 05 09:51:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:51:27 np0005546420.localdomain podman[282485]: 2025-12-05 09:51:27.508824065 +0000 UTC m=+0.085621770 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 05 09:51:27 np0005546420.localdomain podman[282485]: 2025-12-05 09:51:27.541420771 +0000 UTC m=+0.118218436 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 09:51:27 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:51:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:51:29 np0005546420.localdomain podman[282502]: 2025-12-05 09:51:29.527167658 +0000 UTC m=+0.076768809 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:51:29 np0005546420.localdomain podman[282502]: 2025-12-05 09:51:29.536805024 +0000 UTC m=+0.086406175 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:51:29 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:51:30 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37119 DF PROTO=TCP SPT=59010 DPT=9102 SEQ=1459906025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD1C6190000000001030307) 
Dec 05 09:51:32 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 05 09:51:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:51:34 np0005546420.localdomain systemd[1]: tmp-crun.cMC62u.mount: Deactivated successfully.
Dec 05 09:51:34 np0005546420.localdomain podman[282525]: 2025-12-05 09:51:34.63042273 +0000 UTC m=+0.213779464 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Dec 05 09:51:34 np0005546420.localdomain podman[282525]: 2025-12-05 09:51:34.730747913 +0000 UTC m=+0.314104587 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 09:51:34 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
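[editorial aside] The three entries above trace one complete podman healthcheck cycle for the multipathd container: the transient systemd unit starts, podman logs a health_status event, the exec session dies, and the unit deactivates. A minimal sketch for pulling the health results out of such journal text, assuming only the "container <event> <id> (key=value, ...)" layout visible in these lines:

    # Sketch: extract container health results from podman journal events.
    # The field layout is an assumption based on the lines above.
    import re
    import sys

    EVENT_RE = re.compile(
        r"container (?P<event>health_status|exec_died) "
        r"(?P<cid>[0-9a-f]{64}) \("
    )
    NAME_RE = re.compile(r"name=(?P<name>[^,)]+)")
    HEALTH_RE = re.compile(r"health_status=(?P<health>[^,)]+)")

    def parse_event(line: str):
        m = EVENT_RE.search(line)
        if not m:
            return None
        name = NAME_RE.search(line)
        health = HEALTH_RE.search(line)
        return {
            "event": m.group("event"),
            "cid": m.group("cid")[:12],  # short id, like `podman ps`
            "name": name.group("name") if name else None,
            "health": health.group("health") if health else None,
        }

    if __name__ == "__main__":
        for line in sys.stdin:
            ev = parse_event(line)
            if ev and ev["event"] == "health_status":
                print(f'{ev["name"]} ({ev["cid"]}): {ev["health"]}')

Fed this section of the journal, it would report multipathd, podman_exporter, openstack_network_exporter, ovn_controller, ceilometer_agent_compute, node_exporter, and ovn_metadata_agent as healthy.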
Dec 05 09:51:38 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37120 DF PROTO=TCP SPT=59010 DPT=9102 SEQ=1459906025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD1E5DA0000000001030307) 
Dec 05 09:51:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:51:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:51:39 np0005546420.localdomain systemd[1]: tmp-crun.rFdLjq.mount: Deactivated successfully.
Dec 05 09:51:39 np0005546420.localdomain podman[282545]: 2025-12-05 09:51:39.524401879 +0000 UTC m=+0.090516403 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:51:39 np0005546420.localdomain podman[282545]: 2025-12-05 09:51:39.531389564 +0000 UTC m=+0.097504108 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:51:39 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:51:39 np0005546420.localdomain podman[282544]: 2025-12-05 09:51:39.612132224 +0000 UTC m=+0.189083053 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:51:39 np0005546420.localdomain podman[282544]: 2025-12-05 09:51:39.623491694 +0000 UTC m=+0.200442523 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.)
Dec 05 09:51:39 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:51:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:51:44 np0005546420.localdomain systemd[1]: tmp-crun.SC8Jw9.mount: Deactivated successfully.
Dec 05 09:51:44 np0005546420.localdomain podman[282587]: 2025-12-05 09:51:44.510154028 +0000 UTC m=+0.087736206 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 09:51:44 np0005546420.localdomain podman[282587]: 2025-12-05 09:51:44.622045669 +0000 UTC m=+0.199627857 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:51:44 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:51:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:51:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:51:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:51:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:51:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:51:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16747 "" "Go-http-client/1.1"
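[editorial aside] The two access-log lines record podman's REST service answering libpod queries (container list, then container stats) over its unix socket; the caller is almost certainly the podman_exporter seen above. A sketch of issuing the same containers/json query with only the Python standard library; the socket path /run/podman/podman.sock is taken from the exporter's logged config and is environment-specific:

    # Sketch: query the libpod REST API over podman's unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint and API version as the request logged above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for c in json.loads(resp.read()):
        print(c.get("Names"), c.get("State"))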
Dec 05 09:51:48 np0005546420.localdomain sshd[282612]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:51:48 np0005546420.localdomain sshd[282612]: Accepted publickey for zuul from 38.102.83.114 port 51074 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:51:48 np0005546420.localdomain systemd-logind[762]: New session 60 of user zuul.
Dec 05 09:51:48 np0005546420.localdomain systemd[1]: Started Session 60 of User zuul.
Dec 05 09:51:48 np0005546420.localdomain sshd[282612]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 09:51:48 np0005546420.localdomain sudo[282632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yedzglkblsrspsqtrsjhskpybarwsmrj ; /usr/bin/python3
Dec 05 09:51:48 np0005546420.localdomain sudo[282632]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 09:51:48 np0005546420.localdomain python3[282634]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:51:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:51:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:51:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:51:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:51:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:51:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:51:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:51:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:51:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:51:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
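[editorial aside] These exporter errors are expected on a compute-only node: the appctl-style calls need a per-daemon control socket, and neither ovn-northd nor a local ovsdb-server control socket exists here (the pmd-* calls additionally require a userspace datapath, which this host does not run). A sketch of the same existence check, assuming the conventional <run_dir>/<daemon>.<pid>.ctl naming and the /run/ovn and /run/openvswitch mounts from the container config:

    # Sketch: look for the control sockets the exporter complains about.
    # Paths and naming convention are assumptions, not taken from this log.
    import glob

    def find_ctl(run_dir: str, daemon: str) -> list[str]:
        return glob.glob(f"{run_dir}/{daemon}.*.ctl")

    for run_dir, daemon in [("/run/ovn", "ovn-northd"),
                            ("/run/openvswitch", "ovsdb-server")]:
        socks = find_ctl(run_dir, daemon)
        print(daemon, "->", socks or "no control socket files found")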
Dec 05 09:51:49 np0005546420.localdomain subscription-manager[282635]: Unregistered machine with identity: c9187bef-3dbe-4968-b437-f5e9b8b2ee84
Dec 05 09:51:49 np0005546420.localdomain sudo[282632]: pam_unix(sudo:session): session closed for user root
Dec 05 09:51:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:51:51 np0005546420.localdomain podman[282637]: 2025-12-05 09:51:51.491332053 +0000 UTC m=+0.066382888 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 09:51:51 np0005546420.localdomain podman[282637]: 2025-12-05 09:51:51.523269098 +0000 UTC m=+0.098319903 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 09:51:51 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:51:53 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44694 DF PROTO=TCP SPT=44032 DPT=9102 SEQ=2454602220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD21F780000000001030307) 
Dec 05 09:51:54 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44695 DF PROTO=TCP SPT=44032 DPT=9102 SEQ=2454602220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD2239A0000000001030307) 
Dec 05 09:51:55 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37121 DF PROTO=TCP SPT=59010 DPT=9102 SEQ=1459906025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD225DA0000000001030307) 
Dec 05 09:51:56 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44696 DF PROTO=TCP SPT=44032 DPT=9102 SEQ=2454602220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD22B990000000001030307) 
Dec 05 09:51:57 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29010 DF PROTO=TCP SPT=57728 DPT=9102 SEQ=4231890149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD22FD90000000001030307) 
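[editorial aside] The DROPPING entries are netfilter LOG output for SYNs from 192.168.122.10 to port 9102 that a local rule drops; the unchanged SEQ=2454602220 across IP IDs 44694-44696 marks them as TCP retransmissions of a single blocked connection attempt. A sketch that parses the KEY=VALUE fields into a dict ("DROPPING: " is this host's configured log-prefix; flag tokens like SYN carry no "=" and are skipped):

    # Sketch: structure a netfilter LOG line from this journal.
    import re

    FIELD_RE = re.compile(r"(\w+)=(\S+)")

    def parse_drop(line: str) -> dict:
        _, _, rest = line.partition("DROPPING: ")
        return dict(FIELD_RE.findall(rest))

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "SRC=192.168.122.10 DST=192.168.122.107 PROTO=TCP "
            "SPT=44032 DPT=9102 SYN")
    d = parse_drop(line)
    print(d["SRC"], "->", d["DST"], "dport", d["DPT"])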
Dec 05 09:51:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:51:58 np0005546420.localdomain podman[282657]: 2025-12-05 09:51:58.499449797 +0000 UTC m=+0.079227814 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 09:51:58 np0005546420.localdomain podman[282657]: 2025-12-05 09:51:58.503017107 +0000 UTC m=+0.082795124 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:51:58 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:52:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:52:00 np0005546420.localdomain podman[282675]: 2025-12-05 09:52:00.507891133 +0000 UTC m=+0.086710635 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:52:00 np0005546420.localdomain podman[282675]: 2025-12-05 09:52:00.52336651 +0000 UTC m=+0.102186022 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:52:00 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
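[editorial aside] The node_exporter config above restricts its systemd collector to units matching the unit-include regex, keeping the scrape payload small. A sketch reproducing that filter on a few unit names; Python's fullmatch approximates the anchored Go regex matching node_exporter applies:

    # Sketch: which systemd units the collector above would keep.
    import re

    unit_include = re.compile(
        r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["ovsdb-server.service", "sshd.service",
                 "virtqemud.service", "edpm_nova.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))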
Dec 05 09:52:00 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44697 DF PROTO=TCP SPT=44032 DPT=9102 SEQ=2454602220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD23B5A0000000001030307) 
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.068 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.068 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.069 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.092 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.093 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.094 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.886 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.887 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.887 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.887 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:52:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:02.887 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:52:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:03.314 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:52:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:03.485 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:52:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:03.486 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12835MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:52:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:03.486 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:52:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:03.486 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:52:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:03.538 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:52:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:03.538 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:52:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:03.555 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:52:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:04.023 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:52:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:04.030 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:52:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:04.065 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:52:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:04.067 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:52:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:04.068 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
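[editorial aside] The nova_compute run above is one pass of the update_available_resource periodic task: take the compute_resources lock, audit local resources (shelling out to ceph df because the ephemeral disk is RBD-backed), and confirm placement inventory is unchanged. A sketch of the disk step; the command line is copied from the log, while the stats/total_avail_bytes JSON keys are an assumption about ceph's report format:

    # Sketch: the `ceph df` audit visible in the log above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    # Assumed key names for cluster-wide stats in the JSON report.
    stats = json.loads(out)["stats"]
    free_gb = stats["total_avail_bytes"] / 1024**3
    print(f"free_disk={free_gb:.2f}GB")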
Dec 05 09:52:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:52:04.112 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:52:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:52:04.113 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:52:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:52:04.114 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
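[editorial aside] The acquire/release pairs logged by both nova_compute and ovn_metadata_agent come from oslo.concurrency's named-lock helper, which emits exactly these "Acquiring lock ... / Lock ... acquired ... / Lock ... released" debug lines. A minimal sketch with the real lockutils.synchronized decorator; the function body is a placeholder:

    # Sketch: the lock pattern behind the log lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Critical section: only one periodic task may touch the
        # resource tracker state at a time.
        pass

    update_available_resource()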
Dec 05 09:52:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:52:05.068 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:52:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:52:05 np0005546420.localdomain systemd[1]: tmp-crun.X0iTFg.mount: Deactivated successfully.
Dec 05 09:52:05 np0005546420.localdomain podman[282743]: 2025-12-05 09:52:05.499309108 +0000 UTC m=+0.077325116 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 05 09:52:05 np0005546420.localdomain podman[282743]: 2025-12-05 09:52:05.540341203 +0000 UTC m=+0.118357141 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 09:52:05 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:52:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7200.1 total, 600.0 interval
                                                          Cumulative writes: 5815 writes, 25K keys, 5815 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5815 writes, 771 syncs, 7.54 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 100 writes, 340 keys, 100 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                                          Interval WAL: 100 writes, 37 syncs, 2.70 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
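[editorial aside] The indented block is a rocksdb statistics dump from this ceph-osd, emitted every 600 s of uptime: cumulative and per-interval write, WAL, and stall counters (all stalls here are zero, so the OSD's metadata store is keeping up). A sketch pulling the cumulative write counters out of such a dump:

    # Sketch: parse the cumulative write line of a rocksdb DB Stats dump.
    import re

    CUM_RE = re.compile(
        r"Cumulative writes: (?P<writes>\d+) writes, (?P<keys>\S+) keys")

    def parse_db_stats(text: str):
        m = CUM_RE.search(text)
        return m.groupdict() if m else None

    sample = "Cumulative writes: 5815 writes, 25K keys, 5815 commit groups"
    print(parse_db_stats(sample))  # {'writes': '5815', 'keys': '25K'}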
Dec 05 09:52:08 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44698 DF PROTO=TCP SPT=44032 DPT=9102 SEQ=2454602220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD25BD90000000001030307) 
Dec 05 09:52:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:52:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:52:09 np0005546420.localdomain podman[282762]: 2025-12-05 09:52:09.7937638 +0000 UTC m=+0.057234176 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64)
Dec 05 09:52:09 np0005546420.localdomain podman[282762]: 2025-12-05 09:52:09.801503208 +0000 UTC m=+0.064973584 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:52:09 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:52:09 np0005546420.localdomain podman[282763]: 2025-12-05 09:52:09.854878494 +0000 UTC m=+0.114299625 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:52:09 np0005546420.localdomain podman[282763]: 2025-12-05 09:52:09.858448875 +0000 UTC m=+0.117869966 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:52:09 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:52:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7200.2 total, 600.0 interval
                                                          Cumulative writes: 4726 writes, 21K keys, 4726 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4726 writes, 602 syncs, 7.85 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 36 writes, 90 keys, 36 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s
                                                          Interval WAL: 36 writes, 18 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
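
The rocksdb DB Stats dump above reports cumulative counters (since OSD start, 7200 s of uptime) alongside 600-second interval counters. The derived figures are simple ratios; the interval WAL line's 2.00 writes per sync, for instance, is 36 writes over 18 syncs. A minimal Python sketch of that check (the line text is copied from the dump; the parser itself is illustrative):

    import re

    wal = "Interval WAL: 36 writes, 18 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s"
    m = re.match(r"Interval WAL: (\d+) writes, (\d+) syncs", wal)
    writes, syncs = map(int, m.groups())
    print(f"{writes / syncs:.2f} writes per sync")  # 2.00, matching the dump
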
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:52:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:52:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
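
Every "Skip pollster" line above comes from the same code path (poll_and_notify, manager.py:193): the compute agent discovered no instances this polling cycle, so each configured meter is skipped with a DEBUG message rather than an error. A sketch of that skip pattern, assuming a simplified pollster/resource model rather than the actual ceilometer internals:

    import logging

    LOG = logging.getLogger("ceilometer.polling.manager")

    def poll_and_notify(pollsters, resources):
        # With an empty resource list, every pollster is skipped with a
        # DEBUG line, producing runs of messages like the ones above.
        for name in pollsters:
            if not resources:
                LOG.debug("Skip pollster %s, no resources found this cycle", name)
                continue
            # ...otherwise, collect and publish samples for `name` here.
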
Dec 05 09:52:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:52:15 np0005546420.localdomain podman[282805]: 2025-12-05 09:52:15.50622622 +0000 UTC m=+0.079524734 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 05 09:52:15 np0005546420.localdomain podman[282805]: 2025-12-05 09:52:15.53477297 +0000 UTC m=+0.108071454 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 09:52:15 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
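
The pattern above (transient unit started, health_status/exec_died container events, unit deactivated) repeats for every container healthcheck on this host: systemd fires "/usr/bin/podman healthcheck run <id>" on a timer, podman executes the container's configured test, and the unit exits. A sketch of invoking the same check by hand (container ID copied from the log; "podman healthcheck run" exits 0 on a passing test):

    import subprocess

    cid = "d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0"
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else "unhealthy")  # healthy, per the events above
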
Dec 05 09:52:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:52:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:52:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:52:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146542 "" "Go-http-client/1.1"
Dec 05 09:52:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:52:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16748 "" "Go-http-client/1.1"
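
These two access-log lines are the podman service API answering podman_exporter, which scrapes /run/podman/podman.sock (bind-mounted into the exporter container per its config_data above). A standard-library sketch of the same containers/json query over the UNIX socket (endpoint path taken from the log; the connection helper is illustrative):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Just enough HTTP-over-UNIX-socket to reach the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().status)  # 200, as in the access log above
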
Dec 05 09:52:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:52:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:52:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:52:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:52:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:52:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:52:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:52:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:52:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:52:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
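
The exporter errors above are expected on a compute-only node: openstack_network_exporter probes for the appctl control sockets of ovn-northd and ovsdb-server, and neither daemon runs here, so the socket lookup comes back empty and the dpif-netdev calls have no datapath to target. A sketch of that discovery step, assuming the conventional <daemon>.<pid>.ctl naming under /run/ovn and /run/openvswitch:

    import glob

    def find_ctl(run_dir, daemon):
        # OVS/OVN daemons expose appctl sockets named <daemon>.<pid>.ctl;
        # an empty result is what yields "no control socket files found".
        return glob.glob(f"{run_dir}/{daemon}.*.ctl")

    print(find_ctl("/run/ovn", "ovn-northd"))            # [] on this node
    print(find_ctl("/run/openvswitch", "ovsdb-server"))  # [] on this node
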
Dec 05 09:52:22 np0005546420.localdomain sudo[282831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:22 np0005546420.localdomain sudo[282831]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:52:22 np0005546420.localdomain sudo[282831]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:22 np0005546420.localdomain podman[282849]: 2025-12-05 09:52:22.459844994 +0000 UTC m=+0.069192895 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:52:22 np0005546420.localdomain podman[282849]: 2025-12-05 09:52:22.46944341 +0000 UTC m=+0.078791291 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true)
Dec 05 09:52:22 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:52:22 np0005546420.localdomain sudo[282868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:52:22 np0005546420.localdomain sudo[282868]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:22 np0005546420.localdomain sudo[282868]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:22 np0005546420.localdomain sudo[282886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:52:22 np0005546420.localdomain sudo[282886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:23 np0005546420.localdomain sudo[282886]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:23 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57964 DF PROTO=TCP SPT=51840 DPT=9102 SEQ=1349597502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD294A80000000001030307) 
Dec 05 09:52:23 np0005546420.localdomain sudo[282935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:23 np0005546420.localdomain sudo[282935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:23 np0005546420.localdomain sudo[282935]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:24 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57965 DF PROTO=TCP SPT=51840 DPT=9102 SEQ=1349597502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD2989A0000000001030307) 
Dec 05 09:52:25 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44699 DF PROTO=TCP SPT=44032 DPT=9102 SEQ=2454602220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD29BD90000000001030307) 
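
The kernel DROPPING entries record SYN packets from 192.168.122.10 to port 9102 on br-ex being logged and dropped by the host firewall; the entries a few seconds apart are TCP retransmits of the same attempt (the flow from source port 51840 keeps SEQ=1349597502). For triage, the KEY=value fields parse with a one-line regex; a sketch using a shortened copy of one log line:

    import re

    line = ("DROPPING: IN=br-ex OUT= SRC=192.168.122.10 DST=192.168.122.107 "
            "PROTO=TCP SPT=51840 DPT=9102 SYN")
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    print(fields["SRC"], "->", fields["DST"], "port", fields["DPT"])
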
Dec 05 09:52:26 np0005546420.localdomain sshd[282953]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:52:26 np0005546420.localdomain sshd[282953]: Accepted publickey for tripleo-admin from 192.168.122.11 port 58982 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:52:26 np0005546420.localdomain systemd-logind[762]: New session 61 of user tripleo-admin.
Dec 05 09:52:26 np0005546420.localdomain systemd[1]: Created slice User Slice of UID 1003.
Dec 05 09:52:26 np0005546420.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 05 09:52:26 np0005546420.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 05 09:52:26 np0005546420.localdomain systemd[1]: Starting User Manager for UID 1003...
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 05 09:52:26 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57966 DF PROTO=TCP SPT=51840 DPT=9102 SEQ=1349597502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD2A09A0000000001030307) 
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Queued start job for default target Main User Target.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Created slice User Application Slice.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Reached target Paths.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Reached target Timers.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Starting D-Bus User Message Bus Socket...
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Starting Create User's Volatile Files and Directories...
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Finished Create User's Volatile Files and Directories.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Listening on D-Bus User Message Bus Socket.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Reached target Sockets.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Reached target Basic System.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Reached target Main User Target.
Dec 05 09:52:26 np0005546420.localdomain systemd[282957]: Startup finished in 130ms.
Dec 05 09:52:26 np0005546420.localdomain systemd[1]: Started User Manager for UID 1003.
Dec 05 09:52:26 np0005546420.localdomain systemd[1]: Started Session 61 of User tripleo-admin.
Dec 05 09:52:26 np0005546420.localdomain sshd[282953]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 05 09:52:27 np0005546420.localdomain sudo[283097]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqjizriquiypiiafghgzwcosrqubdjcx ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764928346.6491172-60569-21241545889466/AnsiballZ_blockinfile.py
Dec 05 09:52:27 np0005546420.localdomain sudo[283097]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 05 09:52:27 np0005546420.localdomain python3[283099]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"
                                                          # 100 ceph_dashboard (8443)
                                                          add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"
                                                          # 100 ceph_grafana (3100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"
                                                          # 100 ceph_prometheus (9092)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"
                                                          # 100 ceph_rgw (8080)
                                                          add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"
                                                          # 110 ceph_mon (6789, 3300, 9100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"
                                                          # 112 ceph_mds (6800-7300, 9100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"
                                                          # 113 ceph_mgr (6800-7300, 8444)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"
                                                          # 120 ceph_nfs (2049, 12049)
                                                          add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"
                                                          # 123 ceph_dashboard (9090, 9094, 9283)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"
                                                           insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:52:27 np0005546420.localdomain sudo[283097]: pam_unix(sudo:session): session closed for user root
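
The blockinfile task above rewrites the ceph accept rules as a managed block in /etc/nftables/edpm-rules.nft, anchored before the "# Lock down INPUT chains" marker so the accepts precede the final lock-down; the nftables restart a few lines below loads the updated file. All the rules share one shape, so a small renderer covers them; a sketch with the helper name invented for illustration and values copied from the block:

    def render_rule(comment, ports):
        # Mirrors the rule shape in the managed block above; `ports` may
        # mix single ports (6789) and ranges ("6800-7300").
        dports = ",".join(str(p) for p in ports)
        return (f"add rule inet filter EDPM_INPUT tcp dport {{ {dports} }} "
                f'ct state new counter accept comment "{comment}"')

    print(render_rule("110 ceph_mon", [6789, 3300, 9100]))
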
Dec 05 09:52:27 np0005546420.localdomain systemd-journald[48245]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 80.5 (268 of 333 items), suggesting rotation.
Dec 05 09:52:27 np0005546420.localdomain systemd-journald[48245]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 05 09:52:27 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 09:52:27 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
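
The rotation above is journald reacting to its field hash table passing the fill threshold: 268 of 333 slots is 268/333 ≈ 80.5 %, exactly the level logged, so the runtime journal is rotated and rsyslog's imjournal follows the file change. The arithmetic, for the record:

    print(f"{268 / 333:.1%}")  # 80.5%, the fill level journald reported
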
Dec 05 09:52:27 np0005546420.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:67:6b:07 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37122 DF PROTO=TCP SPT=59010 DPT=9102 SEQ=1459906025 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A2AD2A3DA0000000001030307) 
Dec 05 09:52:27 np0005546420.localdomain sudo[283242]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fwkpncmkcayuriridtayvqhnyqaamgvj ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764928347.3756924-60583-17349836063369/AnsiballZ_systemd.py
Dec 05 09:52:27 np0005546420.localdomain sudo[283242]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 05 09:52:28 np0005546420.localdomain python3[283244]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 09:52:28 np0005546420.localdomain systemd[1]: Stopping Netfilter Tables...
Dec 05 09:52:28 np0005546420.localdomain systemd[1]: nftables.service: Deactivated successfully.
Dec 05 09:52:28 np0005546420.localdomain systemd[1]: Stopped Netfilter Tables.
Dec 05 09:52:28 np0005546420.localdomain systemd[1]: Starting Netfilter Tables...
Dec 05 09:52:28 np0005546420.localdomain systemd[1]: Finished Netfilter Tables.
Dec 05 09:52:28 np0005546420.localdomain sudo[283242]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:52:29 np0005546420.localdomain podman[283268]: 2025-12-05 09:52:29.492824974 +0000 UTC m=+0.068910486 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Dec 05 09:52:29 np0005546420.localdomain podman[283268]: 2025-12-05 09:52:29.503372159 +0000 UTC m=+0.079457711 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:52:29 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:52:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:52:31 np0005546420.localdomain systemd[1]: tmp-crun.wSHrYF.mount: Deactivated successfully.
Dec 05 09:52:31 np0005546420.localdomain podman[283288]: 2025-12-05 09:52:31.522393222 +0000 UTC m=+0.098518189 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:52:31 np0005546420.localdomain podman[283288]: 2025-12-05 09:52:31.561467167 +0000 UTC m=+0.137592104 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:52:31 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:52:33 np0005546420.localdomain sudo[283311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:33 np0005546420.localdomain sudo[283311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:33 np0005546420.localdomain sudo[283311]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:35 np0005546420.localdomain sudo[283329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:35 np0005546420.localdomain sudo[283329]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:52:35 np0005546420.localdomain sudo[283329]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:35 np0005546420.localdomain systemd[1]: tmp-crun.KRHOfg.mount: Deactivated successfully.
Dec 05 09:52:35 np0005546420.localdomain podman[283346]: 2025-12-05 09:52:35.769912295 +0000 UTC m=+0.096726223 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 09:52:35 np0005546420.localdomain podman[283346]: 2025-12-05 09:52:35.786452835 +0000 UTC m=+0.113266743 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:52:35 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:52:36 np0005546420.localdomain sudo[283366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:36 np0005546420.localdomain sudo[283366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:36 np0005546420.localdomain sudo[283366]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:38 np0005546420.localdomain sudo[283384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:38 np0005546420.localdomain sudo[283384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:38 np0005546420.localdomain sudo[283384]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:39 np0005546420.localdomain sudo[283402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:39 np0005546420.localdomain sudo[283402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:39 np0005546420.localdomain sudo[283402]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:52:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:52:40 np0005546420.localdomain systemd[1]: tmp-crun.Gltkm9.mount: Deactivated successfully.
Dec 05 09:52:40 np0005546420.localdomain podman[283421]: 2025-12-05 09:52:40.516301164 +0000 UTC m=+0.090808751 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:52:40 np0005546420.localdomain podman[283420]: 2025-12-05 09:52:40.5712807 +0000 UTC m=+0.145974283 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, vcs-type=git, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.openshift.tags=minimal rhel9)
Dec 05 09:52:40 np0005546420.localdomain podman[283420]: 2025-12-05 09:52:40.582548857 +0000 UTC m=+0.157242460 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7)
Dec 05 09:52:40 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:52:40 np0005546420.localdomain podman[283421]: 2025-12-05 09:52:40.637201173 +0000 UTC m=+0.211708790 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:52:40 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:52:40 np0005546420.localdomain sudo[283463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:40 np0005546420.localdomain sudo[283463]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:40 np0005546420.localdomain sudo[283463]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:44 np0005546420.localdomain sudo[283481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:52:44 np0005546420.localdomain sudo[283481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:44 np0005546420.localdomain sudo[283481]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:44 np0005546420.localdomain sudo[283499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:52:44 np0005546420.localdomain sudo[283499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:44 np0005546420.localdomain podman[283559]: 2025-12-05 09:52:44.919740506 +0000 UTC m=+0.087753997 container create afedd00c24525a8ea61fa20bc9ffdee7890e96d42674c550e1660c5c51906b2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_turing, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, io.buildah.version=1.41.4, ceph=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=)
Dec 05 09:52:44 np0005546420.localdomain systemd[1]: Started libpod-conmon-afedd00c24525a8ea61fa20bc9ffdee7890e96d42674c550e1660c5c51906b2e.scope.
Dec 05 09:52:44 np0005546420.localdomain podman[283559]: 2025-12-05 09:52:44.880837596 +0000 UTC m=+0.048851147 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:52:44 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:52:44 np0005546420.localdomain podman[283559]: 2025-12-05 09:52:44.996261546 +0000 UTC m=+0.164275057 container init afedd00c24525a8ea61fa20bc9ffdee7890e96d42674c550e1660c5c51906b2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_turing, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main)
Dec 05 09:52:45 np0005546420.localdomain podman[283559]: 2025-12-05 09:52:45.006564653 +0000 UTC m=+0.174578144 container start afedd00c24525a8ea61fa20bc9ffdee7890e96d42674c550e1660c5c51906b2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_turing, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Dec 05 09:52:45 np0005546420.localdomain podman[283559]: 2025-12-05 09:52:45.006891193 +0000 UTC m=+0.174904764 container attach afedd00c24525a8ea61fa20bc9ffdee7890e96d42674c550e1660c5c51906b2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_turing, release=1763362218, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 09:52:45 np0005546420.localdomain busy_turing[283574]: 167 167
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: libpod-afedd00c24525a8ea61fa20bc9ffdee7890e96d42674c550e1660c5c51906b2e.scope: Deactivated successfully.
Dec 05 09:52:45 np0005546420.localdomain podman[283559]: 2025-12-05 09:52:45.010482195 +0000 UTC m=+0.178495696 container died afedd00c24525a8ea61fa20bc9ffdee7890e96d42674c550e1660c5c51906b2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_turing, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, name=rhceph)
Dec 05 09:52:45 np0005546420.localdomain podman[283579]: 2025-12-05 09:52:45.118831155 +0000 UTC m=+0.094229576 container remove afedd00c24525a8ea61fa20bc9ffdee7890e96d42674c550e1660c5c51906b2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_turing, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1763362218, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7)
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: libpod-conmon-afedd00c24525a8ea61fa20bc9ffdee7890e96d42674c550e1660c5c51906b2e.scope: Deactivated successfully.
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:52:45 np0005546420.localdomain systemd-sysv-generator[283621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:52:45 np0005546420.localdomain systemd-rc-local-generator[283618]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-01f69c594f7d328c22c7bf93a35455e00c993a124600d2de0be94baeab6c8487-merged.mount: Deactivated successfully.
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:52:45 np0005546420.localdomain podman[283634]: 2025-12-05 09:52:45.662392838 +0000 UTC m=+0.066962835 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller)
Dec 05 09:52:45 np0005546420.localdomain podman[283634]: 2025-12-05 09:52:45.701681119 +0000 UTC m=+0.106251136 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:52:45 np0005546420.localdomain systemd-sysv-generator[283689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:52:45 np0005546420.localdomain systemd-rc-local-generator[283685]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:52:45 np0005546420.localdomain systemd[1]: Starting Ceph mds.mds.np0005546420.eqhasr for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b...
Dec 05 09:52:46 np0005546420.localdomain podman[283752]: 2025-12-05 09:52:46.200705519 +0000 UTC m=+0.059176677 container create d29a603d5e8c9438bb9b3dadfe2e3f440b3e1c62eeeea494998f4904f2104a23 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mds-mds-np0005546420-eqhasr, ceph=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, vcs-type=git, release=1763362218, name=rhceph)
Dec 05 09:52:46 np0005546420.localdomain systemd[1]: tmp-crun.3Edosi.mount: Deactivated successfully.
Dec 05 09:52:46 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3919f3c6f7bd29148d04caf4595d8b0ef2a4e3fb333492a0bebc3c0acc0eae47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:46 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3919f3c6f7bd29148d04caf4595d8b0ef2a4e3fb333492a0bebc3c0acc0eae47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:46 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3919f3c6f7bd29148d04caf4595d8b0ef2a4e3fb333492a0bebc3c0acc0eae47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:46 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3919f3c6f7bd29148d04caf4595d8b0ef2a4e3fb333492a0bebc3c0acc0eae47/merged/var/lib/ceph/mds/ceph-mds.np0005546420.eqhasr supports timestamps until 2038 (0x7fffffff)
Dec 05 09:52:46 np0005546420.localdomain podman[283752]: 2025-12-05 09:52:46.255085035 +0000 UTC m=+0.113556193 container init d29a603d5e8c9438bb9b3dadfe2e3f440b3e1c62eeeea494998f4904f2104a23 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mds-mds-np0005546420-eqhasr, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1763362218, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, build-date=2025-11-26T19:44:28Z)
Dec 05 09:52:46 np0005546420.localdomain podman[283752]: 2025-12-05 09:52:46.263481734 +0000 UTC m=+0.121952892 container start d29a603d5e8c9438bb9b3dadfe2e3f440b3e1c62eeeea494998f4904f2104a23 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mds-mds-np0005546420-eqhasr, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, release=1763362218, vcs-type=git, io.openshift.expose-services=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7)
Dec 05 09:52:46 np0005546420.localdomain bash[283752]: d29a603d5e8c9438bb9b3dadfe2e3f440b3e1c62eeeea494998f4904f2104a23
Dec 05 09:52:46 np0005546420.localdomain podman[283752]: 2025-12-05 09:52:46.17317358 +0000 UTC m=+0.031644808 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:52:46 np0005546420.localdomain systemd[1]: Started Ceph mds.mds.np0005546420.eqhasr for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 05 09:52:46 np0005546420.localdomain ceph-mds[283770]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:52:46 np0005546420.localdomain ceph-mds[283770]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2
Dec 05 09:52:46 np0005546420.localdomain ceph-mds[283770]: main not setting numa affinity
Dec 05 09:52:46 np0005546420.localdomain ceph-mds[283770]: pidfile_write: ignore empty --pid-file
Dec 05 09:52:46 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mds-mds-np0005546420-eqhasr[283766]: starting mds.mds.np0005546420.eqhasr at 
Dec 05 09:52:46 np0005546420.localdomain sudo[283499]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:46 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr Updating MDS map to version 7 from mon.2
Dec 05 09:52:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:52:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:52:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:52:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148690 "" "Go-http-client/1.1"
Dec 05 09:52:47 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr Updating MDS map to version 8 from mon.2
Dec 05 09:52:47 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr Monitors have assigned me to become a standby.
Dec 05 09:52:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:52:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17236 "" "Go-http-client/1.1"
Dec 05 09:52:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:52:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:52:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:52:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:52:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:52:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:52:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:52:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:52:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:52:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:52:49 np0005546420.localdomain sudo[283790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:49 np0005546420.localdomain sudo[283790]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:49 np0005546420.localdomain sudo[283790]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:49 np0005546420.localdomain sudo[283808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:52:49 np0005546420.localdomain sudo[283808]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:49 np0005546420.localdomain sudo[283808]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:49 np0005546420.localdomain sshd[282615]: Received disconnect from 38.102.83.114 port 51074:11: disconnected by user
Dec 05 09:52:49 np0005546420.localdomain sshd[282615]: Disconnected from user zuul 38.102.83.114 port 51074
Dec 05 09:52:49 np0005546420.localdomain sshd[282612]: pam_unix(sshd:session): session closed for user zuul
Dec 05 09:52:49 np0005546420.localdomain systemd[1]: session-60.scope: Deactivated successfully.
Dec 05 09:52:49 np0005546420.localdomain systemd-logind[762]: Session 60 logged out. Waiting for processes to exit.
Dec 05 09:52:49 np0005546420.localdomain systemd-logind[762]: Removed session 60.
Dec 05 09:52:49 np0005546420.localdomain sudo[283826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:52:49 np0005546420.localdomain sudo[283826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:50 np0005546420.localdomain systemd[1]: tmp-crun.KjsHa6.mount: Deactivated successfully.
Dec 05 09:52:50 np0005546420.localdomain podman[283916]: 2025-12-05 09:52:50.242538469 +0000 UTC m=+0.093119862 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, distribution-scope=public, GIT_BRANCH=main, release=1763362218, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True)
Dec 05 09:52:50 np0005546420.localdomain podman[283916]: 2025-12-05 09:52:50.315577301 +0000 UTC m=+0.166158714 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218)
Dec 05 09:52:50 np0005546420.localdomain sudo[283826]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:51 np0005546420.localdomain sudo[283999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:51 np0005546420.localdomain sudo[283999]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:51 np0005546420.localdomain sudo[283999]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:51 np0005546420.localdomain sudo[284017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:52:51 np0005546420.localdomain sudo[284017]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:52:51 np0005546420.localdomain sudo[284017]: pam_unix(sudo:session): session closed for user root
Dec 05 09:52:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:52:53 np0005546420.localdomain systemd[1]: tmp-crun.Gj7Q5h.mount: Deactivated successfully.
Dec 05 09:52:53 np0005546420.localdomain podman[284035]: 2025-12-05 09:52:53.507648128 +0000 UTC m=+0.087611092 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 09:52:53 np0005546420.localdomain podman[284035]: 2025-12-05 09:52:53.542474032 +0000 UTC m=+0.122437006 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:52:53 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:53:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:53:00 np0005546420.localdomain systemd[1]: tmp-crun.n9JaYh.mount: Deactivated successfully.
Dec 05 09:53:00 np0005546420.localdomain podman[284054]: 2025-12-05 09:53:00.50817951 +0000 UTC m=+0.085801087 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 09:53:00 np0005546420.localdomain podman[284054]: 2025-12-05 09:53:00.538130174 +0000 UTC m=+0.115751661 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 09:53:00 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:53:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:01.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:01.894 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:01.895 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:53:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:01.895 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:53:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:01.911 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:53:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:53:02 np0005546420.localdomain systemd[1]: tmp-crun.ZsFAk5.mount: Deactivated successfully.
Dec 05 09:53:02 np0005546420.localdomain podman[284073]: 2025-12-05 09:53:02.507969839 +0000 UTC m=+0.084759985 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:53:02 np0005546420.localdomain podman[284073]: 2025-12-05 09:53:02.517119901 +0000 UTC m=+0.093910057 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:53:02 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:53:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:02.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:02.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:03.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:03.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:03.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:03.870 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:53:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:03.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:03.890 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:53:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:03.891 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:53:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:03.891 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:53:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:03.892 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:53:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:03.892 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:53:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:53:04.113 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:53:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:53:04.114 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:53:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:53:04.114 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:53:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:04.364 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:53:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:04.584 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:53:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:04.585 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12807MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:53:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:04.585 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:53:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:04.586 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:53:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:04.644 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:53:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:04.644 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:53:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:04.663 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:53:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:05.083 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:53:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:05.090 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:53:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:05.105 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:53:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:05.109 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:53:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:05.110 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:53:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:06.113 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:06.114 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:53:06 np0005546420.localdomain podman[284141]: 2025-12-05 09:53:06.493023399 +0000 UTC m=+0.073180697 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 09:53:06 np0005546420.localdomain podman[284141]: 2025-12-05 09:53:06.502680528 +0000 UTC m=+0.082837856 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 09:53:06 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:53:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:53:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:53:11 np0005546420.localdomain systemd[1]: tmp-crun.32YsuE.mount: Deactivated successfully.
Dec 05 09:53:11 np0005546420.localdomain podman[284161]: 2025-12-05 09:53:11.609000485 +0000 UTC m=+0.114424269 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:53:11 np0005546420.localdomain podman[284161]: 2025-12-05 09:53:11.614161244 +0000 UTC m=+0.119585048 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:53:11 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:53:11 np0005546420.localdomain podman[284160]: 2025-12-05 09:53:11.653366863 +0000 UTC m=+0.165137164 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, config_id=edpm)
Dec 05 09:53:11 np0005546420.localdomain podman[284160]: 2025-12-05 09:53:11.670411968 +0000 UTC m=+0.182182269 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc.)
Dec 05 09:53:11 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:53:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:53:16 np0005546420.localdomain podman[284203]: 2025-12-05 09:53:16.508346879 +0000 UTC m=+0.087903061 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:53:16 np0005546420.localdomain podman[284203]: 2025-12-05 09:53:16.609484298 +0000 UTC m=+0.189040500 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 05 09:53:16 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:53:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:53:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:53:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:53:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148690 "" "Go-http-client/1.1"
Dec 05 09:53:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:53:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17236 "" "Go-http-client/1.1"
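The two GET lines above are the libpod REST API (served by podman[240363] on the socket named in the podman_exporter config earlier, CONTAINER_HOST=unix:///run/podman/podman.sock) answering the exporter's Go HTTP client. A sketch of the same containers/json call from Python over the Unix socket, stdlib only; the socket path and /v4.9.3 API version are taken from the log, everything else is an assumption:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))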
Dec 05 09:53:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:53:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:53:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:53:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:53:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:53:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:53:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:53:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:53:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:53:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
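These ERROR lines recur every polling interval: openstack_network_exporter probes OVS/OVN daemons through their appctl control sockets, and on a compute node that runs only ovn-controller there is no ovn-northd, so those failures look like expected noise; the ovsdb-server error likely means the exporter cannot see that daemon's control socket at the path it expects inside the container. A small check along the same lines, assuming conventional rundir socket patterns (the exact paths are an assumption, not taken verbatim from the log):

    import glob

    # Control-socket patterns the appctl calls would need; paths are assumptions
    # based on conventional OVS/OVN rundirs.
    PATTERNS = {
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    }
    for daemon, pattern in PATTERNS.items():
        hits = glob.glob(pattern)
        print(daemon, "->", hits or "no control socket (matches the ERROR lines)")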
Dec 05 09:53:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:53:24 np0005546420.localdomain podman[284229]: 2025-12-05 09:53:24.493789275 +0000 UTC m=+0.070203799 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm)
Dec 05 09:53:24 np0005546420.localdomain podman[284229]: 2025-12-05 09:53:24.503641097 +0000 UTC m=+0.080055571 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:53:24 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:53:28 np0005546420.localdomain sshd[282972]: Received disconnect from 192.168.122.11 port 58982:11: disconnected by user
Dec 05 09:53:28 np0005546420.localdomain sshd[282972]: Disconnected from user tripleo-admin 192.168.122.11 port 58982
Dec 05 09:53:28 np0005546420.localdomain sshd[282953]: pam_unix(sshd:session): session closed for user tripleo-admin
Dec 05 09:53:28 np0005546420.localdomain systemd[1]: session-61.scope: Deactivated successfully.
Dec 05 09:53:28 np0005546420.localdomain systemd[1]: session-61.scope: Consumed 1.284s CPU time.
Dec 05 09:53:28 np0005546420.localdomain systemd-logind[762]: Session 61 logged out. Waiting for processes to exit.
Dec 05 09:53:28 np0005546420.localdomain systemd-logind[762]: Removed session 61.
Dec 05 09:53:29 np0005546420.localdomain sudo[284248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:53:29 np0005546420.localdomain sudo[284248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:53:29 np0005546420.localdomain sudo[284248]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:29 np0005546420.localdomain sudo[284266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:53:29 np0005546420.localdomain sudo[284266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:53:29 np0005546420.localdomain sudo[284266]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:29 np0005546420.localdomain sudo[284284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:53:29 np0005546420.localdomain sudo[284284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:53:29 np0005546420.localdomain sudo[284284]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:30 np0005546420.localdomain sudo[284334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:53:30 np0005546420.localdomain sudo[284334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:53:30 np0005546420.localdomain sudo[284334]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:30 np0005546420.localdomain sudo[284352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- inventory --format=json-pretty --filter-for-batch
Dec 05 09:53:30 np0005546420.localdomain sudo[284352]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:53:30 np0005546420.localdomain podman[284410]: 2025-12-05 09:53:30.829828835 +0000 UTC m=+0.080479324 container create 10863766ffe1986ee28f62f56acb1655e60fbd8c1407608393dc4da85d469acc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_johnson, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, RELEASE=main, com.redhat.component=rhceph-container, release=1763362218, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:53:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:53:30 np0005546420.localdomain systemd[1]: Started libpod-conmon-10863766ffe1986ee28f62f56acb1655e60fbd8c1407608393dc4da85d469acc.scope.
Dec 05 09:53:30 np0005546420.localdomain podman[284410]: 2025-12-05 09:53:30.795938514 +0000 UTC m=+0.046588973 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:53:30 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:53:30 np0005546420.localdomain podman[284410]: 2025-12-05 09:53:30.916093905 +0000 UTC m=+0.166744364 container init 10863766ffe1986ee28f62f56acb1655e60fbd8c1407608393dc4da85d469acc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_johnson, ceph=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, version=7, architecture=x86_64, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc.)
Dec 05 09:53:30 np0005546420.localdomain podman[284410]: 2025-12-05 09:53:30.92633047 +0000 UTC m=+0.176980929 container start 10863766ffe1986ee28f62f56acb1655e60fbd8c1407608393dc4da85d469acc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_johnson, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:53:30 np0005546420.localdomain podman[284410]: 2025-12-05 09:53:30.928434235 +0000 UTC m=+0.179084704 container attach 10863766ffe1986ee28f62f56acb1655e60fbd8c1407608393dc4da85d469acc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_johnson, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:53:30 np0005546420.localdomain amazing_johnson[284426]: 167 167
Dec 05 09:53:30 np0005546420.localdomain systemd[1]: libpod-10863766ffe1986ee28f62f56acb1655e60fbd8c1407608393dc4da85d469acc.scope: Deactivated successfully.
Dec 05 09:53:30 np0005546420.localdomain podman[284425]: 2025-12-05 09:53:30.97870081 +0000 UTC m=+0.106919547 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 05 09:53:31 np0005546420.localdomain podman[284425]: 2025-12-05 09:53:31.012994313 +0000 UTC m=+0.141213080 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 09:53:31 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:53:31 np0005546420.localdomain podman[284410]: 2025-12-05 09:53:31.032830192 +0000 UTC m=+0.283480641 container died 10863766ffe1986ee28f62f56acb1655e60fbd8c1407608393dc4da85d469acc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_johnson, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, CEPH_POINT_RELEASE=, release=1763362218, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main)
Dec 05 09:53:31 np0005546420.localdomain podman[284441]: 2025-12-05 09:53:31.10078439 +0000 UTC m=+0.159386188 container remove 10863766ffe1986ee28f62f56acb1655e60fbd8c1407608393dc4da85d469acc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_johnson, vendor=Red Hat, Inc., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.tags=rhceph ceph, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:53:31 np0005546420.localdomain systemd[1]: libpod-conmon-10863766ffe1986ee28f62f56acb1655e60fbd8c1407608393dc4da85d469acc.scope: Deactivated successfully.
Dec 05 09:53:31 np0005546420.localdomain podman[284468]: 2025-12-05 09:53:31.302109536 +0000 UTC m=+0.068693571 container create 41d182ba3c011b83f1fd0a47e14860d81a096c1a8466244f8b10be4310522597 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wiles, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, name=rhceph, distribution-scope=public, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:53:31 np0005546420.localdomain systemd[1]: Started libpod-conmon-41d182ba3c011b83f1fd0a47e14860d81a096c1a8466244f8b10be4310522597.scope.
Dec 05 09:53:31 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:53:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67737b055ac80dd83b2237ce3bfe93dacb7313ece472eb2e00083b3463bdc83e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67737b055ac80dd83b2237ce3bfe93dacb7313ece472eb2e00083b3463bdc83e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67737b055ac80dd83b2237ce3bfe93dacb7313ece472eb2e00083b3463bdc83e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67737b055ac80dd83b2237ce3bfe93dacb7313ece472eb2e00083b3463bdc83e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:53:31 np0005546420.localdomain podman[284468]: 2025-12-05 09:53:31.367569198 +0000 UTC m=+0.134153293 container init 41d182ba3c011b83f1fd0a47e14860d81a096c1a8466244f8b10be4310522597 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wiles, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, ceph=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, name=rhceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vendor=Red Hat, Inc.)
Dec 05 09:53:31 np0005546420.localdomain podman[284468]: 2025-12-05 09:53:31.379177085 +0000 UTC m=+0.145761140 container start 41d182ba3c011b83f1fd0a47e14860d81a096c1a8466244f8b10be4310522597 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wiles, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.component=rhceph-container)
Dec 05 09:53:31 np0005546420.localdomain podman[284468]: 2025-12-05 09:53:31.281225845 +0000 UTC m=+0.047809860 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:53:31 np0005546420.localdomain podman[284468]: 2025-12-05 09:53:31.379513115 +0000 UTC m=+0.146097220 container attach 41d182ba3c011b83f1fd0a47e14860d81a096c1a8466244f8b10be4310522597 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wiles, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:53:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3309ab1859ec19d834463606c9531700ebdc736ba93593f631bd1aeb6b34d58e-merged.mount: Deactivated successfully.
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]: [
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:     {
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:         "available": false,
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:         "ceph_device": false,
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:         "lsm_data": {},
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:         "lvs": [],
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:         "path": "/dev/sr0",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:         "rejected_reasons": [
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "Insufficient space (<5GB)",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "Has a FileSystem"
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:         ],
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:         "sys_api": {
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "actuators": null,
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "device_nodes": "sr0",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "human_readable_size": "482.00 KB",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "id_bus": "ata",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "model": "QEMU DVD-ROM",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "nr_requests": "2",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "partitions": {},
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "path": "/dev/sr0",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "removable": "1",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "rev": "2.5+",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "ro": "0",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "rotational": "1",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "sas_address": "",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "sas_device_handle": "",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "scheduler_mode": "mq-deadline",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "sectors": 0,
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "sectorsize": "2048",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "size": 493568.0,
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "support_discard": "0",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "type": "disk",
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:             "vendor": "QEMU"
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:         }
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]:     }
Dec 05 09:53:32 np0005546420.localdomain goofy_wiles[284484]: ]
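The JSON array above is the ceph-volume inventory result: the host's only candidate device, /dev/sr0 (the QEMU DVD-ROM), is rejected for OSD use. A minimal sketch of filtering such a report, with the array shortened to just the fields used here:

    import json

    # Shortened stand-in for the inventory array printed by the container above.
    raw_json = """
    [{"available": false, "path": "/dev/sr0",
      "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]}]
    """
    devices = json.loads(raw_json)
    usable = [d["path"] for d in devices if d["available"]]
    rejected = {d["path"]: d["rejected_reasons"] for d in devices if not d["available"]}
    print("usable:", usable)      # -> [] on this host
    print("rejected:", rejected)  # -> {'/dev/sr0': ['Insufficient space (<5GB)', 'Has a FileSystem']}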
Dec 05 09:53:32 np0005546420.localdomain systemd[1]: libpod-41d182ba3c011b83f1fd0a47e14860d81a096c1a8466244f8b10be4310522597.scope: Deactivated successfully.
Dec 05 09:53:32 np0005546420.localdomain systemd[1]: libpod-41d182ba3c011b83f1fd0a47e14860d81a096c1a8466244f8b10be4310522597.scope: Consumed 1.002s CPU time.
Dec 05 09:53:32 np0005546420.localdomain podman[284468]: 2025-12-05 09:53:32.361777275 +0000 UTC m=+1.128361320 container died 41d182ba3c011b83f1fd0a47e14860d81a096c1a8466244f8b10be4310522597 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wiles, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, release=1763362218)
Dec 05 09:53:32 np0005546420.localdomain systemd[1]: tmp-crun.RlMY6h.mount: Deactivated successfully.
Dec 05 09:53:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-67737b055ac80dd83b2237ce3bfe93dacb7313ece472eb2e00083b3463bdc83e-merged.mount: Deactivated successfully.
Dec 05 09:53:32 np0005546420.localdomain podman[286269]: 2025-12-05 09:53:32.439000478 +0000 UTC m=+0.065974567 container remove 41d182ba3c011b83f1fd0a47e14860d81a096c1a8466244f8b10be4310522597 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wiles, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, ceph=True, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, release=1763362218, distribution-scope=public, GIT_CLEAN=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, architecture=x86_64)
Dec 05 09:53:32 np0005546420.localdomain systemd[1]: libpod-conmon-41d182ba3c011b83f1fd0a47e14860d81a096c1a8466244f8b10be4310522597.scope: Deactivated successfully.
Dec 05 09:53:32 np0005546420.localdomain sudo[284352]: pam_unix(sudo:session): session closed for user root
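The sudo session that just closed bracketed the whole ceph-volume run: cephadm (the content-addressed copy under /var/lib/ceph/<fsid>/) pulled registry.redhat.io/rhceph/rhceph-7-rhel9:latest and ran the inventory in a throwaway container (create, init, start, attach, died, remove, all within about a second). A sketch that replays the same invocation with the arguments copied from the sudo COMMAND line above; it must run as root and should print the JSON shown earlier:

    import subprocess

    FSID = "79feddb1-4bfc-557f-83b9-0d57c9f66c1b"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")

    # Same arguments as the logged sudo COMMAND; requires root and the rhceph image.
    out = subprocess.run(
        ["python3", CEPHADM,
         "--image", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
         "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--",
         "inventory", "--format=json-pretty", "--filter-for-batch"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)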
Dec 05 09:53:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:53:33 np0005546420.localdomain podman[286283]: 2025-12-05 09:53:33.506539859 +0000 UTC m=+0.080598738 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:53:33 np0005546420.localdomain podman[286283]: 2025-12-05 09:53:33.546636001 +0000 UTC m=+0.120694870 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:53:33 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
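Besides the container healthchecks, each exporter here publishes Prometheus metrics on a host port listed in its config_data ('ports' entries: node_exporter 9100, podman_exporter 9882, openstack_network_exporter 9105, all with net=host). A quick scrape of all three with the stdlib, printing the first line of each response:

    from urllib.request import urlopen

    # Host ports from the 'ports' entries in the config_data blocks above.
    EXPORTERS = {"node_exporter": 9100, "podman_exporter": 9882,
                 "openstack_network_exporter": 9105}
    for name, port in EXPORTERS.items():
        with urlopen(f"http://localhost:{port}/metrics", timeout=5) as resp:
            first_line = resp.read(200).decode().splitlines()[0]
        print(name, "->", first_line)  # typically a "# HELP ..." comment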
Dec 05 09:53:34 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr Updating MDS map to version 13 from mon.2
Dec 05 09:53:34 np0005546420.localdomain ceph-mds[283770]: mds.0.13 handle_mds_map i am now mds.0.13
Dec 05 09:53:34 np0005546420.localdomain ceph-mds[283770]: mds.0.13 handle_mds_map state change up:standby --> up:replay
Dec 05 09:53:34 np0005546420.localdomain ceph-mds[283770]: mds.0.13 replay_start
Dec 05 09:53:34 np0005546420.localdomain ceph-mds[283770]: mds.0.13  waiting for osdmap 84 (which blocklists prior instance)
Dec 05 09:53:34 np0005546420.localdomain ceph-mds[283770]: mds.0.cache creating system inode with ino:0x100
Dec 05 09:53:34 np0005546420.localdomain ceph-mds[283770]: mds.0.cache creating system inode with ino:0x1
Dec 05 09:53:34 np0005546420.localdomain ceph-mds[283770]: mds.0.13 Finished replaying journal
Dec 05 09:53:34 np0005546420.localdomain ceph-mds[283770]: mds.0.13 making mds journal writeable
Dec 05 09:53:35 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr Updating MDS map to version 14 from mon.2
Dec 05 09:53:35 np0005546420.localdomain ceph-mds[283770]: mds.0.13 handle_mds_map i am now mds.0.13
Dec 05 09:53:35 np0005546420.localdomain ceph-mds[283770]: mds.0.13 handle_mds_map state change up:replay --> up:reconnect
Dec 05 09:53:35 np0005546420.localdomain ceph-mds[283770]: mds.0.13 reconnect_start
Dec 05 09:53:35 np0005546420.localdomain ceph-mds[283770]: mds.0.13 reopen_log
Dec 05 09:53:35 np0005546420.localdomain ceph-mds[283770]: mds.0.13 reconnect_done
Dec 05 09:53:36 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr Updating MDS map to version 15 from mon.2
Dec 05 09:53:36 np0005546420.localdomain ceph-mds[283770]: mds.0.13 handle_mds_map i am now mds.0.13
Dec 05 09:53:36 np0005546420.localdomain ceph-mds[283770]: mds.0.13 handle_mds_map state change up:reconnect --> up:rejoin
Dec 05 09:53:36 np0005546420.localdomain ceph-mds[283770]: mds.0.13 rejoin_start
Dec 05 09:53:36 np0005546420.localdomain ceph-mds[283770]: mds.0.13 rejoin_joint_start
Dec 05 09:53:36 np0005546420.localdomain ceph-mds[283770]: mds.0.13 rejoin_done
Dec 05 09:53:37 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:53:37 np0005546420.localdomain systemd[1]: tmp-crun.4c2veA.mount: Deactivated successfully.
Dec 05 09:53:37 np0005546420.localdomain podman[286315]: 2025-12-05 09:53:37.523545255 +0000 UTC m=+0.101785048 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3)
Dec 05 09:53:37 np0005546420.localdomain podman[286315]: 2025-12-05 09:53:37.541474556 +0000 UTC m=+0.119714359 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 05 09:53:37 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:53:37 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr Updating MDS map to version 16 from mon.2
Dec 05 09:53:37 np0005546420.localdomain ceph-mds[283770]: mds.0.13 handle_mds_map i am now mds.0.13
Dec 05 09:53:37 np0005546420.localdomain ceph-mds[283770]: mds.0.13 handle_mds_map state change up:rejoin --> up:active
Dec 05 09:53:37 np0005546420.localdomain ceph-mds[283770]: mds.0.13 recovery_done -- successful recovery!
Dec 05 09:53:37 np0005546420.localdomain ceph-mds[283770]: mds.0.13 active_start
Dec 05 09:53:37 np0005546420.localdomain ceph-mds[283770]: mds.0.13 cluster recovered.
Dec 05 09:53:38 np0005546420.localdomain systemd[1]: Stopping User Manager for UID 1003...
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Activating special unit Exit the Session...
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Stopped target Main User Target.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Stopped target Basic System.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Stopped target Paths.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Stopped target Sockets.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Stopped target Timers.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Closed D-Bus User Message Bus Socket.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Stopped Create User's Volatile Files and Directories.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Removed slice User Application Slice.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Reached target Shutdown.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Finished Exit the Session.
Dec 05 09:53:38 np0005546420.localdomain systemd[282957]: Reached target Exit the Session.
Dec 05 09:53:38 np0005546420.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Dec 05 09:53:38 np0005546420.localdomain systemd[1]: Stopped User Manager for UID 1003.
Dec 05 09:53:38 np0005546420.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 05 09:53:38 np0005546420.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 05 09:53:38 np0005546420.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 05 09:53:38 np0005546420.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 05 09:53:38 np0005546420.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Dec 05 09:53:38 np0005546420.localdomain systemd[1]: user-1003.slice: Consumed 1.673s CPU time.
Dec 05 09:53:42 np0005546420.localdomain ceph-mds[283770]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 05 09:53:42 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mds-mds-np0005546420-eqhasr[283766]: 2025-12-05T09:53:42.245+0000 7f513c97c640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 05 09:53:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:53:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:53:42 np0005546420.localdomain podman[286342]: 2025-12-05 09:53:42.489449318 +0000 UTC m=+0.064556914 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:53:42 np0005546420.localdomain podman[286341]: 2025-12-05 09:53:42.551878096 +0000 UTC m=+0.129547511 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, version=9.6, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 09:53:42 np0005546420.localdomain podman[286342]: 2025-12-05 09:53:42.578103822 +0000 UTC m=+0.153211488 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:53:42 np0005546420.localdomain podman[286341]: 2025-12-05 09:53:42.587758749 +0000 UTC m=+0.165428194 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, config_id=edpm, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, name=ubi9-minimal)
Dec 05 09:53:42 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:53:42 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:53:43 np0005546420.localdomain sudo[286382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:53:43 np0005546420.localdomain sudo[286382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:53:43 np0005546420.localdomain sudo[286382]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:45 np0005546420.localdomain sudo[286400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:53:45 np0005546420.localdomain sudo[286400]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:53:45 np0005546420.localdomain sudo[286400]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:46 np0005546420.localdomain sudo[286418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:53:46 np0005546420.localdomain sudo[286418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:53:46 np0005546420.localdomain sudo[286418]: pam_unix(sudo:session): session closed for user root
Dec 05 09:53:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:53:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:53:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:53:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148690 "" "Go-http-client/1.1"
Dec 05 09:53:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:53:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17237 "" "Go-http-client/1.1"
Dec 05 09:53:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:53:47 np0005546420.localdomain podman[286436]: 2025-12-05 09:53:47.50023308 +0000 UTC m=+0.081113004 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:53:47 np0005546420.localdomain podman[286436]: 2025-12-05 09:53:47.563673749 +0000 UTC m=+0.144553673 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 09:53:47 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:53:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:53:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:53:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:53:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:53:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:53:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:53:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:53:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:53:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:53:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:53:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:53:55 np0005546420.localdomain podman[286460]: 2025-12-05 09:53:55.498028551 +0000 UTC m=+0.075266134 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 05 09:53:55 np0005546420.localdomain podman[286460]: 2025-12-05 09:53:55.511491745 +0000 UTC m=+0.088729338 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 09:53:55 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:53:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:59.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:59.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 09:53:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:59.891 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 09:53:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:59.892 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:53:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:59.892 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 09:53:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:53:59.908 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:54:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:54:01 np0005546420.localdomain podman[286479]: 2025-12-05 09:54:01.507887371 +0000 UTC m=+0.083791296 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec 05 09:54:01 np0005546420.localdomain podman[286479]: 2025-12-05 09:54:01.516528616 +0000 UTC m=+0.092432551 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 09:54:01 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:54:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:01.924 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:54:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:01.925 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:54:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:01.925 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:54:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:01.941 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:54:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:03.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:54:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:03.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:54:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:03.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:54:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:03.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:54:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:03.894 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:54:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:03.895 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:54:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:03.895 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:54:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:03.896 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:54:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:03.896 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:54:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:54:04.114 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:54:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:54:04.114 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:54:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:54:04.115 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.368 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:54:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:54:04 np0005546420.localdomain podman[286517]: 2025-12-05 09:54:04.487252624 +0000 UTC m=+0.063108779 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:54:04 np0005546420.localdomain podman[286517]: 2025-12-05 09:54:04.52421926 +0000 UTC m=+0.100075405 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:54:04 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.547 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.549 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12811MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.549 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.549 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.671 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.672 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.733 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.811 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.811 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.827 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.849 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 09:54:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:04.878 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:54:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:05.350 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:54:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:05.356 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:54:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:05.372 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:54:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:05.375 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:54:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:05.375 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:54:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:06.375 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:54:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:06.376 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:54:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:06.377 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:54:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:06.377 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:54:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:54:06.377 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:54:07 np0005546420.localdomain sudo[286562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:07 np0005546420.localdomain sudo[286562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:07 np0005546420.localdomain sudo[286562]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:08 np0005546420.localdomain sudo[286580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:08 np0005546420.localdomain sudo[286580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:08 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:54:08 np0005546420.localdomain sudo[286580]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:08 np0005546420.localdomain podman[286598]: 2025-12-05 09:54:08.509628096 +0000 UTC m=+0.084327102 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 09:54:08 np0005546420.localdomain podman[286598]: 2025-12-05 09:54:08.551617476 +0000 UTC m=+0.126316512 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 09:54:08 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:54:09 np0005546420.localdomain sudo[286617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:09 np0005546420.localdomain sudo[286617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:09 np0005546420.localdomain sudo[286617]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.949 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:54:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:54:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
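
The DEBUG burst from ceilometer_agent_compute is the polling manager's per-cycle loop: with no instances on this compute node, every pollster's discovery comes back empty, so each meter is skipped at manager.py:193 rather than published. A hedged sketch of that poll-and-skip pattern (class and method names are illustrative, not ceilometer's actual API):

```python
# Minimal sketch of the poll-and-skip pattern behind these DEBUG lines.
# Pollster/discover/poll_and_notify are illustrative names.
import logging

LOG = logging.getLogger("polling.manager")

class Pollster:
    def __init__(self, name, discover):
        self.name = name
        self.discover = discover  # returns a list of resources

    def get_samples(self, resources):
        return [{"meter": self.name, "resource": r} for r in resources]

def poll_and_notify(pollsters):
    for p in pollsters:
        resources = p.discover()
        if not resources:
            # ceilometer logs exactly this situation at DEBUG level
            LOG.debug("Skip pollster %s, no resources found this cycle",
                      p.name)
            continue
        for sample in p.get_samples(resources):
            LOG.debug("publish %s", sample)

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    poll_and_notify([Pollster("memory.usage", lambda: []),
                     Pollster("cpu", lambda: [])])
```
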
Dec 05 09:54:13 np0005546420.localdomain sudo[286635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:54:13 np0005546420.localdomain sudo[286635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:54:13 np0005546420.localdomain sudo[286635]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:54:13 np0005546420.localdomain podman[286653]: 2025-12-05 09:54:13.417118954 +0000 UTC m=+0.056388553 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_id=edpm, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350)
Dec 05 09:54:13 np0005546420.localdomain sudo[286660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:13 np0005546420.localdomain sudo[286660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:13 np0005546420.localdomain podman[286653]: 2025-12-05 09:54:13.45602081 +0000 UTC m=+0.095290399 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., version=9.6, name=ubi9-minimal)
Dec 05 09:54:13 np0005546420.localdomain systemd[1]: tmp-crun.hMSjZO.mount: Deactivated successfully.
Dec 05 09:54:13 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:54:13 np0005546420.localdomain podman[286654]: 2025-12-05 09:54:13.472859437 +0000 UTC m=+0.104353728 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:54:13 np0005546420.localdomain podman[286654]: 2025-12-05 09:54:13.477247261 +0000 UTC m=+0.108741602 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:54:13 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
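
The health_status/exec_died pairs from podman[286653] and podman[286654], each followed by systemd deactivating the transient `<container-id>.service`, are the periodic healthcheck cycle: a systemd timer starts a one-shot unit that runs `podman healthcheck run <id>`, podman records the result and emits the two events, and the unit exits. A sketch of reading a container's health the same way (the container name is taken from the log; the rest is an assumption):

```python
# Run the configured health probe, then read back the recorded status.
# Container name "podman_exporter" comes from the log above.
import json
import subprocess

def health(container):
    # `podman healthcheck run` executes the container's test; exit 0 = healthy
    probe = subprocess.run(["podman", "healthcheck", "run", container])
    insp = subprocess.run(["podman", "inspect", container],
                          capture_output=True, text=True, check=True)
    state = json.loads(insp.stdout)[0]["State"]
    return probe.returncode, state.get("Health", {}).get("Status")

if __name__ == "__main__":
    print(health("podman_exporter"))
```
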
Dec 05 09:54:14 np0005546420.localdomain podman[286757]: 
Dec 05 09:54:14 np0005546420.localdomain podman[286757]: 2025-12-05 09:54:14.051870207 +0000 UTC m=+0.085525618 container create 9c52de370d91b49f702480ab2d895259c3b04d36ad664f196c3619ee5806a634 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_moore, build-date=2025-11-26T19:44:28Z, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, release=1763362218, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True)
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: Started libpod-conmon-9c52de370d91b49f702480ab2d895259c3b04d36ad664f196c3619ee5806a634.scope.
Dec 05 09:54:14 np0005546420.localdomain podman[286757]: 2025-12-05 09:54:14.012828778 +0000 UTC m=+0.046484229 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:54:14 np0005546420.localdomain podman[286757]: 2025-12-05 09:54:14.1280949 +0000 UTC m=+0.161750321 container init 9c52de370d91b49f702480ab2d895259c3b04d36ad664f196c3619ee5806a634 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_moore, distribution-scope=public, version=7, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:54:14 np0005546420.localdomain podman[286757]: 2025-12-05 09:54:14.140786859 +0000 UTC m=+0.174442280 container start 9c52de370d91b49f702480ab2d895259c3b04d36ad664f196c3619ee5806a634 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_moore, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-type=git, name=rhceph, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:54:14 np0005546420.localdomain podman[286757]: 2025-12-05 09:54:14.141627605 +0000 UTC m=+0.175283036 container attach 9c52de370d91b49f702480ab2d895259c3b04d36ad664f196c3619ee5806a634 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_moore, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 09:54:14 np0005546420.localdomain suspicious_moore[286772]: 167 167
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: libpod-9c52de370d91b49f702480ab2d895259c3b04d36ad664f196c3619ee5806a634.scope: Deactivated successfully.
Dec 05 09:54:14 np0005546420.localdomain podman[286757]: 2025-12-05 09:54:14.146024561 +0000 UTC m=+0.179680032 container died 9c52de370d91b49f702480ab2d895259c3b04d36ad664f196c3619ee5806a634 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_moore, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, release=1763362218, name=rhceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True)
Dec 05 09:54:14 np0005546420.localdomain podman[286777]: 2025-12-05 09:54:14.24236058 +0000 UTC m=+0.088271123 container remove 9c52de370d91b49f702480ab2d895259c3b04d36ad664f196c3619ee5806a634 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_moore, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.buildah.version=1.41.4, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=1763362218, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: libpod-conmon-9c52de370d91b49f702480ab2d895259c3b04d36ad664f196c3619ee5806a634.scope: Deactivated successfully.
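
The create/init/start/attach/died/remove sequence for the auto-named `suspicious_moore` container is cephadm probing the rhceph image: a throwaway container runs a single command, prints `167 167` (the ceph user's uid and gid), and is removed within a fraction of a second. A hedged reconstruction — only the image reference and the `167 167` output come from the log; the stat(1) invocation is an assumption about what was run:

```python
# Hedged reconstruction of the throwaway-container probe. Whether
# cephadm used stat(1) on /var/lib/ceph here is an assumption for
# illustration; the image and the "167 167" output are from the log.
import subprocess

IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"

def ceph_uid_gid():
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],  # expected to print "167 167"
        capture_output=True, text=True, check=True)
    uid, gid = (int(x) for x in out.stdout.split())
    return uid, gid

if __name__ == "__main__":
    print(ceph_uid_gid())
```
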
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:54:14 np0005546420.localdomain systemd-rc-local-generator[286815]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:54:14 np0005546420.localdomain systemd-sysv-generator[286820]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b91062c48f94c00938f5502e3287f2c36e848765606dc7e7c908fb611fa99b45-merged.mount: Deactivated successfully.
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:54:14 np0005546420.localdomain systemd-rc-local-generator[286863]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:54:14 np0005546420.localdomain systemd-sysv-generator[286866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:14 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
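
Both `Reloading.` bursts are daemon-reloads triggered while cephadm rewrites unit files, and the warnings they surface are pre-existing: RHEL 9's systemd (v252) predates `Type=notify-reload`, which was introduced in systemd 253, so the libvirt units fall back to the default service type; and insights-client-boot.service still uses the deprecated cgroup-v1 directive `MemoryLimit=` instead of `MemoryMax=`. Neither warning prevents the units from loading. A small standalone scanner for both patterns (a helper written for this note, not part of any tool in the log):

```python
# Flags the two unit-file issues systemd warns about during these
# reloads: Type=notify-reload on pre-253 systemd, and MemoryLimit=.
import pathlib
import re

UNIT_DIR = pathlib.Path("/usr/lib/systemd/system")

def scan():
    for unit in sorted(UNIT_DIR.glob("*.service")):
        text = unit.read_text(errors="replace")
        if re.search(r"^Type=notify-reload", text, re.M):
            print(f"{unit.name}: Type=notify-reload needs systemd >= 253")
        if re.search(r"^MemoryLimit=", text, re.M):
            print(f"{unit.name}: replace MemoryLimit= with MemoryMax=")

if __name__ == "__main__":
    scan()
```
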
Dec 05 09:54:15 np0005546420.localdomain systemd[1]: Starting Ceph mgr.np0005546420.aoeylc for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b...
Dec 05 09:54:15 np0005546420.localdomain podman[286922]: 
Dec 05 09:54:15 np0005546420.localdomain podman[286922]: 2025-12-05 09:54:15.376763836 +0000 UTC m=+0.077087949 container create b957996e3d426af4ad0512931666634c343e13391d7b5d73b722553182cea751 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, ceph=True)
Dec 05 09:54:15 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d46d1ec79ca547c4429ce70004fc8dab3cdbb7440537ebfc53fd377472651ad9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:15 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d46d1ec79ca547c4429ce70004fc8dab3cdbb7440537ebfc53fd377472651ad9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:15 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d46d1ec79ca547c4429ce70004fc8dab3cdbb7440537ebfc53fd377472651ad9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:15 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d46d1ec79ca547c4429ce70004fc8dab3cdbb7440537ebfc53fd377472651ad9/merged/var/lib/ceph/mgr/ceph-np0005546420.aoeylc supports timestamps until 2038 (0x7fffffff)
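
The four kernel notices mean the XFS volumes bind-mounted into the mgr container were formatted without the `bigtime` feature, so their inode timestamps saturate at 0x7fffffff (January 2038). The message is informational; the limit can only be lifted by reformatting with `mkfs.xfs -m bigtime=1`. A quick check (assumes xfsprogs is installed):

```python
# Checks whether a mounted XFS filesystem carries the bigtime feature
# the kernel is warning about. Assumes xfsprogs provides xfs_info.
import subprocess

def has_bigtime(mountpoint="/"):
    out = subprocess.run(["xfs_info", mountpoint],
                         capture_output=True, text=True, check=True)
    return "bigtime=1" in out.stdout  # absent => timestamps cap at 2038

if __name__ == "__main__":
    print(has_bigtime())
```
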
Dec 05 09:54:15 np0005546420.localdomain podman[286922]: 2025-12-05 09:54:15.435614564 +0000 UTC m=+0.135938657 container init b957996e3d426af4ad0512931666634c343e13391d7b5d73b722553182cea751 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc, RELEASE=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, vendor=Red Hat, Inc., release=1763362218, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, CEPH_POINT_RELEASE=)
Dec 05 09:54:15 np0005546420.localdomain podman[286922]: 2025-12-05 09:54:15.345576328 +0000 UTC m=+0.045900461 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:54:15 np0005546420.localdomain podman[286922]: 2025-12-05 09:54:15.44685326 +0000 UTC m=+0.147177353 container start b957996e3d426af4ad0512931666634c343e13391d7b5d73b722553182cea751 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc, description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, version=7, vcs-type=git, io.buildah.version=1.41.4, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_CLEAN=True)
Dec 05 09:54:15 np0005546420.localdomain bash[286922]: b957996e3d426af4ad0512931666634c343e13391d7b5d73b722553182cea751
Dec 05 09:54:15 np0005546420.localdomain systemd[1]: Started Ceph mgr.np0005546420.aoeylc for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 05 09:54:15 np0005546420.localdomain sudo[286660]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:15 np0005546420.localdomain ceph-mgr[286940]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:54:15 np0005546420.localdomain ceph-mgr[286940]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Dec 05 09:54:15 np0005546420.localdomain ceph-mgr[286940]: pidfile_write: ignore empty --pid-file
Dec 05 09:54:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'alerts'
Dec 05 09:54:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:54:15 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:15.627+0000 7f394c871140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 09:54:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'balancer'
Dec 05 09:54:15 np0005546420.localdomain systemd[1]: tmp-crun.9eLpYi.mount: Deactivated successfully.
Dec 05 09:54:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:54:15 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:15.693+0000 7f394c871140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 09:54:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'cephadm'
Dec 05 09:54:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'crash'
Dec 05 09:54:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:54:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:16.318+0000 7f394c871140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 09:54:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'dashboard'
Dec 05 09:54:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'devicehealth'
Dec 05 09:54:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:54:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:16.854+0000 7f394c871140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 09:54:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'diskprediction_local'
Dec 05 09:54:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 05 09:54:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 05 09:54:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]:   from numpy import show_config as show_numpy_config
Dec 05 09:54:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:54:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:16.992+0000 7f394c871140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 09:54:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'influx'
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:54:17 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:17.050+0000 7f394c871140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'insights'
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'iostat'
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:54:17 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:17.161+0000 7f394c871140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'k8sevents'
Dec 05 09:54:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:54:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:54:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:54:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150826 "" "Go-http-client/1.1"
Dec 05 09:54:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:54:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17713 "" "Go-http-client/1.1"
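
podman[240363] is the `podman system service` REST endpoint on /run/podman/podman.sock that podman_exporter scrapes (its CONTAINER_HOST setting appears in the config_data above): the first GET lists all containers, the second fetches a one-shot stats snapshot. The same libpod call can be reproduced over the unix socket with only the standard library (socket path and API version copied from the log):

```python
# Reproduce the containers/json GET above over the podman unix socket.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")  # host is ignored for unix sockets
        self.unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

def list_containers():
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    return json.load(conn.getresponse())

if __name__ == "__main__":
    print(len(list_containers()), "containers")
```
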
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'localpool'
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'mds_autoscaler'
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'mirroring'
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'nfs'
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:54:17 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:17.915+0000 7f394c871140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 09:54:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'orchestrator'
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:18.068+0000 7f394c871140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'osd_perf_query'
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:18.133+0000 7f394c871140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'osd_support'
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:18.189+0000 7f394c871140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'pg_autoscaler'
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:18.256+0000 7f394c871140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'progress'
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:18.314+0000 7f394c871140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'prometheus'
Dec 05 09:54:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:54:18 np0005546420.localdomain systemd[1]: tmp-crun.PIXs5G.mount: Deactivated successfully.
Dec 05 09:54:18 np0005546420.localdomain podman[286970]: 2025-12-05 09:54:18.515463877 +0000 UTC m=+0.094310110 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 09:54:18 np0005546420.localdomain podman[286970]: 2025-12-05 09:54:18.559409036 +0000 UTC m=+0.138255349 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 09:54:18 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'rbd_support'
Dec 05 09:54:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:18.614+0000 7f394c871140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:18.699+0000 7f394c871140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'restful'
Dec 05 09:54:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:54:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:54:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:54:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:54:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:54:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:54:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:54:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:54:18 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 09:54:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:54:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:54:18 np0005546420.localdomain openstack_network_exporter[242579]: 
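
The openstack_network_exporter errors are expected on a compute-only node: the exporter probes for ovsdb-server and ovn-northd control sockets, but this host runs only ovs-vswitchd and ovn-controller, and with no userspace (DPDK) datapath the dpif-netdev PMD queries also return "please specify an existing datapath". A helper to show which control sockets actually exist (directories inferred from the ovn_controller volume mounts shown earlier in this log):

```python
# On a compute node only ovs-vswitchd and ovn-controller sockets
# should appear here, never ovn-northd's, which is why the probes
# above fail. Paths are inferred from the volume mounts in this log.
import glob

def control_sockets():
    patterns = ["/var/run/openvswitch/*.ctl",
                "/var/lib/openvswitch/ovn/*.ctl"]
    return [p for pat in patterns for p in glob.glob(pat)]

if __name__ == "__main__":
    for sock in control_sockets():
        print(sock)
```
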
Dec 05 09:54:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'rgw'
Dec 05 09:54:18 np0005546420.localdomain sudo[286995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:18 np0005546420.localdomain sudo[286995]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:18 np0005546420.localdomain sudo[286995]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:54:19 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:19.089+0000 7f394c871140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 09:54:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'rook'
Dec 05 09:54:19 np0005546420.localdomain sudo[287013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:54:19 np0005546420.localdomain sudo[287013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:19 np0005546420.localdomain sudo[287013]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:19 np0005546420.localdomain sudo[287031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:54:19 np0005546420.localdomain sudo[287031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:54:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'selftest'
Dec 05 09:54:19 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:19.758+0000 7f394c871140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 09:54:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:54:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'snap_schedule'
Dec 05 09:54:19 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:19.835+0000 7f394c871140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 09:54:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'stats'
Dec 05 09:54:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'status'
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:20.058+0000 7f394c871140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'telegraf'
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:20.145+0000 7f394c871140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'telemetry'
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:20.294+0000 7f394c871140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'test_orchestrator'
Dec 05 09:54:20 np0005546420.localdomain systemd[1]: tmp-crun.p6HA7G.mount: Deactivated successfully.
Dec 05 09:54:20 np0005546420.localdomain podman[287119]: 2025-12-05 09:54:20.464406769 +0000 UTC m=+0.124812575 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1763362218, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:20.470+0000 7f394c871140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'volumes'
Dec 05 09:54:20 np0005546420.localdomain podman[287119]: 2025-12-05 09:54:20.641175381 +0000 UTC m=+0.301581197 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, io.buildah.version=1.41.4, GIT_CLEAN=True, io.openshift.expose-services=, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, distribution-scope=public)
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:20.717+0000 7f394c871140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'zabbix'
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 09:54:20 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:54:20.779+0000 7f394c871140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
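
Every "Module X has missing NOTIFY_TYPES member" line (logged twice per module: once by ceph-mgr itself and once via the container's stderr) means the mgr module does not declare which cluster notifications it consumes. Such modules still load; they are simply never passed targeted notify() calls. A standalone sketch of the attribute in question — real modules subclass ceph's mgr_module.MgrModule, stubbed here so the snippet runs anywhere:

```python
# Stand-ins for ceph's mgr_module.MgrModule and NotifyType, so this
# sketch runs outside a ceph-mgr process; inside the mgr the real
# classes are imported from mgr_module instead.
from enum import Enum

class NotifyType(Enum):
    mon_map = "mon_map"
    pg_summary = "pg_summary"

class MgrModule:
    NOTIFY_TYPES: list = []

class Example(MgrModule):
    # modules lacking this attribute trigger the
    # "Module ... has missing NOTIFY_TYPES member" line at load time
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.pg_summary]

    def notify(self, notify_type, notify_id):
        print("notified:", notify_type)

if __name__ == "__main__":
    print(Example.NOTIFY_TYPES)
```
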
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4fb1e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0
Dec 05 09:54:20 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.103:6800/1502489571
Dec 05 09:54:21 np0005546420.localdomain sudo[287031]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:21 np0005546420.localdomain sudo[287219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:21 np0005546420.localdomain sudo[287219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:21 np0005546420.localdomain sudo[287219]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:21 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.103:6800/1502489571
Dec 05 09:54:22 np0005546420.localdomain sudo[287237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:22 np0005546420.localdomain sudo[287237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:22 np0005546420.localdomain sudo[287237]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:22 np0005546420.localdomain sudo[287255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:22 np0005546420.localdomain sudo[287255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:22 np0005546420.localdomain sudo[287255]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:54:26 np0005546420.localdomain sudo[287280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:26 np0005546420.localdomain systemd[1]: tmp-crun.7zqrLJ.mount: Deactivated successfully.
Dec 05 09:54:26 np0005546420.localdomain sudo[287280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:26 np0005546420.localdomain sudo[287280]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:26 np0005546420.localdomain podman[287273]: 2025-12-05 09:54:26.51005623 +0000 UTC m=+0.091934516 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125)
Dec 05 09:54:26 np0005546420.localdomain podman[287273]: 2025-12-05 09:54:26.54652537 +0000 UTC m=+0.128403696 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 09:54:26 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:54:26 np0005546420.localdomain sudo[287310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:54:26 np0005546420.localdomain sudo[287310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:26 np0005546420.localdomain sudo[287310]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:26 np0005546420.localdomain sudo[287328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:54:26 np0005546420.localdomain sudo[287328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:26 np0005546420.localdomain sudo[287328]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:26 np0005546420.localdomain sudo[287346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:54:26 np0005546420.localdomain sudo[287346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:26 np0005546420.localdomain sudo[287346]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:26 np0005546420.localdomain sudo[287364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:26 np0005546420.localdomain sudo[287364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:26 np0005546420.localdomain sudo[287364]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:26 np0005546420.localdomain sudo[287382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:54:26 np0005546420.localdomain sudo[287382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287382]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:54:27 np0005546420.localdomain sudo[287416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287416]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:54:27 np0005546420.localdomain sudo[287434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287434]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:54:27 np0005546420.localdomain sudo[287452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287452]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:54:27 np0005546420.localdomain sudo[287470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287470]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:54:27 np0005546420.localdomain sudo[287488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287488]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:54:27 np0005546420.localdomain sudo[287506]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287506]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:27 np0005546420.localdomain sudo[287524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287524]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:54:27 np0005546420.localdomain sudo[287542]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287542]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:54:27 np0005546420.localdomain sudo[287576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287576]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:54:27 np0005546420.localdomain sudo[287594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287594]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:27 np0005546420.localdomain sudo[287612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287612]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:27 np0005546420.localdomain sudo[287630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:54:27 np0005546420.localdomain sudo[287630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:27 np0005546420.localdomain sudo[287630]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:54:28 np0005546420.localdomain sudo[287648]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287648]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:54:28 np0005546420.localdomain sudo[287666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287666]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:28 np0005546420.localdomain sudo[287684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287684]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:54:28 np0005546420.localdomain sudo[287702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287702]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:54:28 np0005546420.localdomain sudo[287736]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287736]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:54:28 np0005546420.localdomain sudo[287754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287754]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 09:54:28 np0005546420.localdomain sudo[287772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287772]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:54:28 np0005546420.localdomain sudo[287790]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287790]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:54:28 np0005546420.localdomain sudo[287808]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287808]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:54:28 np0005546420.localdomain sudo[287826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287826]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:28 np0005546420.localdomain sudo[287844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287844]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:54:28 np0005546420.localdomain sudo[287862]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287862]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:28 np0005546420.localdomain sudo[287896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:54:28 np0005546420.localdomain sudo[287896]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:28 np0005546420.localdomain sudo[287896]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:29 np0005546420.localdomain sudo[287914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:54:29 np0005546420.localdomain sudo[287914]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:29 np0005546420.localdomain sudo[287914]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:29 np0005546420.localdomain sudo[287932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:54:29 np0005546420.localdomain sudo[287932]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:29 np0005546420.localdomain sudo[287932]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:29 np0005546420.localdomain sudo[287950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:29 np0005546420.localdomain sudo[287950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:29 np0005546420.localdomain sudo[287950]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:54:32 np0005546420.localdomain podman[287968]: 2025-12-05 09:54:32.490074222 +0000 UTC m=+0.071408915 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 09:54:32 np0005546420.localdomain podman[287968]: 2025-12-05 09:54:32.520816317 +0000 UTC m=+0.102151050 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 09:54:32 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:54:35 np0005546420.localdomain sudo[287985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:54:35 np0005546420.localdomain sudo[287985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:54:35 np0005546420.localdomain sudo[287985]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:35 np0005546420.localdomain sudo[288009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:35 np0005546420.localdomain podman[288003]: 2025-12-05 09:54:35.339620557 +0000 UTC m=+0.070831457 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:54:35 np0005546420.localdomain sudo[288009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:35 np0005546420.localdomain podman[288003]: 2025-12-05 09:54:35.346818718 +0000 UTC m=+0.078029678 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:54:35 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:54:35 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4fb1e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0
Dec 05 09:54:35 np0005546420.localdomain podman[288087]: 
Dec 05 09:54:35 np0005546420.localdomain podman[288087]: 2025-12-05 09:54:35.953155568 +0000 UTC m=+0.063189762 container create b75c75217f82bbde201147e9e2b0ede4fb7115c7bf6cac43255fcfa7adf2c323 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hugle, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., RELEASE=main, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:54:35 np0005546420.localdomain systemd[1]: Started libpod-conmon-b75c75217f82bbde201147e9e2b0ede4fb7115c7bf6cac43255fcfa7adf2c323.scope.
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:54:36 np0005546420.localdomain podman[288087]: 2025-12-05 09:54:35.920376092 +0000 UTC m=+0.030410346 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:54:36 np0005546420.localdomain podman[288087]: 2025-12-05 09:54:36.028688389 +0000 UTC m=+0.138722623 container init b75c75217f82bbde201147e9e2b0ede4fb7115c7bf6cac43255fcfa7adf2c323 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hugle, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.buildah.version=1.41.4)
Dec 05 09:54:36 np0005546420.localdomain podman[288087]: 2025-12-05 09:54:36.039229793 +0000 UTC m=+0.149263977 container start b75c75217f82bbde201147e9e2b0ede4fb7115c7bf6cac43255fcfa7adf2c323 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hugle, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, name=rhceph, ceph=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=7, GIT_BRANCH=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=)
Dec 05 09:54:36 np0005546420.localdomain podman[288087]: 2025-12-05 09:54:36.039571114 +0000 UTC m=+0.149605298 container attach b75c75217f82bbde201147e9e2b0ede4fb7115c7bf6cac43255fcfa7adf2c323 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hugle, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container)
Dec 05 09:54:36 np0005546420.localdomain vigorous_hugle[288103]: 167 167
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: libpod-b75c75217f82bbde201147e9e2b0ede4fb7115c7bf6cac43255fcfa7adf2c323.scope: Deactivated successfully.
Dec 05 09:54:36 np0005546420.localdomain podman[288087]: 2025-12-05 09:54:36.044913758 +0000 UTC m=+0.154947972 container died b75c75217f82bbde201147e9e2b0ede4fb7115c7bf6cac43255fcfa7adf2c323 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hugle, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1763362218, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 09:54:36 np0005546420.localdomain podman[288108]: 2025-12-05 09:54:36.143037383 +0000 UTC m=+0.085538549 container remove b75c75217f82bbde201147e9e2b0ede4fb7115c7bf6cac43255fcfa7adf2c323 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_hugle, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main, version=7, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218)
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: libpod-conmon-b75c75217f82bbde201147e9e2b0ede4fb7115c7bf6cac43255fcfa7adf2c323.scope: Deactivated successfully.
Dec 05 09:54:36 np0005546420.localdomain podman[288126]: 
Dec 05 09:54:36 np0005546420.localdomain podman[288126]: 2025-12-05 09:54:36.255941691 +0000 UTC m=+0.079694639 container create 660b397bb74e313c3057574250489e57116ed18ec347598c5a654549016de333 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_jackson, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, architecture=x86_64, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, build-date=2025-11-26T19:44:28Z, vcs-type=git)
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: Started libpod-conmon-660b397bb74e313c3057574250489e57116ed18ec347598c5a654549016de333.scope.
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:54:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13112d400eed17437efdac23f915064b78060fb0b1246339b14bbf36532a7df/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13112d400eed17437efdac23f915064b78060fb0b1246339b14bbf36532a7df/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13112d400eed17437efdac23f915064b78060fb0b1246339b14bbf36532a7df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:36 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13112d400eed17437efdac23f915064b78060fb0b1246339b14bbf36532a7df/merged/var/lib/ceph/mon/ceph-np0005546420 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:36 np0005546420.localdomain podman[288126]: 2025-12-05 09:54:36.224148965 +0000 UTC m=+0.047901883 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:54:36 np0005546420.localdomain podman[288126]: 2025-12-05 09:54:36.323399065 +0000 UTC m=+0.147151963 container init 660b397bb74e313c3057574250489e57116ed18ec347598c5a654549016de333 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_jackson, architecture=x86_64, RELEASE=main, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z)
Dec 05 09:54:36 np0005546420.localdomain podman[288126]: 2025-12-05 09:54:36.332814674 +0000 UTC m=+0.156567572 container start 660b397bb74e313c3057574250489e57116ed18ec347598c5a654549016de333 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_jackson, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.component=rhceph-container, release=1763362218, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-921d90c0ef8db5a148f4fcd6531ece8d1f26ff29c27c3987a567ac46a8bf3bdd-merged.mount: Deactivated successfully.
Dec 05 09:54:36 np0005546420.localdomain podman[288126]: 2025-12-05 09:54:36.338768247 +0000 UTC m=+0.162521145 container attach 660b397bb74e313c3057574250489e57116ed18ec347598c5a654549016de333 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_jackson, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., version=7, io.openshift.expose-services=, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, release=1763362218, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: libpod-660b397bb74e313c3057574250489e57116ed18ec347598c5a654549016de333.scope: Deactivated successfully.
Dec 05 09:54:36 np0005546420.localdomain podman[288126]: 2025-12-05 09:54:36.445324151 +0000 UTC m=+0.269077059 container died 660b397bb74e313c3057574250489e57116ed18ec347598c5a654549016de333 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_jackson, io.buildah.version=1.41.4, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f13112d400eed17437efdac23f915064b78060fb0b1246339b14bbf36532a7df-merged.mount: Deactivated successfully.
Dec 05 09:54:36 np0005546420.localdomain podman[288167]: 2025-12-05 09:54:36.56440702 +0000 UTC m=+0.103946716 container remove 660b397bb74e313c3057574250489e57116ed18ec347598c5a654549016de333 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_jackson, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-type=git, io.buildah.version=1.41.4, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z)
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: libpod-conmon-660b397bb74e313c3057574250489e57116ed18ec347598c5a654549016de333.scope: Deactivated successfully.
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:54:36 np0005546420.localdomain systemd-rc-local-generator[288208]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:54:36 np0005546420.localdomain systemd-sysv-generator[288212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:36 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:54:37 np0005546420.localdomain systemd-rc-local-generator[288251]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:54:37 np0005546420.localdomain systemd-sysv-generator[288254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: Starting Ceph mon.np0005546420 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b...
Dec 05 09:54:37 np0005546420.localdomain podman[288312]: 
Dec 05 09:54:37 np0005546420.localdomain podman[288312]: 2025-12-05 09:54:37.838336282 +0000 UTC m=+0.084512998 container create 645b8ccfa2de70142d96d855afc4f4edd4e701bee4eab236aac4acc6a90f6630 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546420, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, name=rhceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:54:37 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21466fbcebca3012089ae16af882543bc406498c642f64ed0cca31d52e5cb588/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:37 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21466fbcebca3012089ae16af882543bc406498c642f64ed0cca31d52e5cb588/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:37 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21466fbcebca3012089ae16af882543bc406498c642f64ed0cca31d52e5cb588/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:37 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21466fbcebca3012089ae16af882543bc406498c642f64ed0cca31d52e5cb588/merged/var/lib/ceph/mon/ceph-np0005546420 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:54:37 np0005546420.localdomain podman[288312]: 2025-12-05 09:54:37.899996017 +0000 UTC m=+0.146172773 container init 645b8ccfa2de70142d96d855afc4f4edd4e701bee4eab236aac4acc6a90f6630 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546420, ceph=True, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:54:37 np0005546420.localdomain podman[288312]: 2025-12-05 09:54:37.802419529 +0000 UTC m=+0.048596275 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:54:37 np0005546420.localdomain podman[288312]: 2025-12-05 09:54:37.909718606 +0000 UTC m=+0.155895322 container start 645b8ccfa2de70142d96d855afc4f4edd4e701bee4eab236aac4acc6a90f6630 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546420, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main)
Dec 05 09:54:37 np0005546420.localdomain bash[288312]: 645b8ccfa2de70142d96d855afc4f4edd4e701bee4eab236aac4acc6a90f6630
Dec 05 09:54:37 np0005546420.localdomain systemd[1]: Started Ceph mon.np0005546420 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: pidfile_write: ignore empty --pid-file
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: load: jerasure load: lrc 
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: RocksDB version: 7.9.2
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Git sha 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: DB SUMMARY
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: DB Session ID:  4WA5JLFCDLFMTDS0OOZ2
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: CURRENT file:  CURRENT
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005546420/store.db dir, Total Num: 0, files: 
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005546420/store.db: 000004.log size: 761 ; 
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                         Options.error_if_exists: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                       Options.create_if_missing: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                                     Options.env: 0x56269a26b9e0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                                Options.info_log: 0x56269c2e0d20
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                              Options.statistics: (nil)
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                               Options.use_fsync: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                              Options.db_log_dir: 
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                                 Options.wal_dir: 
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                    Options.write_buffer_manager: 0x56269c2f1540
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                  Options.unordered_write: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                               Options.row_cache: None
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                              Options.wal_filter: None
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.two_write_queues: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.wal_compression: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.atomic_flush: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.max_background_jobs: 2
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.max_background_compactions: -1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.max_subcompactions: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.max_total_wal_size: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                          Options.max_open_files: -1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:       Options.compaction_readahead_size: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Compression algorithms supported:
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         kZSTD supported: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         kXpressCompression supported: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         kBZip2Compression supported: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         kLZ4Compression supported: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         kZlibCompression supported: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         kSnappyCompression supported: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005546420/store.db/MANIFEST-000005
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:           Options.merge_operator: 
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:        Options.compaction_filter: None
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56269c2e0980)
                                                             cache_index_and_filter_blocks: 1
                                                             cache_index_and_filter_blocks_with_high_priority: 0
                                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                                             pin_top_level_index_and_filter: 1
                                                             index_type: 0
                                                             data_block_index_type: 0
                                                             index_shortening: 1
                                                             data_block_hash_table_util_ratio: 0.750000
                                                             checksum: 4
                                                             no_block_cache: 0
                                                             block_cache: 0x56269c2dd350
                                                             block_cache_name: BinnedLRUCache
                                                             block_cache_options:
                                                               capacity : 536870912
                                                               num_shard_bits : 4
                                                               strict_capacity_limit : 0
                                                               high_pri_pool_ratio: 0.000
                                                             block_cache_compressed: (nil)
                                                             persistent_cache: (nil)
                                                             block_size: 4096
                                                             block_size_deviation: 10
                                                             block_restart_interval: 16
                                                             index_block_restart_interval: 1
                                                             metadata_block_size: 4096
                                                             partition_filters: 0
                                                             use_delta_encoding: 1
                                                             filter_policy: bloomfilter
                                                             whole_key_filtering: 1
                                                             verify_compression: 0
                                                             read_amp_bytes_per_bit: 0
                                                             format_version: 5
                                                             enable_index_compression: 1
                                                             block_align: 0
                                                             max_auto_readahead_size: 262144
                                                             prepopulate_block_cache: 0
                                                             initial_auto_readahead_size: 8192
                                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:        Options.write_buffer_size: 33554432
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:  Options.max_write_buffer_number: 2
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:          Options.compression: NoCompression
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.num_levels: 7
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                   Options.table_properties_collectors: 
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005546420/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6c980799-7b55-4c4e-92d8-beaefbaee73e
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928477962661, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928477964822, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928477, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c980799-7b55-4c4e-92d8-beaefbaee73e", "db_session_id": "4WA5JLFCDLFMTDS0OOZ2", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928477964949, "job": 1, "event": "recovery_finished"}
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 05 09:54:37 np0005546420.localdomain sudo[288009]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56269c304e00
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: DB pointer 0x56269c3fa000
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420 does not exist in monmap, will attempt to join an existing cluster
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      1/0    1.84 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                            Sum      1/0    1.84 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Flush(GB): cumulative 0.000, interval 0.000
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x56269c2dd350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.9e-05 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: using public_addr v2:172.18.0.107:0/0 -> [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0]
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: starting mon.np0005546420 rank -1 at public addrs [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] at bind addrs [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005546420 fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(???) e0 preinit fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(synchronizing) e4 sync_obtain_latest_monmap
Dec 05 09:54:37 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(synchronizing) e4 sync_obtain_latest_monmap obtained monmap e4
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(synchronizing).mds e16 new map
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(synchronizing).mds e16 print_map
                                                           e16
                                                           enable_multiple, ever_enabled_multiple: 1,1
                                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           legacy client fscid: 1
                                                            
                                                           Filesystem 'cephfs' (1)
                                                           fs_name        cephfs
                                                           epoch        16
                                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                                           created        2025-12-05T08:10:30.749420+0000
                                                           modified        2025-12-05T09:53:37.952087+0000
                                                           tableserver        0
                                                           root        0
                                                           session_timeout        60
                                                           session_autoclose        300
                                                           max_file_size        1099511627776
                                                           required_client_features        {}
                                                           last_failure        0
                                                           last_failure_osd_epoch        84
                                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           max_mds        1
                                                           in        0
                                                           up        {0=26492}
                                                           failed        
                                                           damaged        
                                                           stopped        
                                                           data_pools        [6]
                                                           metadata_pool        7
                                                           inline_data        disabled
                                                           balancer        
                                                           bal_rank_mask        -1
                                                           standby_count_wanted        1
                                                           qdb_cluster        leader: 26492 members: 26492
                                                           [mds.mds.np0005546420.eqhasr{0:26492} state up:active seq 16 addr [v2:172.18.0.107:6808/530338393,v1:172.18.0.107:6809/530338393] compat {c=[1],r=[1],i=[17ff]}]
                                                            
                                                            
                                                           Standby daemons:
                                                            
                                                           [mds.mds.np0005546419.rweotn{-1:16917} state up:standby seq 1 addr [v2:172.18.0.106:6808/2431590011,v1:172.18.0.106:6809/2431590011] compat {c=[1],r=[1],i=[17ff]}]
                                                           [mds.mds.np0005546421.tuudjq{-1:26486} state up:standby seq 1 addr [v2:172.18.0.108:6808/812129975,v1:172.18.0.108:6809/812129975] compat {c=[1],r=[1],i=[17ff]}]
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(synchronizing).osd e84 crush map has features 3314933000852226048, adjusting msgr requires
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.106:0/4254879914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.107:0/949863280' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3812: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17040 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546419.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label mgr to host np0005546419.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3813: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17046 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546420.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label mgr to host np0005546420.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.108:0/3323973998' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.108:0/1551263168' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17064 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546421.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label mgr to host np0005546421.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3814: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17070 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Saving service mgr spec with placement label:mgr
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Deploying daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3815: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17076 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mgr", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Deploying daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17088 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546415.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label mon to host np0005546415.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3816: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17094 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546415.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label _admin to host np0005546415.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Deploying daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3817: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17106 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546416.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label mon to host np0005546416.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17112 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546416.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label _admin to host np0005546416.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3818: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Standby manager daemon np0005546419.zhsnqq started
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: mgrmap e12: np0005546415.knqtle(active, since 2h), standbys: np0005546418.garyvl, np0005546416.kmqcnq, np0005546419.zhsnqq
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17124 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546418.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label mon to host np0005546418.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3819: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17130 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546418.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Standby manager daemon np0005546420.aoeylc started
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label _admin to host np0005546418.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: mgrmap e13: np0005546415.knqtle(active, since 2h), standbys: np0005546418.garyvl, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3820: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17136 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546419.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label mon to host np0005546419.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17142 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546419.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label _admin to host np0005546419.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Standby manager daemon np0005546421.sukfea started
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3821: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: mgrmap e14: np0005546415.knqtle(active, since 2h), standbys: np0005546418.garyvl, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17148 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546420.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label mon to host np0005546420.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3822: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17154 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546420.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label _admin to host np0005546420.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17160 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546421.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label mon to host np0005546421.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3823: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005546421.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Added label _admin to host np0005546421.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3824: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Saving service mon spec with placement label:mon
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: from='client.17178 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546419", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: pgmap v3825: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: Deploying daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:54:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3
Dec 05 09:54:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:54:39 np0005546420.localdomain podman[288370]: 2025-12-05 09:54:39.523103358 +0000 UTC m=+0.089709657 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:54:39 np0005546420.localdomain podman[288370]: 2025-12-05 09:54:39.561748156 +0000 UTC m=+0.128354415 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=multipathd)
Dec 05 09:54:39 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:54:42 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4faf20 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0
Dec 05 09:54:43 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(probing) e4  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 05 09:54:43 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(probing) e4  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 05 09:54:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@-1(probing) e5  my rank is now 4 (was -1)
Dec 05 09:54:44 np0005546420.localdomain ceph-mon[288331]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:54:44 np0005546420.localdomain ceph-mon[288331]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 05 09:54:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:54:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:54:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:54:44 np0005546420.localdomain podman[288389]: 2025-12-05 09:54:44.51731213 +0000 UTC m=+0.094014209 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Dec 05 09:54:44 np0005546420.localdomain podman[288389]: 2025-12-05 09:54:44.53324586 +0000 UTC m=+0.109947959 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 09:54:44 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:54:44 np0005546420.localdomain podman[288390]: 2025-12-05 09:54:44.623207364 +0000 UTC m=+0.195267340 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:54:44 np0005546420.localdomain podman[288390]: 2025-12-05 09:54:44.638514975 +0000 UTC m=+0.210574981 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:54:44 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:54:45 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 05 09:54:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:54:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:54:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:54:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 09:54:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:54:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18195 "" "Go-http-client/1.1"
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon) e5 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon) e5 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: Deploying daemon mon.np0005546420 on np0005546420.localdomain
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546415"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546416"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546415 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546416 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: pgmap v3827: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546421 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: pgmap v3828: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546415 is new leader, mons np0005546415,np0005546418,np0005546416,np0005546421 in quorum (ranks 0,1,2,3)
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: monmap epoch 4
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: last_changed 2025-12-05T09:54:35.440235+0000
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: min_mon_release 18 (reef)
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: election_strategy: 1
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546415
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546416
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: osdmap e84: 6 total, 6 up, 6 in
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mgrmap e14: np0005546415.knqtle(active, since 2h), standbys: np0005546418.garyvl, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: overall HEALTH_OK
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: pgmap v3829: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: Deploying daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='client.17184 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546419", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mgrc update_daemon_metadata mon.np0005546420 metadata {addrs=[v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005546420.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005546420.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: pgmap v3830: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546415"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546416"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546415 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546416 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546421 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: pgmap v3831: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: pgmap v3832: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546415 is new leader, mons np0005546415,np0005546418,np0005546416 in quorum (ranks 0,1,2)
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: overall HEALTH_OK
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546415 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546416 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546415 is new leader, mons np0005546415,np0005546418,np0005546416,np0005546421,np0005546420 in quorum (ranks 0,1,2,3,4)
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: monmap epoch 5
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: last_changed 2025-12-05T09:54:42.321504+0000
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: min_mon_release 18 (reef)
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: election_strategy: 1
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546415
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546416
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: 4: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005546420
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: osdmap e84: 6 total, 6 up, 6 in
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mgrmap e14: np0005546415.knqtle(active, since 2h), standbys: np0005546418.garyvl, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: overall HEALTH_OK
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:47 np0005546420.localdomain sshd[288430]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 05 09:54:47 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4fb1e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: paxos.4).electionLogic(24) init, last seen epoch 24
Dec 05 09:54:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:54:47 np0005546420.localdomain sudo[288431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:48 np0005546420.localdomain sudo[288431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:48 np0005546420.localdomain sudo[288431]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:48 np0005546420.localdomain sudo[288450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:54:48 np0005546420.localdomain sudo[288450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:48 np0005546420.localdomain sudo[288450]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:48 np0005546420.localdomain sudo[288468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:54:48 np0005546420.localdomain sudo[288468]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:54:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:54:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:54:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:54:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:54:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:54:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:54:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:54:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:54:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:54:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:54:48 np0005546420.localdomain podman[288529]: 2025-12-05 09:54:48.958185431 +0000 UTC m=+0.105139782 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 05 09:54:48 np0005546420.localdomain podman[288529]: 2025-12-05 09:54:48.998408866 +0000 UTC m=+0.145363187 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 09:54:49 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:54:49 np0005546420.localdomain systemd[1]: tmp-crun.XBm2YT.mount: Deactivated successfully.
Dec 05 09:54:49 np0005546420.localdomain podman[288580]: 2025-12-05 09:54:49.124290124 +0000 UTC m=+0.087463098 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=1763362218, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:54:49 np0005546420.localdomain podman[288580]: 2025-12-05 09:54:49.21169628 +0000 UTC m=+0.174869194 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, CEPH_POINT_RELEASE=, release=1763362218, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public)
Dec 05 09:54:49 np0005546420.localdomain sudo[288468]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:50 np0005546420.localdomain sshd[288430]: Connection reset by authenticating user root 91.202.233.33 port 55324 [preauth]
Dec 05 09:54:50 np0005546420.localdomain sshd[288697]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:54:52 np0005546420.localdomain sshd[288697]: Invalid user guest from 91.202.233.33 port 62570
Dec 05 09:54:52 np0005546420.localdomain ceph-mds[283770]: mds.beacon.mds.np0005546420.eqhasr missed beacon ack from the monitors
Dec 05 09:54:52 np0005546420.localdomain sshd[288697]: Connection reset by invalid user guest 91.202.233.33 port 62570 [preauth]
Dec 05 09:54:52 np0005546420.localdomain ceph-mon[288331]: paxos.4).electionLogic(25) init, last seen epoch 25, mid-election, bumping
Dec 05 09:54:52 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:54:52 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:54:52 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:54:52 np0005546420.localdomain sshd[288699]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:54:52 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:54:53 np0005546420.localdomain sudo[288700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:54:53 np0005546420.localdomain sudo[288700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:53 np0005546420.localdomain sudo[288700]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:53 np0005546420.localdomain sudo[288718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:54:53 np0005546420.localdomain sudo[288718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:53 np0005546420.localdomain sudo[288718]: pam_unix(sudo:session): session closed for user root
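This sudo trail is one pass of the cephadm serve loop: ssh in as ceph-admin, locate python3, then execute the staged cephadm copy with gather-facts, which prints host facts as JSON on stdout. A sketch of the same call; the JSON field names here are an assumption, not verified against this build:

    import json
    import subprocess

    # Same invocation as the sudo line above; gather-facts emits JSON.
    out = subprocess.check_output(
        ["sudo", "/bin/python3",
         "/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/"
         "cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3",
         "--timeout", "895", "gather-facts"])
    facts = json.loads(out)
    print(facts.get("hostname"), facts.get("operating_system"))  # assumed keys
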
Dec 05 09:54:54 np0005546420.localdomain sudo[288769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:54:54 np0005546420.localdomain sudo[288769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288769]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420 calling monitor election
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546416 calling monitor election
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546421 calling monitor election
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546419 calling monitor election
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: monmap epoch 6
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: last_changed 2025-12-05T09:54:47.841149+0000
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: min_mon_release 18 (reef)
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: election_strategy: 1
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546415
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546416
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 4: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005546420
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 5: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005546419
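The block above is a full monmap dump: epoch 6, six monitors ranked 0-5, msgr v2 on port 3300 and v1 on 6789, election_strategy 1 (classic). The same data can be read in structured form; a minimal sketch, assuming an admin keyring is available on the host:

    import json
    import subprocess

    monmap = json.loads(subprocess.check_output(
        ["ceph", "mon", "dump", "--format", "json"]))
    print(monmap["epoch"], monmap["election_strategy"])
    for m in monmap["mons"]:
        # rank / name / addrs, as printed in the dump above
        print(m["rank"], m["name"], m["public_addrs"]["addrvec"][0]["addr"])
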
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546416 calling monitor election
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546421 calling monitor election
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: osdmap e84: 6 total, 6 up, 6 in
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mgrmap e14: np0005546415.knqtle(active, since 2h), standbys: np0005546418.garyvl, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: Health check failed: 2/6 mons down, quorum np0005546415,np0005546418,np0005546416,np0005546421 (MON_DOWN)
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: overall HEALTH_OK
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546415 calling monitor election
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546415 is new leader, mons np0005546415,np0005546418,np0005546416,np0005546421,np0005546420,np0005546419 in quorum (ranks 0,1,2,3,4,5)
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: monmap epoch 6
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: last_changed 2025-12-05T09:54:47.841149+0000
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: min_mon_release 18 (reef)
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: election_strategy: 1
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546415
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546416
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 4: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005546420
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: 5: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005546419
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: osdmap e84: 6 total, 6 up, 6 in
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: mgrmap e14: np0005546415.knqtle(active, since 2h), standbys: np0005546418.garyvl, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: Health check cleared: MON_DOWN (was: 2/6 mons down, quorum np0005546415,np0005546418,np0005546416,np0005546421)
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: Cluster is now healthy
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: overall HEALTH_OK
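Net effect of the election round above: mon.np0005546415 leads a full six-member quorum (ranks 0-5), the MON_DOWN check clears, and the cluster returns to HEALTH_OK. A quick scripted check for the same state, assuming client access:

    import json
    import subprocess

    # quorum_status always emits JSON; shows leader and member names.
    qs = json.loads(subprocess.check_output(["ceph", "quorum_status"]))
    print(qs["quorum_leader_name"], qs["quorum_names"])

    health = json.loads(subprocess.check_output(
        ["ceph", "health", "--format", "json"]))
    print(health["status"])  # "HEALTH_OK" once MON_DOWN clears
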
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:54:54 np0005546420.localdomain sudo[288787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:54:54 np0005546420.localdomain sudo[288787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288787]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 np0005546420.localdomain sudo[288805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:54:54 np0005546420.localdomain sudo[288805]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288805]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 np0005546420.localdomain sudo[288823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:54 np0005546420.localdomain sudo[288823]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288823]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 np0005546420.localdomain sudo[288841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:54:54 np0005546420.localdomain sudo[288841]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288841]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 np0005546420.localdomain sudo[288875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:54:54 np0005546420.localdomain sudo[288875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288875]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 np0005546420.localdomain sudo[288893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:54:54 np0005546420.localdomain sudo[288893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288893]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 np0005546420.localdomain sudo[288911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:54:54 np0005546420.localdomain sudo[288911]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288911]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 np0005546420.localdomain sudo[288929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:54:54 np0005546420.localdomain sudo[288929]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288929]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 np0005546420.localdomain sudo[288947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:54:54 np0005546420.localdomain sudo[288947]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288947]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:54 np0005546420.localdomain sudo[288965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:54:54 np0005546420.localdomain sudo[288965]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:54 np0005546420.localdomain sudo[288965]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:55 np0005546420.localdomain sudo[288983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:54:55 np0005546420.localdomain sudo[288983]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:55 np0005546420.localdomain sudo[288983]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:55 np0005546420.localdomain sudo[289001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:54:55 np0005546420.localdomain sshd[288699]: Connection reset by authenticating user root 91.202.233.33 port 62584 [preauth]
Dec 05 09:54:55 np0005546420.localdomain sudo[289001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:55 np0005546420.localdomain sudo[289001]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: pgmap v3836: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: Updating np0005546415.localdomain:/etc/ceph/ceph.conf
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: from='client.26539 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546419", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:54:55 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:55 np0005546420.localdomain sudo[289035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:54:55 np0005546420.localdomain sudo[289035]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:55 np0005546420.localdomain sshd[289052]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:54:55 np0005546420.localdomain sudo[289035]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:55 np0005546420.localdomain sudo[289054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:54:55 np0005546420.localdomain sudo[289054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:55 np0005546420.localdomain sudo[289054]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:55 np0005546420.localdomain sudo[289072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:55 np0005546420.localdomain sudo[289072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:55 np0005546420.localdomain sudo[289072]: pam_unix(sudo:session): session closed for user root
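The mkdir/touch/chown/chmod/mv churn above is cephadm distributing a regenerated ceph.conf: stage the file as ceph.conf.new under /tmp/cephadm-<fsid>/, set ownership 0:0 and mode 644, then mv it over the live path so readers never see a half-written config. The same intent in-process looks roughly like the sketch below; here the temp file stays in the target directory so the final rename is atomic, which the /tmp staging in the log is not guaranteed to be across filesystems:

    import os
    import tempfile

    def atomic_install(path: str, data: bytes, mode: int = 0o644) -> None:
        os.makedirs(os.path.dirname(path), exist_ok=True)
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path), suffix=".new")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
            os.chmod(tmp, mode)      # the chmod 644 step
            os.chown(tmp, 0, 0)      # the chown 0:0 step (needs root)
            os.replace(tmp, path)    # the final mv into place
        except BaseException:
            os.unlink(tmp)
            raise
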
Dec 05 09:54:56 np0005546420.localdomain sudo[289091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:54:56 np0005546420.localdomain sudo[289091]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:54:56 np0005546420.localdomain sudo[289091]: pam_unix(sudo:session): session closed for user root
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: Updating np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='client.34102 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546420", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:54:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:57 np0005546420.localdomain ceph-mon[288331]: pgmap v3837: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:57 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546415 (monmap changed)...
Dec 05 09:54:57 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546415 on np0005546415.localdomain
Dec 05 09:54:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546415.knqtle", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:54:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:54:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:54:57 np0005546420.localdomain podman[289109]: 2025-12-05 09:54:57.515405959 +0000 UTC m=+0.088899622 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 09:54:57 np0005546420.localdomain podman[289109]: 2025-12-05 09:54:57.531546436 +0000 UTC m=+0.105040109 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:54:57 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
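The lines above are one complete healthcheck cycle: systemd starts a transient unit that runs podman healthcheck run against the ceilometer_agent_compute container, podman logs health_status=healthy plus the exec_died event for the probe process, and the unit deactivates. Scripted, the same probe is just an exit-code check:

    import subprocess

    def container_healthy(cid: str) -> bool:
        # Exit 0 means the container's configured healthcheck command passed.
        res = subprocess.run(["podman", "healthcheck", "run", cid],
                             capture_output=True, text=True)
        return res.returncode == 0

    print(container_healthy("94fe534dba23"))  # id prefix from the log above
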
Dec 05 09:54:58 np0005546420.localdomain ceph-mon[288331]: from='client.26545 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546421", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:54:58 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546415.knqtle (monmap changed)...
Dec 05 09:54:58 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546415.knqtle on np0005546415.localdomain
Dec 05 09:54:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546415.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:54:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:54:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:59 np0005546420.localdomain sshd[289052]: Connection reset by authenticating user root 91.202.233.33 port 62596 [preauth]
Dec 05 09:54:59 np0005546420.localdomain sshd[289128]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:54:59 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546415 (monmap changed)...
Dec 05 09:54:59 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546415 on np0005546415.localdomain
Dec 05 09:54:59 np0005546420.localdomain ceph-mon[288331]: pgmap v3838: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:54:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:59 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.103:0/4197787940' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 05 09:54:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:54:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:54:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:00 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546416 (monmap changed)...
Dec 05 09:55:00 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546416 on np0005546416.localdomain
Dec 05 09:55:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:55:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:55:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:55:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:55:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:00 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.103:0/612572294' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546416 (monmap changed)...
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546416 on np0005546416.localdomain
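Because the monmap changed, the orchestrator is walking every daemon (mon, mgr, crash) and reconfiguring it host by host; the interleaved client.admin "orch ps" dispatches are callers polling per-daemon state while that happens. A sketch of the same poll, with JSON field names assumed from recent Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "orch", "ps", "--daemon-type", "mon",
         "--daemon-id", "np0005546420", "--format", "json"])
    for d in json.loads(out):
        print(d.get("daemon_id"), d.get("status_desc"))  # e.g. "running"
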
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: pgmap v3839: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546416.kmqcnq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:01 np0005546420.localdomain sshd[289128]: Connection reset by authenticating user root 91.202.233.33 port 62602 [preauth]
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon).osd e84 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon).osd e84 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 05 09:55:01 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon).osd e85 e85: 6 total, 6 up, 6 in
Dec 05 09:55:01 np0005546420.localdomain sshd[26376]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:01 np0005546420.localdomain sshd[26433]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-17.scope: Deactivated successfully.
Dec 05 09:55:01 np0005546420.localdomain sshd[26545]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-26.scope: Deactivated successfully.
Dec 05 09:55:01 np0005546420.localdomain sshd[26490]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-26.scope: Consumed 3min 30.351s CPU time.
Dec 05 09:55:01 np0005546420.localdomain sshd[26395]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-22.scope: Deactivated successfully.
Dec 05 09:55:01 np0005546420.localdomain sshd[26471]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-24.scope: Deactivated successfully.
Dec 05 09:55:01 np0005546420.localdomain sshd[26354]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-23.scope: Deactivated successfully.
Dec 05 09:55:01 np0005546420.localdomain sshd[26509]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:01 np0005546420.localdomain sshd[26414]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-19.scope: Deactivated successfully.
Dec 05 09:55:01 np0005546420.localdomain sshd[26337]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-16.scope: Deactivated successfully.
Dec 05 09:55:01 np0005546420.localdomain systemd-logind[762]: Session 17 logged out. Waiting for processes to exit.
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-20.scope: Deactivated successfully.
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-18.scope: Deactivated successfully.
Dec 05 09:55:01 np0005546420.localdomain systemd[1]: session-14.scope: Deactivated successfully.
Dec 05 09:55:01 np0005546420.localdomain systemd-logind[762]: Session 16 logged out. Waiting for processes to exit.
Dec 05 09:55:01 np0005546420.localdomain systemd-logind[762]: Session 23 logged out. Waiting for processes to exit.
Dec 05 09:55:01 np0005546420.localdomain systemd-logind[762]: Session 22 logged out. Waiting for processes to exit.
Dec 05 09:55:01 np0005546420.localdomain systemd-logind[762]: Session 24 logged out. Waiting for processes to exit.
Dec 05 09:55:01 np0005546420.localdomain systemd-logind[762]: Session 26 logged out. Waiting for processes to exit.
Dec 05 09:55:01 np0005546420.localdomain systemd-logind[762]: Session 20 logged out. Waiting for processes to exit.
Dec 05 09:55:01 np0005546420.localdomain systemd-logind[762]: Session 18 logged out. Waiting for processes to exit.
Dec 05 09:55:01 np0005546420.localdomain systemd-logind[762]: Session 14 logged out. Waiting for processes to exit.
Dec 05 09:55:01 np0005546420.localdomain systemd-logind[762]: Session 19 logged out. Waiting for processes to exit.
Dec 05 09:55:01 np0005546420.localdomain sshd[26526]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:02 np0005546420.localdomain systemd[1]: session-25.scope: Deactivated successfully.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 17.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Session 25 logged out. Waiting for processes to exit.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 26.
Dec 05 09:55:02 np0005546420.localdomain sshd[26452]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 22.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 24.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 23.
Dec 05 09:55:02 np0005546420.localdomain systemd[1]: session-21.scope: Deactivated successfully.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Session 21 logged out. Waiting for processes to exit.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 19.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 16.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 20.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 18.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 14.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 25.
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: Removed session 21.
Dec 05 09:55:02 np0005546420.localdomain sshd[289131]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:55:02 np0005546420.localdomain sshd[289131]: Accepted publickey for ceph-admin from 192.168.122.105 port 55498 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 09:55:02 np0005546420.localdomain systemd-logind[762]: New session 63 of user ceph-admin.
Dec 05 09:55:02 np0005546420.localdomain systemd[1]: Started Session 63 of User ceph-admin.
Dec 05 09:55:02 np0005546420.localdomain sshd[289131]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 09:55:02 np0005546420.localdomain sudo[289135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:02 np0005546420.localdomain sudo[289135]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:02 np0005546420.localdomain sudo[289135]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:02 np0005546420.localdomain sudo[289153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:55:02 np0005546420.localdomain sudo[289153]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546416.kmqcnq (monmap changed)...
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546416.kmqcnq on np0005546416.localdomain
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' 
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.103:0/2547002603' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: Activating manager daemon np0005546418.garyvl
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546415"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546416"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr metadata", "who": "np0005546418.garyvl", "id": "np0005546418.garyvl"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: osdmap e85: 6 total, 6 up, 6 in
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr metadata", "who": "np0005546416.kmqcnq", "id": "np0005546416.kmqcnq"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.103:0/2547002603' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: mgrmap e15: np0005546418.garyvl(active, starting, since 0.0730246s), standbys: np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mds metadata"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: Manager daemon np0005546418.garyvl is now available
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546418.garyvl/mirror_snapshot_schedule"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546418.garyvl/mirror_snapshot_schedule"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546418.garyvl/trash_purge_schedule"} : dispatch
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546418.garyvl/trash_purge_schedule"} : dispatch
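This is a manager failover: client.admin issues "mgr fail", the monitors activate standby np0005546418.garyvl (mgrmap e15, "active, starting"), the new mgr re-reads mon/mds/osd/mgr metadata, and its rbd_support module clears its per-instance schedule keys. Driving and watching the same failover from a script, assuming admin privileges:

    import json
    import subprocess
    import time

    subprocess.check_call(["ceph", "mgr", "fail"])  # fail the active mgr
    while True:
        stat = json.loads(subprocess.check_output(
            ["ceph", "mgr", "stat", "--format", "json"]))
        if stat.get("available"):
            print("new active mgr:", stat["active_name"])
            break
        time.sleep(1)
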
Dec 05 09:55:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:02.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:55:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:02.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:55:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:02.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:55:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:02.889 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
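The nova-compute DEBUG lines are oslo.service's periodic-task runner walking ComputeManager's scheduled methods; each task is an instance method registered with the periodic_task decorator on a PeriodicTasks subclass. A toy equivalent, with class and task names invented for illustration:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class ToyManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_cache(self, context):
            # Logged as "Running periodic task ToyManager._heal_cache"
            print("rebuilding instance info cache")

    # Normally driven by a looping call inside the service.
    ToyManager().run_periodic_tasks(context=None)
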
Dec 05 09:55:02 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon).osd e85 _set_new_cache_sizes cache_size:1019576689 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:55:03 np0005546420.localdomain podman[289215]: 2025-12-05 09:55:03.121277826 +0000 UTC m=+0.071078485 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:55:03 np0005546420.localdomain podman[289215]: 2025-12-05 09:55:03.156496369 +0000 UTC m=+0.106297038 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 09:55:03 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:55:03 np0005546420.localdomain systemd[1]: tmp-crun.EEHVQM.mount: Deactivated successfully.
Dec 05 09:55:03 np0005546420.localdomain podman[289260]: 2025-12-05 09:55:03.282607924 +0000 UTC m=+0.082616459 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, RELEASE=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=1763362218, io.buildah.version=1.41.4, version=7)
Dec 05 09:55:03 np0005546420.localdomain podman[289260]: 2025-12-05 09:55:03.3986912 +0000 UTC m=+0.198699725 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, vcs-type=git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, ceph=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:55:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:03.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:55:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:03.888 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:55:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:03.888 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: mgrmap e16: np0005546418.garyvl(active, since 1.11847s), standbys: np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:55:03] ENGINE Bus STARTING
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:55:03] ENGINE Serving on http://172.18.0.105:8765
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.32:0/2461894034' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.32:0/2461894034' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain sudo[289153]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:55:04.115 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:55:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:55:04.116 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:55:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:55:04.116 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
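The acquiring/acquired/released triple from ovn_metadata_agent is oslo.concurrency's standard lock tracing around a named internal lock; note the guarded section held the lock for under a millisecond. Equivalent application code:

    from oslo_concurrency import lockutils

    # Produces the same acquiring/acquired/released DEBUG trio when
    # oslo.concurrency debug logging is enabled.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # per-process liveness checks would run under the lock

    check_child_processes()
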
Dec 05 09:55:04 np0005546420.localdomain sudo[289381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:04 np0005546420.localdomain sudo[289381]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:04 np0005546420.localdomain sudo[289381]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:04 np0005546420.localdomain sudo[289399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:55:04 np0005546420.localdomain sudo[289399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:04.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:55:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:04.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:55:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:04.901 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:55:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:04.902 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:55:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:04.902 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:55:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:04.903 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:55:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:04.904 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:55:03] ENGINE Serving on https://172.18.0.105:7150
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:55:03] ENGINE Bus STARTED
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:55:03] ENGINE Client ('172.18.0.105', 37538) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: mgrmap e17: np0005546418.garyvl(active, since 2s), standbys: np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:05 np0005546420.localdomain sudo[289399]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:05 np0005546420.localdomain sudo[289469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:05 np0005546420.localdomain sudo[289469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:05 np0005546420.localdomain sudo[289469]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:05 np0005546420.localdomain sudo[289487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 09:55:05 np0005546420.localdomain sudo[289487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:05.379 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
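The two CMD lines above show nova's resource audit shelling out to the Ceph CLI. A minimal sketch, assuming the same ceph.conf and the client.openstack keyring are readable, of issuing the identical query and reading the cluster totals from its JSON output:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    # e.g. ~42 GiB total / ~41 GiB avail, matching the pgmap lines above
    print(stats["total_bytes"], stats["total_avail_bytes"])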
Dec 05 09:55:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:55:05 np0005546420.localdomain systemd[1]: tmp-crun.YQDKXb.mount: Deactivated successfully.
Dec 05 09:55:05 np0005546420.localdomain podman[289507]: 2025-12-05 09:55:05.495237689 +0000 UTC m=+0.076438829 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:55:05 np0005546420.localdomain podman[289507]: 2025-12-05 09:55:05.506462224 +0000 UTC m=+0.087663334 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:55:05 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:55:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:05.545 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:55:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:05.546 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12335MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:55:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:05.547 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:55:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:05.547 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:55:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:05.624 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:55:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:05.625 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:55:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:05.638 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:55:05 np0005546420.localdomain sudo[289487]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:05 np0005546420.localdomain sudo[289569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:55:05 np0005546420.localdomain sudo[289569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:05 np0005546420.localdomain sudo[289569]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain sudo[289587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:55:06 np0005546420.localdomain sudo[289587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289587]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:06.077 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:55:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:06.084 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:55:06 np0005546420.localdomain sudo[289607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:55:06 np0005546420.localdomain sudo[289607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289607]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain sudo[289625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:06 np0005546420.localdomain sudo[289625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289625]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:06.255 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:55:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:06.257 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:55:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:06.257 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
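The inventory dict logged at 09:55:06.255 is what placement uses to size this node. A small worked check (values copied from that line; the capacity formula is the standard placement one, (total - reserved) * allocation_ratio):

    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0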
Dec 05 09:55:06 np0005546420.localdomain sudo[289643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:55:06 np0005546420.localdomain sudo[289643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289643]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.107:0/3812049454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: mgrmap e18: np0005546418.garyvl(active, since 3s), standbys: np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546415", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546415", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
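The three "Unable to set" errors above are simple arithmetic: the autotuner divides the hosts' memory budget down to 877246668 bytes per OSD, which falls below the floor Ceph enforces for osd_memory_target. A quick check of the numbers in the messages:

    proposed = 877246668            # bytes, the value the autotuner computed
    minimum  = 939524096            # bytes, i.e. 896 * 1024**2, Ceph's floor
    print(round(proposed / 2**20, 1))   # 836.6 -> the "836.6M" in the log
    print(minimum / 2**20)              # 896.0
    print(proposed >= minimum)          # False -> "below minimum" error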
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Updating np0005546415.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.106:0/1198559030' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.107:0/4093047416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:55:06 np0005546420.localdomain sudo[289677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:55:06 np0005546420.localdomain sudo[289677]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289677]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain sudo[289695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:55:06 np0005546420.localdomain sudo[289695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289695]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain sudo[289713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:55:06 np0005546420.localdomain sudo[289713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289713]: pam_unix(sudo:session): session closed for user root
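The sudo sequence above (mkdir, touch *.new, chown, chmod, mv) is cephadm staging a new /etc/ceph/ceph.conf and swapping it into place so readers never observe a half-written file. A minimal sketch of the same staged-write idea (an illustrative helper, not cephadm's code; note os.replace is only atomic when source and destination share a filesystem, whereas the log stages under /tmp and lets mv handle the copy):

    import os

    def install_config(data: bytes, dest: str, mode: int = 0o644) -> None:
        tmp = dest + ".new"              # same naming convention as the log
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())         # make the staged bytes durable
        os.chmod(tmp, mode)              # 644 for ceph.conf, 600 for keyrings
        os.chown(tmp, 0, 0)              # root:root, like the chown -R 0:0
        os.replace(tmp, dest)            # atomic swap on one filesystem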
Dec 05 09:55:06 np0005546420.localdomain sudo[289731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:55:06 np0005546420.localdomain sudo[289731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289731]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain sudo[289749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:55:06 np0005546420.localdomain sudo[289749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289749]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain sudo[289767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:55:06 np0005546420.localdomain sudo[289767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289767]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain sudo[289785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:06 np0005546420.localdomain sudo[289785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289785]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:06 np0005546420.localdomain sudo[289803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:55:06 np0005546420.localdomain sudo[289803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:06 np0005546420.localdomain sudo[289803]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain sudo[289837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:55:07 np0005546420.localdomain sudo[289837]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:07 np0005546420.localdomain sudo[289837]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain sudo[289855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:55:07 np0005546420.localdomain sudo[289855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:07 np0005546420.localdomain sudo[289855]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:07.254 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:55:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:07.254 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:55:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:07.255 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:55:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:07.255 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:55:07 np0005546420.localdomain sudo[289873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:07 np0005546420.localdomain sudo[289873]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:07 np0005546420.localdomain sudo[289873]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain sudo[289891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:55:07 np0005546420.localdomain sudo[289891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:07 np0005546420.localdomain sudo[289891]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain sudo[289909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:55:07 np0005546420.localdomain sudo[289909]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:07 np0005546420.localdomain sudo[289909]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain sudo[289927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:55:07 np0005546420.localdomain sudo[289927]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:07 np0005546420.localdomain sudo[289927]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain sudo[289945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:07 np0005546420.localdomain sudo[289945]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:07 np0005546420.localdomain sudo[289945]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain sudo[289963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:55:07 np0005546420.localdomain sudo[289963]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:07 np0005546420.localdomain sudo[289963]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain sudo[289997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:55:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:55:07.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:55:07 np0005546420.localdomain sudo[289997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:07 np0005546420.localdomain sudo[289997]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: Updating np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.106:0/2198987355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: Standby manager daemon np0005546415.knqtle started
Dec 05 09:55:07 np0005546420.localdomain sudo[290015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:55:07 np0005546420.localdomain sudo[290015]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:07 np0005546420.localdomain sudo[290015]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:07 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon).osd e85 _set_new_cache_sizes cache_size:1020042225 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:08 np0005546420.localdomain sudo[290033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain sudo[290033]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:08 np0005546420.localdomain sudo[290033]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:08 np0005546420.localdomain sudo[290051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:55:08 np0005546420.localdomain sudo[290051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:08 np0005546420.localdomain sudo[290051]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:08 np0005546420.localdomain sudo[290069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:55:08 np0005546420.localdomain sudo[290069]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:08 np0005546420.localdomain sudo[290069]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3714341566' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:55:08 np0005546420.localdomain sudo[290087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:55:08 np0005546420.localdomain sudo[290087]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:08 np0005546420.localdomain sudo[290087]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:08 np0005546420.localdomain sudo[290105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:08 np0005546420.localdomain sudo[290105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:08 np0005546420.localdomain sudo[290105]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:08 np0005546420.localdomain sudo[290123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:55:08 np0005546420.localdomain sudo[290123]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:08 np0005546420.localdomain sudo[290123]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:08 np0005546420.localdomain sudo[290157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:55:08 np0005546420.localdomain sudo[290157]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:08 np0005546420.localdomain sudo[290157]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:08 np0005546420.localdomain sudo[290175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:55:08 np0005546420.localdomain sudo[290175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:08 np0005546420.localdomain sudo[290175]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:08 np0005546420.localdomain sudo[290193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain sudo[290193]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:08 np0005546420.localdomain sudo[290193]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: Updating np0005546415.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr metadata", "who": "np0005546415.knqtle", "id": "np0005546415.knqtle"} : dispatch
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: mgrmap e19: np0005546418.garyvl(active, since 6s), standbys: np0005546415.knqtle, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: Updating np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.108:0/3714341566' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:55:09 np0005546420.localdomain sudo[290211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:55:09 np0005546420.localdomain sudo[290211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:09 np0005546420.localdomain sudo[290211]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:55:09 np0005546420.localdomain podman[290229]: 2025-12-05 09:55:09.842678607 +0000 UTC m=+0.086784297 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 09:55:09 np0005546420.localdomain podman[290229]: 2025-12-05 09:55:09.854300944 +0000 UTC m=+0.098406674 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 09:55:09 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:55:10 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.108:0/118535931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:55:10 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:55:10 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:55:10 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:11 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546418 (monmap changed)...
Dec 05 09:55:11 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 05 09:55:11 np0005546420.localdomain ceph-mon[288331]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Dec 05 09:55:11 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:11 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:11 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:55:11 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:55:11 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:55:11 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:12 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 05 09:55:12 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 05 09:55:12 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:12 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:12 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:55:12 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:55:12 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:12 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054407 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:13 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 05 09:55:13 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
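The paired audit lines above (cmd={"prefix": "auth get-or-create", ...}) are the mon-side record of cephadm provisioning a crash-agent key. The same request in client form, sketched via subprocess in the style of the other examples here (entity name copied from the log; caps pairs are positional arguments to this command):

    import subprocess

    keyring = subprocess.check_output([
        "ceph", "auth", "get-or-create",
        "client.crash.np0005546418.localdomain",
        "mon", "profile crash",
        "mgr", "profile crash",
    ])
    print(keyring.decode())   # "[client.crash...]\n\tkey = ..." on success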
Dec 05 09:55:13 np0005546420.localdomain ceph-mon[288331]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 05 09:55:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:55:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:55:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:14 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:55:14 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:55:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:55:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:15 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.0 (monmap changed)...
Dec 05 09:55:15 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:55:15 np0005546420.localdomain ceph-mon[288331]: from='client.17334 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:55:15 np0005546420.localdomain ceph-mon[288331]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 09:55:15 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:15 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.3 (monmap changed)...
Dec 05 09:55:15 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 09:55:15 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:15 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:55:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:55:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:55:15 np0005546420.localdomain systemd[1]: tmp-crun.K4qjYu.mount: Deactivated successfully.
Dec 05 09:55:15 np0005546420.localdomain podman[290250]: 2025-12-05 09:55:15.52270853 +0000 UTC m=+0.095694101 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 09:55:15 np0005546420.localdomain podman[290251]: 2025-12-05 09:55:15.561662167 +0000 UTC m=+0.131425429 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:55:15 np0005546420.localdomain podman[290251]: 2025-12-05 09:55:15.600317975 +0000 UTC m=+0.170081227 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:55:15 np0005546420.localdomain podman[290250]: 2025-12-05 09:55:15.604258116 +0000 UTC m=+0.177243647 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 09:55:15 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:55:15 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:55:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:55:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:55:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:55:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 09:55:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:55:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18213 "" "Go-http-client/1.1"
Dec 05 09:55:17 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4fb1e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0
Dec 05 09:55:17 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@4(peon) e7  my rank is now 3 (was 4)
Dec 05 09:55:17 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 05 09:55:17 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 05 09:55:17 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4fb600 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 05 09:55:17 np0005546420.localdomain ceph-mon[288331]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:55:17 np0005546420.localdomain ceph-mon[288331]: paxos.3).electionLogic(32) init, last seen epoch 32
Dec 05 09:55:17 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:55:17 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:55:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:55:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:55:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:55:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:55:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:55:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:55:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:55:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:55:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:55:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:55:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:55:19 np0005546420.localdomain podman[290295]: 2025-12-05 09:55:19.507298951 +0000 UTC m=+0.084676774 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 09:55:19 np0005546420.localdomain podman[290295]: 2025-12-05 09:55:19.549494427 +0000 UTC m=+0.126872220 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 09:55:19 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='client.17340 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546415", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546419 (monmap changed)...
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='client.34126 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005546415"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: Remove daemons mon.np0005546415
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "quorum_status"} : dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: Safe to remove mon.np0005546415: new quorum should be ['np0005546418', 'np0005546416', 'np0005546421', 'np0005546420', 'np0005546419'] (from ['np0005546418', 'np0005546416', 'np0005546421', 'np0005546420', 'np0005546419'])
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: Removing monitor np0005546415 from monmap...
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon rm", "name": "np0005546415"} : dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: Removing daemon mon.np0005546415 from np0005546415.localdomain -- ports []
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420 calling monitor election
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546416"} : dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: mon.np0005546416 calling monitor election
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: mon.np0005546419 calling monitor election
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: mon.np0005546421 calling monitor election
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 is new leader, mons np0005546418,np0005546416,np0005546421,np0005546420,np0005546419 in quorum (ranks 0,1,2,3,4)
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: monmap epoch 7
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: last_changed 2025-12-05T09:55:17.747076+0000
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: min_mon_release 18 (reef)
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: election_strategy: 1
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546416
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005546420
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: 4: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005546419
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: osdmap e85: 6 total, 6 up, 6 in
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: mgrmap e19: np0005546418.garyvl(active, since 20s), standbys: np0005546415.knqtle, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: overall HEALTH_OK
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:55:22 np0005546420.localdomain sudo[290320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:22 np0005546420.localdomain sudo[290320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:22 np0005546420.localdomain sudo[290320]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:22 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054724 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:23 np0005546420.localdomain sudo[290338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:23 np0005546420.localdomain sudo[290338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:23 np0005546420.localdomain podman[290374]: 2025-12-05 09:55:23.546760168 +0000 UTC m=+0.087685616 container create ad826aa5cebd82fa166183c511881a8402c758c9d6bc30b48dbd72a8d1cc6b40 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_feynman, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:55:23 np0005546420.localdomain systemd[1]: Started libpod-conmon-ad826aa5cebd82fa166183c511881a8402c758c9d6bc30b48dbd72a8d1cc6b40.scope.
Dec 05 09:55:23 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:55:23 np0005546420.localdomain podman[290374]: 2025-12-05 09:55:23.508525753 +0000 UTC m=+0.049451231 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:55:23 np0005546420.localdomain podman[290374]: 2025-12-05 09:55:23.612701344 +0000 UTC m=+0.153626762 container init ad826aa5cebd82fa166183c511881a8402c758c9d6bc30b48dbd72a8d1cc6b40 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_feynman, description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.41.4, name=rhceph, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, release=1763362218, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, RELEASE=main)
Dec 05 09:55:23 np0005546420.localdomain podman[290374]: 2025-12-05 09:55:23.622539016 +0000 UTC m=+0.163464455 container start ad826aa5cebd82fa166183c511881a8402c758c9d6bc30b48dbd72a8d1cc6b40 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_feynman, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:55:23 np0005546420.localdomain podman[290374]: 2025-12-05 09:55:23.622690731 +0000 UTC m=+0.163616149 container attach ad826aa5cebd82fa166183c511881a8402c758c9d6bc30b48dbd72a8d1cc6b40 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_feynman, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=)
Dec 05 09:55:23 np0005546420.localdomain nice_feynman[290389]: 167 167
Dec 05 09:55:23 np0005546420.localdomain systemd[1]: libpod-ad826aa5cebd82fa166183c511881a8402c758c9d6bc30b48dbd72a8d1cc6b40.scope: Deactivated successfully.
Dec 05 09:55:23 np0005546420.localdomain podman[290374]: 2025-12-05 09:55:23.628787988 +0000 UTC m=+0.169713466 container died ad826aa5cebd82fa166183c511881a8402c758c9d6bc30b48dbd72a8d1cc6b40 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_feynman, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:55:23 np0005546420.localdomain podman[290394]: 2025-12-05 09:55:23.738278262 +0000 UTC m=+0.095944578 container remove ad826aa5cebd82fa166183c511881a8402c758c9d6bc30b48dbd72a8d1cc6b40 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_feynman, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-type=git, release=1763362218, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7)
Dec 05 09:55:23 np0005546420.localdomain systemd[1]: libpod-conmon-ad826aa5cebd82fa166183c511881a8402c758c9d6bc30b48dbd72a8d1cc6b40.scope: Deactivated successfully.
Dec 05 09:55:23 np0005546420.localdomain sudo[290338]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ade2e188aecba5b145c067fcedffe4e9fa12a991c14c16040aca0db723cc4000-merged.mount: Deactivated successfully.
Dec 05 09:55:24 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:55:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:24 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:55:25 np0005546420.localdomain sudo[290410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:25 np0005546420.localdomain sudo[290410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:25 np0005546420.localdomain sudo[290410]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:25 np0005546420.localdomain sudo[290428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:25 np0005546420.localdomain sudo[290428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:25 np0005546420.localdomain ceph-mon[288331]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:25 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:25 np0005546420.localdomain ceph-mon[288331]: from='client.34133 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546415.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:55:25 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:25 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.1 (monmap changed)...
Dec 05 09:55:25 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 09:55:25 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:25 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:55:25 np0005546420.localdomain podman[290463]: 2025-12-05 09:55:25.74821583 +0000 UTC m=+0.085833738 container create 348486eecc44df568311084fce64ca5065bae6e44a571e023767d3bbac417aa6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_keldysh, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=1763362218, architecture=x86_64, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:55:25 np0005546420.localdomain systemd[1]: Started libpod-conmon-348486eecc44df568311084fce64ca5065bae6e44a571e023767d3bbac417aa6.scope.
Dec 05 09:55:25 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:55:25 np0005546420.localdomain podman[290463]: 2025-12-05 09:55:25.815058583 +0000 UTC m=+0.152676531 container init 348486eecc44df568311084fce64ca5065bae6e44a571e023767d3bbac417aa6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_keldysh, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, name=rhceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64)
Dec 05 09:55:25 np0005546420.localdomain podman[290463]: 2025-12-05 09:55:25.717230568 +0000 UTC m=+0.054848516 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:55:25 np0005546420.localdomain systemd[1]: tmp-crun.nSWlJg.mount: Deactivated successfully.
Dec 05 09:55:25 np0005546420.localdomain podman[290463]: 2025-12-05 09:55:25.829701334 +0000 UTC m=+0.167319242 container start 348486eecc44df568311084fce64ca5065bae6e44a571e023767d3bbac417aa6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_keldysh, version=7, architecture=x86_64, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Dec 05 09:55:25 np0005546420.localdomain podman[290463]: 2025-12-05 09:55:25.830092376 +0000 UTC m=+0.167710294 container attach 348486eecc44df568311084fce64ca5065bae6e44a571e023767d3bbac417aa6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_keldysh, CEPH_POINT_RELEASE=, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public)
Dec 05 09:55:25 np0005546420.localdomain intelligent_keldysh[290478]: 167 167
Dec 05 09:55:25 np0005546420.localdomain systemd[1]: libpod-348486eecc44df568311084fce64ca5065bae6e44a571e023767d3bbac417aa6.scope: Deactivated successfully.
Dec 05 09:55:25 np0005546420.localdomain podman[290463]: 2025-12-05 09:55:25.835374548 +0000 UTC m=+0.172992516 container died 348486eecc44df568311084fce64ca5065bae6e44a571e023767d3bbac417aa6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_keldysh, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, version=7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.buildah.version=1.41.4, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, distribution-scope=public)
Dec 05 09:55:25 np0005546420.localdomain podman[290483]: 2025-12-05 09:55:25.958330916 +0000 UTC m=+0.109435274 container remove 348486eecc44df568311084fce64ca5065bae6e44a571e023767d3bbac417aa6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_keldysh, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public)
Dec 05 09:55:25 np0005546420.localdomain systemd[1]: libpod-conmon-348486eecc44df568311084fce64ca5065bae6e44a571e023767d3bbac417aa6.scope: Deactivated successfully.
Dec 05 09:55:26 np0005546420.localdomain sudo[290428]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:26 np0005546420.localdomain sudo[290507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:26 np0005546420.localdomain sudo[290507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:26 np0005546420.localdomain sudo[290507]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:26 np0005546420.localdomain sudo[290525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:26 np0005546420.localdomain sudo[290525]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:26 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:26 np0005546420.localdomain ceph-mon[288331]: Removed label mon from host np0005546415.localdomain
Dec 05 09:55:26 np0005546420.localdomain ceph-mon[288331]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:26 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:26 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:26 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.4 (monmap changed)...
Dec 05 09:55:26 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:55:26 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:26 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:55:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-73db8e744adf01e5edcc326c87f470005f6a51df3dbbaa3785895080926241c4-merged.mount: Deactivated successfully.
Dec 05 09:55:26 np0005546420.localdomain podman[290559]: 2025-12-05 09:55:26.855003888 +0000 UTC m=+0.079714371 container create 567da284fd066cd1ee2ca25fb9521ed6eaaab9261a33d01fdf91f1772778db80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_leavitt, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=1763362218, GIT_BRANCH=main, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:55:26 np0005546420.localdomain systemd[1]: Started libpod-conmon-567da284fd066cd1ee2ca25fb9521ed6eaaab9261a33d01fdf91f1772778db80.scope.
Dec 05 09:55:26 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:55:26 np0005546420.localdomain podman[290559]: 2025-12-05 09:55:26.918131267 +0000 UTC m=+0.142841760 container init 567da284fd066cd1ee2ca25fb9521ed6eaaab9261a33d01fdf91f1772778db80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_leavitt, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, build-date=2025-11-26T19:44:28Z, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph)
Dec 05 09:55:26 np0005546420.localdomain podman[290559]: 2025-12-05 09:55:26.824380067 +0000 UTC m=+0.049090580 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:55:26 np0005546420.localdomain podman[290559]: 2025-12-05 09:55:26.929206027 +0000 UTC m=+0.153916520 container start 567da284fd066cd1ee2ca25fb9521ed6eaaab9261a33d01fdf91f1772778db80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_leavitt, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, distribution-scope=public, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, RELEASE=main, vcs-type=git, version=7, CEPH_POINT_RELEASE=)
Dec 05 09:55:26 np0005546420.localdomain podman[290559]: 2025-12-05 09:55:26.929482326 +0000 UTC m=+0.154192849 container attach 567da284fd066cd1ee2ca25fb9521ed6eaaab9261a33d01fdf91f1772778db80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_leavitt, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, build-date=2025-11-26T19:44:28Z)
Dec 05 09:55:26 np0005546420.localdomain great_leavitt[290575]: 167 167
Dec 05 09:55:26 np0005546420.localdomain systemd[1]: libpod-567da284fd066cd1ee2ca25fb9521ed6eaaab9261a33d01fdf91f1772778db80.scope: Deactivated successfully.
Dec 05 09:55:26 np0005546420.localdomain podman[290559]: 2025-12-05 09:55:26.933240301 +0000 UTC m=+0.157950794 container died 567da284fd066cd1ee2ca25fb9521ed6eaaab9261a33d01fdf91f1772778db80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_leavitt, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public)
Dec 05 09:55:27 np0005546420.localdomain podman[290580]: 2025-12-05 09:55:27.025335721 +0000 UTC m=+0.080338259 container remove 567da284fd066cd1ee2ca25fb9521ed6eaaab9261a33d01fdf91f1772778db80 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_leavitt, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, release=1763362218, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, GIT_CLEAN=True)
Dec 05 09:55:27 np0005546420.localdomain systemd[1]: libpod-conmon-567da284fd066cd1ee2ca25fb9521ed6eaaab9261a33d01fdf91f1772778db80.scope: Deactivated successfully.
Dec 05 09:55:27 np0005546420.localdomain sudo[290525]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:27 np0005546420.localdomain sudo[290604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:27 np0005546420.localdomain sudo[290604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:27 np0005546420.localdomain sudo[290604]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:27 np0005546420.localdomain sudo[290622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:27 np0005546420.localdomain sudo[290622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:55:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-9afe1c0dbe5280e2a419ed344f0ccc099ada4dec29e31775d62d20ac96cc0da1-merged.mount: Deactivated successfully.
Dec 05 09:55:27 np0005546420.localdomain podman[290640]: 2025-12-05 09:55:27.779455752 +0000 UTC m=+0.095091892 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 09:55:27 np0005546420.localdomain podman[290640]: 2025-12-05 09:55:27.794218596 +0000 UTC m=+0.109854736 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Dec 05 09:55:27 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:55:27 np0005546420.localdomain podman[290676]: 
Dec 05 09:55:27 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:28 np0005546420.localdomain podman[290676]: 2025-12-05 09:55:28.007834069 +0000 UTC m=+0.087761097 container create 388460ff9bd9f578a373342d9ddb20dd94d6818abed4a07ee04bcaa2dad41d36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_perlman, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:55:28 np0005546420.localdomain systemd[1]: Started libpod-conmon-388460ff9bd9f578a373342d9ddb20dd94d6818abed4a07ee04bcaa2dad41d36.scope.
Dec 05 09:55:28 np0005546420.localdomain ceph-mon[288331]: from='client.34159 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546415.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:55:28 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:28 np0005546420.localdomain ceph-mon[288331]: Removed label mgr from host np0005546415.localdomain
Dec 05 09:55:28 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:28 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:28 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:55:28 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:55:28 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:28 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:55:28 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:55:28 np0005546420.localdomain podman[290676]: 2025-12-05 09:55:27.970788921 +0000 UTC m=+0.050716019 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:55:28 np0005546420.localdomain podman[290676]: 2025-12-05 09:55:28.083264627 +0000 UTC m=+0.163191645 container init 388460ff9bd9f578a373342d9ddb20dd94d6818abed4a07ee04bcaa2dad41d36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_perlman, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, release=1763362218, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, version=7)
Dec 05 09:55:28 np0005546420.localdomain podman[290676]: 2025-12-05 09:55:28.094880864 +0000 UTC m=+0.174807932 container start 388460ff9bd9f578a373342d9ddb20dd94d6818abed4a07ee04bcaa2dad41d36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_perlman, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:55:28 np0005546420.localdomain podman[290676]: 2025-12-05 09:55:28.095460431 +0000 UTC m=+0.175387469 container attach 388460ff9bd9f578a373342d9ddb20dd94d6818abed4a07ee04bcaa2dad41d36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_perlman, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_BRANCH=main, vcs-type=git)
Dec 05 09:55:28 np0005546420.localdomain stupefied_perlman[290691]: 167 167
Dec 05 09:55:28 np0005546420.localdomain systemd[1]: libpod-388460ff9bd9f578a373342d9ddb20dd94d6818abed4a07ee04bcaa2dad41d36.scope: Deactivated successfully.
Dec 05 09:55:28 np0005546420.localdomain podman[290676]: 2025-12-05 09:55:28.101557949 +0000 UTC m=+0.181484977 container died 388460ff9bd9f578a373342d9ddb20dd94d6818abed4a07ee04bcaa2dad41d36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_perlman, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.41.4, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, version=7, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=)
Dec 05 09:55:28 np0005546420.localdomain podman[290696]: 2025-12-05 09:55:28.215261333 +0000 UTC m=+0.097967482 container remove 388460ff9bd9f578a373342d9ddb20dd94d6818abed4a07ee04bcaa2dad41d36 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_perlman, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.expose-services=, release=1763362218, version=7, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:55:28 np0005546420.localdomain systemd[1]: libpod-conmon-388460ff9bd9f578a373342d9ddb20dd94d6818abed4a07ee04bcaa2dad41d36.scope: Deactivated successfully.
Dec 05 09:55:28 np0005546420.localdomain sudo[290622]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:28 np0005546420.localdomain sudo[290712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:28 np0005546420.localdomain sudo[290712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:28 np0005546420.localdomain sudo[290712]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:28 np0005546420.localdomain sudo[290730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:28 np0005546420.localdomain sudo[290730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-05d6f6f170a263446c2b3066439af5e47a4daaef108f4d226cbb39f035ee3109-merged.mount: Deactivated successfully.
Dec 05 09:55:28 np0005546420.localdomain podman[290764]: 
Dec 05 09:55:28 np0005546420.localdomain podman[290764]: 2025-12-05 09:55:28.990194054 +0000 UTC m=+0.084030893 container create d61fadd681942c46d74bb344727aeee77604831ab75cd304bc58091fc4c3939c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, distribution-scope=public, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:55:29 np0005546420.localdomain systemd[1]: Started libpod-conmon-d61fadd681942c46d74bb344727aeee77604831ab75cd304bc58091fc4c3939c.scope.
Dec 05 09:55:29 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:55:29 np0005546420.localdomain podman[290764]: 2025-12-05 09:55:29.051375234 +0000 UTC m=+0.145212033 container init d61fadd681942c46d74bb344727aeee77604831ab75cd304bc58091fc4c3939c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, release=1763362218)
Dec 05 09:55:29 np0005546420.localdomain podman[290764]: 2025-12-05 09:55:28.955772205 +0000 UTC m=+0.049609064 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:55:29 np0005546420.localdomain podman[290764]: 2025-12-05 09:55:29.061308968 +0000 UTC m=+0.155145817 container start d61fadd681942c46d74bb344727aeee77604831ab75cd304bc58091fc4c3939c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Dec 05 09:55:29 np0005546420.localdomain podman[290764]: 2025-12-05 09:55:29.061672819 +0000 UTC m=+0.155509638 container attach d61fadd681942c46d74bb344727aeee77604831ab75cd304bc58091fc4c3939c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vendor=Red Hat, Inc., name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:55:29 np0005546420.localdomain elastic_bassi[290779]: 167 167
Dec 05 09:55:29 np0005546420.localdomain systemd[1]: libpod-d61fadd681942c46d74bb344727aeee77604831ab75cd304bc58091fc4c3939c.scope: Deactivated successfully.
Dec 05 09:55:29 np0005546420.localdomain podman[290764]: 2025-12-05 09:55:29.064823056 +0000 UTC m=+0.158659935 container died d61fadd681942c46d74bb344727aeee77604831ab75cd304bc58091fc4c3939c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, name=rhceph, release=1763362218, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:55:29 np0005546420.localdomain podman[290784]: 2025-12-05 09:55:29.153601254 +0000 UTC m=+0.077147091 container remove d61fadd681942c46d74bb344727aeee77604831ab75cd304bc58091fc4c3939c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, CEPH_POINT_RELEASE=)
Dec 05 09:55:29 np0005546420.localdomain systemd[1]: libpod-conmon-d61fadd681942c46d74bb344727aeee77604831ab75cd304bc58091fc4c3939c.scope: Deactivated successfully.
Dec 05 09:55:29 np0005546420.localdomain sudo[290730]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:55:29 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:29 np0005546420.localdomain sudo[290800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:29 np0005546420.localdomain sudo[290800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:29 np0005546420.localdomain sudo[290800]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:29 np0005546420.localdomain sudo[290818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:29 np0005546420.localdomain sudo[290818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a8542881fd8220703e6e54e5090373a79d0bedc4dc9d673d5e567820cd22ed37-merged.mount: Deactivated successfully.
Dec 05 09:55:29 np0005546420.localdomain podman[290854]: 
Dec 05 09:55:29 np0005546420.localdomain podman[290854]: 2025-12-05 09:55:29.875545047 +0000 UTC m=+0.087111788 container create dd6104c3530e93ce6c77e2a304a42ed7cf552b8f2bf93107ce6703ebe41b4f0c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_cerf, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 05 09:55:29 np0005546420.localdomain systemd[1]: Started libpod-conmon-dd6104c3530e93ce6c77e2a304a42ed7cf552b8f2bf93107ce6703ebe41b4f0c.scope.
Dec 05 09:55:29 np0005546420.localdomain podman[290854]: 2025-12-05 09:55:29.839544881 +0000 UTC m=+0.051111642 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:55:29 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:55:29 np0005546420.localdomain podman[290854]: 2025-12-05 09:55:29.955509223 +0000 UTC m=+0.167075974 container init dd6104c3530e93ce6c77e2a304a42ed7cf552b8f2bf93107ce6703ebe41b4f0c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_cerf, RELEASE=main, release=1763362218, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, architecture=x86_64, version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:55:29 np0005546420.localdomain podman[290854]: 2025-12-05 09:55:29.965996246 +0000 UTC m=+0.177562997 container start dd6104c3530e93ce6c77e2a304a42ed7cf552b8f2bf93107ce6703ebe41b4f0c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_cerf, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:55:29 np0005546420.localdomain podman[290854]: 2025-12-05 09:55:29.96645018 +0000 UTC m=+0.178016921 container attach dd6104c3530e93ce6c77e2a304a42ed7cf552b8f2bf93107ce6703ebe41b4f0c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_cerf, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:55:29 np0005546420.localdomain unruffled_cerf[290869]: 167 167
Dec 05 09:55:29 np0005546420.localdomain systemd[1]: libpod-dd6104c3530e93ce6c77e2a304a42ed7cf552b8f2bf93107ce6703ebe41b4f0c.scope: Deactivated successfully.
Dec 05 09:55:29 np0005546420.localdomain podman[290854]: 2025-12-05 09:55:29.97036038 +0000 UTC m=+0.181927151 container died dd6104c3530e93ce6c77e2a304a42ed7cf552b8f2bf93107ce6703ebe41b4f0c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_cerf, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_CLEAN=True, version=7, RELEASE=main, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:55:30 np0005546420.localdomain podman[290874]: 2025-12-05 09:55:30.052134903 +0000 UTC m=+0.068218208 container remove dd6104c3530e93ce6c77e2a304a42ed7cf552b8f2bf93107ce6703ebe41b4f0c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_cerf, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4)
Dec 05 09:55:30 np0005546420.localdomain systemd[1]: libpod-conmon-dd6104c3530e93ce6c77e2a304a42ed7cf552b8f2bf93107ce6703ebe41b4f0c.scope: Deactivated successfully.
Dec 05 09:55:30 np0005546420.localdomain sudo[290818]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: from='client.34164 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546415.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: Removed label _admin from host np0005546415.localdomain
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546420 (monmap changed)...
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:30 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:55:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-2d46e028383631f5556fc193eaf58113b2e8774f5e2400034f7ae96eb9ca2b71-merged.mount: Deactivated successfully.
Dec 05 09:55:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:32 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.2 (monmap changed)...
Dec 05 09:55:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:55:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:32 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:55:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:33 np0005546420.localdomain ceph-mon[288331]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:33 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:33 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:33 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 09:55:33 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:55:33 np0005546420.localdomain podman[290889]: 2025-12-05 09:55:33.512985632 +0000 UTC m=+0.086114478 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 09:55:33 np0005546420.localdomain podman[290889]: 2025-12-05 09:55:33.544368896 +0000 UTC m=+0.117497702 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 09:55:33 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:55:34 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.5 (monmap changed)...
Dec 05 09:55:34 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 09:55:34 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:34 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:34 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:55:34 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:34 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546421 (monmap changed)...
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:35 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:55:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:55:36 np0005546420.localdomain podman[290907]: 2025-12-05 09:55:36.502469747 +0000 UTC m=+0.079263187 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:55:36 np0005546420.localdomain podman[290907]: 2025-12-05 09:55:36.5403238 +0000 UTC m=+0.117117260 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:55:36 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
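[editor's note] The four lines above are one complete podman healthcheck cycle: a transient systemd unit starts `/usr/bin/podman healthcheck run <container-id>`, podman emits a `health_status` event (here `healthy` for node_exporter), the exec session dies, and the unit deactivates. The same cycle repeats below for multipathd, openstack_network_exporter, podman_exporter, and ovn_controller. A minimal sketch of driving that check from Python, assuming only the documented podman convention that exit code 0 means healthy (the container name is taken from the log):

    import subprocess

    def run_healthcheck(container: str) -> bool:
        """Invoke `podman healthcheck run` for a container.
        podman exits 0 when the configured healthcheck passes."""
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        print("healthy" if run_healthcheck("node_exporter") else "unhealthy")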
Dec 05 09:55:37 np0005546420.localdomain ceph-mon[288331]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:37 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:38 np0005546420.localdomain sudo[290930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:55:38 np0005546420.localdomain sudo[290930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:38 np0005546420.localdomain sudo[290930]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:38 np0005546420.localdomain sudo[290948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:55:38 np0005546420.localdomain sudo[290948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:38 np0005546420.localdomain sudo[290948]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:38 np0005546420.localdomain sudo[290966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:55:38 np0005546420.localdomain sudo[290966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:38 np0005546420.localdomain sudo[290966]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:38 np0005546420.localdomain sudo[290984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:38 np0005546420.localdomain sudo[290984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:38 np0005546420.localdomain sudo[290984]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:38 np0005546420.localdomain sudo[291002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:55:38 np0005546420.localdomain sudo[291002]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:38 np0005546420.localdomain sudo[291002]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:38 np0005546420.localdomain sudo[291036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:55:38 np0005546420.localdomain sudo[291036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:38 np0005546420.localdomain sudo[291036]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:38 np0005546420.localdomain sudo[291054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:55:38 np0005546420.localdomain sudo[291054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:38 np0005546420.localdomain sudo[291054]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:38 np0005546420.localdomain sudo[291072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:55:38 np0005546420.localdomain sudo[291072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:38 np0005546420.localdomain sudo[291072]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:38 np0005546420.localdomain sudo[291090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:55:38 np0005546420.localdomain sudo[291090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:38 np0005546420.localdomain sudo[291090]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:39 np0005546420.localdomain sudo[291108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:55:39 np0005546420.localdomain sudo[291108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:39 np0005546420.localdomain sudo[291108]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:39 np0005546420.localdomain sudo[291126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:55:39 np0005546420.localdomain sudo[291126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:39 np0005546420.localdomain sudo[291126]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: Removing np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: Removing np0005546415.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:39 np0005546420.localdomain sudo[291144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:39 np0005546420.localdomain sudo[291144]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:39 np0005546420.localdomain sudo[291144]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:39 np0005546420.localdomain sudo[291162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:55:39 np0005546420.localdomain sudo[291162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:39 np0005546420.localdomain sudo[291162]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:39 np0005546420.localdomain sudo[291196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:55:39 np0005546420.localdomain sudo[291196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:39 np0005546420.localdomain sudo[291196]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:39 np0005546420.localdomain sudo[291214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:55:39 np0005546420.localdomain sudo[291214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:39 np0005546420.localdomain sudo[291214]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:39 np0005546420.localdomain sudo[291232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:39 np0005546420.localdomain sudo[291232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:39 np0005546420.localdomain sudo[291232]: pam_unix(sudo:session): session closed for user root
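[editor's note] The sudo run above is cephadm distributing the freshly generated minimal ceph.conf: it stages the file as `ceph.conf.new` under a per-fsid tree in /tmp, sets ownership and mode (0:0, 0644), then `mv`s it over both /etc/ceph/ceph.conf and the /var/lib/ceph/<fsid>/config copy, so readers never observe a half-written file. A sketch of the same stage-then-rename idiom, assuming staging and final paths live on the same filesystem (os.replace is only atomic in that case; cephadm's /bin/mv falls back to copying across filesystems):

    import os

    def install_config(content: str, staging: str, final: str) -> None:
        """Stage a config file, fix perms, and atomically move it into place."""
        os.makedirs(os.path.dirname(staging), exist_ok=True)
        with open(staging, "w") as f:      # the touch + write step
            f.write(content)
        os.chmod(staging, 0o644)           # /bin/chmod 644 in the log
        os.chown(staging, 0, 0)            # /bin/chown -R 0:0 (requires root)
        os.makedirs(os.path.dirname(final), exist_ok=True)
        os.replace(staging, final)         # atomic rename, like /bin/mv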
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: Removing np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:55:40 np0005546420.localdomain podman[291250]: 2025-12-05 09:55:40.509393712 +0000 UTC m=+0.087267062 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 09:55:40 np0005546420.localdomain podman[291250]: 2025-12-05 09:55:40.551402823 +0000 UTC m=+0.129276173 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 09:55:40 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:55:41 np0005546420.localdomain ceph-mon[288331]: from='client.34169 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005546415.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:55:41 np0005546420.localdomain ceph-mon[288331]: Added label _no_schedule to host np0005546415.localdomain
Dec 05 09:55:41 np0005546420.localdomain ceph-mon[288331]: Removing daemon crash.np0005546415 from np0005546415.localdomain -- ports []
Dec 05 09:55:41 np0005546420.localdomain ceph-mon[288331]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546415.localdomain
Dec 05 09:55:41 np0005546420.localdomain ceph-mon[288331]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:42 np0005546420.localdomain ceph-mon[288331]: from='client.34179 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005546415.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:55:42 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth rm", "entity": "client.crash.np0005546415.localdomain"} : dispatch
Dec 05 09:55:42 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005546415.localdomain"}]': finished
Dec 05 09:55:42 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: Removing key for client.crash.np0005546415.localdomain
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: Removing daemon mgr.np0005546415.knqtle from np0005546415.localdomain -- ports [9283, 8765]
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain"} : dispatch
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain"}]': finished
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth rm", "entity": "mgr.np0005546415.knqtle"} : dispatch
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005546415.knqtle"}]': finished
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:55:43 np0005546420.localdomain sudo[291269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:55:43 np0005546420.localdomain sudo[291269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:43 np0005546420.localdomain sudo[291269]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:43 np0005546420.localdomain sudo[291287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:55:43 np0005546420.localdomain sudo[291287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:43 np0005546420.localdomain sudo[291287]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:44 np0005546420.localdomain ceph-mon[288331]: from='client.34184 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005546415.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:55:44 np0005546420.localdomain ceph-mon[288331]: Removed host np0005546415.localdomain
Dec 05 09:55:44 np0005546420.localdomain ceph-mon[288331]: Removing key for mgr.np0005546415.knqtle
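[editor's note] The block above traces the standard cephadm host-removal flow for np0005546415.localdomain: `orch host drain` applies the `_no_schedule` label and removes each daemon (crash, mgr) together with its auth key, `orch host ls` confirms the host is empty, and `orch host rm --force` finally drops the host and its `mgr/cephadm/host.*` config-key entry. A hedged driver for the same sequence; the command strings are exactly those dispatched in the log, while the polling loop between drain and removal is an assumption:

    import json, subprocess, time

    def sh(*args: str) -> str:
        return subprocess.run(args, check=True, capture_output=True, text=True).stdout

    def remove_host(host: str) -> None:
        sh("ceph", "orch", "host", "drain", host)         # adds _no_schedule, schedules daemon removal
        while True:                                        # wait until no daemons remain on the host
            daemons = json.loads(sh("ceph", "orch", "ps", host, "--format", "json"))
            if not daemons:
                break
            time.sleep(10)
        sh("ceph", "orch", "host", "rm", host, "--force")  # drops host + config-key entry

    remove_host("np0005546415.localdomain")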
Dec 05 09:55:44 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:44 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:55:44 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:44 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:55:44 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:55:44 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:45 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546416 (monmap changed)...
Dec 05 09:55:45 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546416 on np0005546416.localdomain
Dec 05 09:55:45 np0005546420.localdomain ceph-mon[288331]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:55:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:55:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:46 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546416 (monmap changed)...
Dec 05 09:55:46 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546416 on np0005546416.localdomain
Dec 05 09:55:46 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:46 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:46 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546416.kmqcnq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:55:46 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:55:46 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:55:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:55:46 np0005546420.localdomain podman[291305]: 2025-12-05 09:55:46.511311555 +0000 UTC m=+0.088268322 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9)
Dec 05 09:55:46 np0005546420.localdomain podman[291305]: 2025-12-05 09:55:46.529443363 +0000 UTC m=+0.106400080 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 09:55:46 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:55:46 np0005546420.localdomain podman[291306]: 2025-12-05 09:55:46.482609544 +0000 UTC m=+0.058368055 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:55:46 np0005546420.localdomain podman[291306]: 2025-12-05 09:55:46.617444297 +0000 UTC m=+0.193202798 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:55:46 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:55:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:55:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:55:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:55:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 09:55:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:55:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18212 "" "Go-http-client/1.1"
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546416.kmqcnq (monmap changed)...
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546416.kmqcnq on np0005546416.localdomain
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546418 (monmap changed)...
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 05 09:55:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:55:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:55:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:55:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:55:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:55:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:55:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:55:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:55:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:55:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:55:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
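[editor's note] The exporter errors above mean it could not discover local OVN/OVS daemons: appctl-style callers find a daemon's PID through its `<daemon>.<pid>.ctl` control socket, and with no ovn-northd or local ovsdb-server socket present this is expected noise on a compute node. A small sketch of the lookup being attempted, assuming the conventional runtime directories (the exact paths the exporter searches are not shown in the log):

    import glob

    def find_control_socket(daemon: str,
                            rundirs=("/var/run/ovn", "/var/run/openvswitch")):
        """Look for <daemon>.<pid>.ctl, the unix control socket appctl connects to."""
        for d in rundirs:
            hits = glob.glob(f"{d}/{daemon}.*.ctl")
            if hits:
                return hits[0]
        return None

    print(find_control_socket("ovn-northd"))  # None here -> same "no control socket files" error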
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:55:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:55:50 np0005546420.localdomain podman[291348]: 2025-12-05 09:55:50.516365756 +0000 UTC m=+0.085548920 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:55:50 np0005546420.localdomain podman[291348]: 2025-12-05 09:55:50.559082338 +0000 UTC m=+0.128265512 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 09:55:50 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.704654) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550704734, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 12650, "num_deletes": 773, "total_data_size": 20090625, "memory_usage": 20822488, "flush_reason": "Manual Compaction"}
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550787349, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12116661, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 12655, "table_properties": {"data_size": 12060753, "index_size": 29178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 266096, "raw_average_key_size": 25, "raw_value_size": 11888621, "raw_average_value_size": 1160, "num_data_blocks": 1110, "num_entries": 10246, "num_filter_entries": 10246, "num_deletions": 772, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928477, "oldest_key_time": 1764928477, "file_creation_time": 1764928550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c980799-7b55-4c4e-92d8-beaefbaee73e", "db_session_id": "4WA5JLFCDLFMTDS0OOZ2", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 82814 microseconds, and 29739 cpu microseconds.
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.787461) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12116661 bytes OK
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.787500) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.789555) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.789583) EVENT_LOG_v1 {"time_micros": 1764928550789575, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.789611) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 20008190, prev total WAL file size 20008939, number of live WAL files 2.
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.793711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353036' seq:72057594037927935, type:22 .. '6D6772737461740033373538' seq:0, type:0; will stop at (end)
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1887B)]
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550793869, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12118548, "oldest_snapshot_seqno": -1}
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9477 keys, 12104786 bytes, temperature: kUnknown
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550884181, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12104786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12050874, "index_size": 29107, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23749, "raw_key_size": 252685, "raw_average_key_size": 26, "raw_value_size": 11888679, "raw_average_value_size": 1254, "num_data_blocks": 1108, "num_entries": 9477, "num_filter_entries": 9477, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928477, "oldest_key_time": 0, "file_creation_time": 1764928550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c980799-7b55-4c4e-92d8-beaefbaee73e", "db_session_id": "4WA5JLFCDLFMTDS0OOZ2", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.884513) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12104786 bytes
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.886273) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.0 rd, 133.9 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.6, 0.0 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10251, records dropped: 774 output_compression: NoCompression
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.886302) EVENT_LOG_v1 {"time_micros": 1764928550886290, "job": 4, "event": "compaction_finished", "compaction_time_micros": 90410, "compaction_time_cpu_micros": 38035, "output_level": 6, "num_output_files": 1, "total_output_size": 12104786, "num_input_records": 10251, "num_output_records": 9477, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550888217, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550888277, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 05 09:55:50 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:55:50.793527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
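[editor's note] The rocksdb block above is one manual flush-plus-compaction on the mon store: JOB 3 flushes a ~19 MiB memtable (12650 entries, 773 deletes) to L0 table #14, then JOB 4 compacts the two L0 files (#14 and #8) straight to L6 as table #15, dropping 774 dead records, and deletes the inputs and the old WAL. The logged read-write-amplify(2.0) is simply (bytes read + bytes written) / bytes read, which the event figures reproduce:

    # Figures copied from the compaction_started / compaction_finished events above.
    input_bytes = 12_118_548    # input_data_size (tables 14 + 8)
    output_bytes = 12_104_786   # total_output_size (table 15)

    rw_amplify = (input_bytes + output_bytes) / input_bytes
    write_amplify = output_bytes / input_bytes
    print(f"read-write-amplify {rw_amplify:.1f}, write-amplify {write_amplify:.1f}")
    # -> read-write-amplify 2.0, write-amplify 1.0, matching the summary line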
Dec 05 09:55:51 np0005546420.localdomain ceph-mon[288331]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:51 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:51 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:51 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.0 (monmap changed)...
Dec 05 09:55:51 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:55:51 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:51 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:55:51 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:52 np0005546420.localdomain ceph-mon[288331]: from='client.34212 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:55:52 np0005546420.localdomain ceph-mon[288331]: Saving service mon spec with placement label:mon
Dec 05 09:55:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:52 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.3 (monmap changed)...
Dec 05 09:55:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 09:55:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:53 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4faf20 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 05 09:55:53 np0005546420.localdomain ceph-mon[288331]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:55:53 np0005546420.localdomain ceph-mon[288331]: paxos.3).electionLogic(36) init, last seen epoch 36
Dec 05 09:55:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:55:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:55:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:55:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:55:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: from='client.26811 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005546419"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: Remove daemons mon.np0005546419
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: Safe to remove mon.np0005546419: new quorum should be ['np0005546418', 'np0005546416', 'np0005546421', 'np0005546420'] (from ['np0005546418', 'np0005546416', 'np0005546421', 'np0005546420'])
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: Removing monitor np0005546419 from monmap...
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: Removing daemon mon.np0005546419 from np0005546419.localdomain -- ports []
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420 calling monitor election
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546416"} : dispatch
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546421 calling monitor election
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546416 calling monitor election
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 is new leader, mons np0005546418,np0005546416,np0005546421,np0005546420 in quorum (ranks 0,1,2,3)
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: monmap epoch 8
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: last_changed 2025-12-05T09:55:53.198687+0000
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: min_mon_release 18 (reef)
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: election_strategy: 1
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546416
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005546420
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: osdmap e85: 6 total, 6 up, 6 in
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: mgrmap e19: np0005546418.garyvl(active, since 51s), standbys: np0005546415.knqtle, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:55:54 np0005546420.localdomain ceph-mon[288331]: overall HEALTH_OK
Dec 05 09:55:55 np0005546420.localdomain ceph-mon[288331]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:55 np0005546420.localdomain sudo[291372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:55 np0005546420.localdomain sudo[291372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:55 np0005546420.localdomain sudo[291372]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:55 np0005546420.localdomain sudo[291390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:55 np0005546420.localdomain sudo[291390]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:56 np0005546420.localdomain podman[291425]: 
Dec 05 09:55:56 np0005546420.localdomain podman[291425]: 2025-12-05 09:55:56.271369664 +0000 UTC m=+0.068991910 container create 721985f7005546463b53c6766dccbc8ee626989a4abc2f1fea28cf7f019220c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_yalow, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, io.openshift.expose-services=, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.buildah.version=1.41.4, architecture=x86_64, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7)
Dec 05 09:55:56 np0005546420.localdomain systemd[1]: Started libpod-conmon-721985f7005546463b53c6766dccbc8ee626989a4abc2f1fea28cf7f019220c0.scope.
Dec 05 09:55:56 np0005546420.localdomain podman[291425]: 2025-12-05 09:55:56.236311938 +0000 UTC m=+0.033934244 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:55:56 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:55:56 np0005546420.localdomain podman[291425]: 2025-12-05 09:55:56.352813297 +0000 UTC m=+0.150435533 container init 721985f7005546463b53c6766dccbc8ee626989a4abc2f1fea28cf7f019220c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_yalow, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:55:56 np0005546420.localdomain podman[291425]: 2025-12-05 09:55:56.363811325 +0000 UTC m=+0.161433591 container start 721985f7005546463b53c6766dccbc8ee626989a4abc2f1fea28cf7f019220c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_yalow, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main)
Dec 05 09:55:56 np0005546420.localdomain podman[291425]: 2025-12-05 09:55:56.364109774 +0000 UTC m=+0.161732050 container attach 721985f7005546463b53c6766dccbc8ee626989a4abc2f1fea28cf7f019220c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_yalow, io.openshift.tags=rhceph ceph, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, RELEASE=main, release=1763362218, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:55:56 np0005546420.localdomain infallible_yalow[291440]: 167 167
Dec 05 09:55:56 np0005546420.localdomain systemd[1]: libpod-721985f7005546463b53c6766dccbc8ee626989a4abc2f1fea28cf7f019220c0.scope: Deactivated successfully.
Dec 05 09:55:56 np0005546420.localdomain podman[291425]: 2025-12-05 09:55:56.369117188 +0000 UTC m=+0.166739434 container died 721985f7005546463b53c6766dccbc8ee626989a4abc2f1fea28cf7f019220c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_yalow, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, release=1763362218, build-date=2025-11-26T19:44:28Z)
Dec 05 09:55:56 np0005546420.localdomain podman[291445]: 2025-12-05 09:55:56.467912223 +0000 UTC m=+0.089404638 container remove 721985f7005546463b53c6766dccbc8ee626989a4abc2f1fea28cf7f019220c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_yalow, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z)
Dec 05 09:55:56 np0005546420.localdomain systemd[1]: libpod-conmon-721985f7005546463b53c6766dccbc8ee626989a4abc2f1fea28cf7f019220c0.scope: Deactivated successfully.
Dec 05 09:55:56 np0005546420.localdomain sudo[291390]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:56 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:55:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:55:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:56 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:55:56 np0005546420.localdomain ceph-mon[288331]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:56 np0005546420.localdomain sudo[291461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:56 np0005546420.localdomain sudo[291461]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:56 np0005546420.localdomain sudo[291461]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:56 np0005546420.localdomain sudo[291479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:56 np0005546420.localdomain sudo[291479]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:57 np0005546420.localdomain systemd[1]: tmp-crun.z4Eokw.mount: Deactivated successfully.
Dec 05 09:55:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b1e09711d3f7f44c977a49631d0d80d5d9a3bd152d9d278c923530084397d74e-merged.mount: Deactivated successfully.
Dec 05 09:55:57 np0005546420.localdomain podman[291513]: 
Dec 05 09:55:57 np0005546420.localdomain podman[291513]: 2025-12-05 09:55:57.406846333 +0000 UTC m=+0.080939917 container create b62dbe9cacf048046f29bfb793bcecd17f04fb91c6797b15dd40b378a3c7f113 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_haibt, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4)
Dec 05 09:55:57 np0005546420.localdomain systemd[1]: Started libpod-conmon-b62dbe9cacf048046f29bfb793bcecd17f04fb91c6797b15dd40b378a3c7f113.scope.
Dec 05 09:55:57 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:55:57 np0005546420.localdomain podman[291513]: 2025-12-05 09:55:57.373780788 +0000 UTC m=+0.047874402 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:55:57 np0005546420.localdomain podman[291513]: 2025-12-05 09:55:57.479211727 +0000 UTC m=+0.153305311 container init b62dbe9cacf048046f29bfb793bcecd17f04fb91c6797b15dd40b378a3c7f113 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_haibt, build-date=2025-11-26T19:44:28Z, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:55:57 np0005546420.localdomain podman[291513]: 2025-12-05 09:55:57.492382902 +0000 UTC m=+0.166476486 container start b62dbe9cacf048046f29bfb793bcecd17f04fb91c6797b15dd40b378a3c7f113 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_haibt, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, release=1763362218, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 09:55:57 np0005546420.localdomain podman[291513]: 2025-12-05 09:55:57.492772033 +0000 UTC m=+0.166865667 container attach b62dbe9cacf048046f29bfb793bcecd17f04fb91c6797b15dd40b378a3c7f113 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_haibt, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main)
Dec 05 09:55:57 np0005546420.localdomain systemd[1]: libpod-b62dbe9cacf048046f29bfb793bcecd17f04fb91c6797b15dd40b378a3c7f113.scope: Deactivated successfully.
Dec 05 09:55:57 np0005546420.localdomain trusting_haibt[291529]: 167 167
Dec 05 09:55:57 np0005546420.localdomain podman[291513]: 2025-12-05 09:55:57.496498229 +0000 UTC m=+0.170591843 container died b62dbe9cacf048046f29bfb793bcecd17f04fb91c6797b15dd40b378a3c7f113 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_haibt, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, version=7, description=Red Hat Ceph Storage 7, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc.)
Dec 05 09:55:57 np0005546420.localdomain podman[291534]: 2025-12-05 09:55:57.594452867 +0000 UTC m=+0.088645744 container remove b62dbe9cacf048046f29bfb793bcecd17f04fb91c6797b15dd40b378a3c7f113 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_haibt, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., release=1763362218, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2025-11-26T19:44:28Z, vcs-type=git, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:55:57 np0005546420.localdomain systemd[1]: libpod-conmon-b62dbe9cacf048046f29bfb793bcecd17f04fb91c6797b15dd40b378a3c7f113.scope: Deactivated successfully.
Dec 05 09:55:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:57 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.1 (monmap changed)...
Dec 05 09:55:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 09:55:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:57 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:55:57 np0005546420.localdomain sudo[291479]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:57 np0005546420.localdomain sudo[291558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:57 np0005546420.localdomain sudo[291558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:55:57 np0005546420.localdomain sudo[291558]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:58 np0005546420.localdomain sudo[291577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:58 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:55:58 np0005546420.localdomain sudo[291577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:58 np0005546420.localdomain podman[291576]: 2025-12-05 09:55:58.052334087 +0000 UTC m=+0.112169818 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 09:55:58 np0005546420.localdomain podman[291576]: 2025-12-05 09:55:58.091514721 +0000 UTC m=+0.151350472 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 05 09:55:58 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:55:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-63ecd408a409867dc7e878f3f9516afb5fc83e3583c7c112b42e762077247329-merged.mount: Deactivated successfully.
Dec 05 09:55:58 np0005546420.localdomain podman[291631]: 
Dec 05 09:55:58 np0005546420.localdomain podman[291631]: 2025-12-05 09:55:58.555118935 +0000 UTC m=+0.083663651 container create 7842d0c50f0bbe3a09bee7f95899bfa790b4b551f81522ddad4ea1cfd621bdb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_bassi, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, release=1763362218, version=7, name=rhceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, com.redhat.component=rhceph-container)
Dec 05 09:55:58 np0005546420.localdomain systemd[1]: Started libpod-conmon-7842d0c50f0bbe3a09bee7f95899bfa790b4b551f81522ddad4ea1cfd621bdb2.scope.
Dec 05 09:55:58 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:55:58 np0005546420.localdomain podman[291631]: 2025-12-05 09:55:58.518093637 +0000 UTC m=+0.046638343 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:55:58 np0005546420.localdomain podman[291631]: 2025-12-05 09:55:58.636032141 +0000 UTC m=+0.164576817 container init 7842d0c50f0bbe3a09bee7f95899bfa790b4b551f81522ddad4ea1cfd621bdb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_bassi, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:55:58 np0005546420.localdomain podman[291631]: 2025-12-05 09:55:58.647259186 +0000 UTC m=+0.175803852 container start 7842d0c50f0bbe3a09bee7f95899bfa790b4b551f81522ddad4ea1cfd621bdb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_bassi, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, release=1763362218, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph)
Dec 05 09:55:58 np0005546420.localdomain podman[291631]: 2025-12-05 09:55:58.647576476 +0000 UTC m=+0.176121192 container attach 7842d0c50f0bbe3a09bee7f95899bfa790b4b551f81522ddad4ea1cfd621bdb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_bassi, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, distribution-scope=public, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:55:58 np0005546420.localdomain eloquent_bassi[291646]: 167 167
Dec 05 09:55:58 np0005546420.localdomain systemd[1]: libpod-7842d0c50f0bbe3a09bee7f95899bfa790b4b551f81522ddad4ea1cfd621bdb2.scope: Deactivated successfully.
Dec 05 09:55:58 np0005546420.localdomain podman[291631]: 2025-12-05 09:55:58.652129336 +0000 UTC m=+0.180674082 container died 7842d0c50f0bbe3a09bee7f95899bfa790b4b551f81522ddad4ea1cfd621bdb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_bassi, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:55:58 np0005546420.localdomain podman[291652]: 2025-12-05 09:55:58.749472817 +0000 UTC m=+0.087944554 container remove 7842d0c50f0bbe3a09bee7f95899bfa790b4b551f81522ddad4ea1cfd621bdb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_bassi, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1763362218, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:55:58 np0005546420.localdomain systemd[1]: libpod-conmon-7842d0c50f0bbe3a09bee7f95899bfa790b4b551f81522ddad4ea1cfd621bdb2.scope: Deactivated successfully.
Dec 05 09:55:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:55:58 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.4 (monmap changed)...
Dec 05 09:55:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:55:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:55:58 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:55:58 np0005546420.localdomain ceph-mon[288331]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:55:58 np0005546420.localdomain sudo[291577]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:59 np0005546420.localdomain sudo[291676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:55:59 np0005546420.localdomain sudo[291676]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:59 np0005546420.localdomain sudo[291676]: pam_unix(sudo:session): session closed for user root
Dec 05 09:55:59 np0005546420.localdomain sudo[291694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:55:59 np0005546420.localdomain sudo[291694]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:55:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-69e918f8b4c8834baca198aed9fa845c88122386f6da49188231ae501e7d3d35-merged.mount: Deactivated successfully.
Dec 05 09:55:59 np0005546420.localdomain podman[291728]: 
Dec 05 09:55:59 np0005546420.localdomain podman[291728]: 2025-12-05 09:55:59.686471607 +0000 UTC m=+0.081755113 container create 09f7b49561cab3f5cf952dddded88e86c54cce3d35dded3a561ef54249ad319d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hellman, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, build-date=2025-11-26T19:44:28Z, distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 09:55:59 np0005546420.localdomain systemd[1]: Started libpod-conmon-09f7b49561cab3f5cf952dddded88e86c54cce3d35dded3a561ef54249ad319d.scope.
Dec 05 09:55:59 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:55:59 np0005546420.localdomain podman[291728]: 2025-12-05 09:55:59.650834133 +0000 UTC m=+0.046117699 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:55:59 np0005546420.localdomain podman[291728]: 2025-12-05 09:55:59.761935166 +0000 UTC m=+0.157218672 container init 09f7b49561cab3f5cf952dddded88e86c54cce3d35dded3a561ef54249ad319d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hellman, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z)
Dec 05 09:55:59 np0005546420.localdomain podman[291728]: 2025-12-05 09:55:59.77213709 +0000 UTC m=+0.167420646 container start 09f7b49561cab3f5cf952dddded88e86c54cce3d35dded3a561ef54249ad319d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hellman, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, RELEASE=main, vcs-type=git)
Dec 05 09:55:59 np0005546420.localdomain podman[291728]: 2025-12-05 09:55:59.772436619 +0000 UTC m=+0.167720175 container attach 09f7b49561cab3f5cf952dddded88e86c54cce3d35dded3a561ef54249ad319d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hellman, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.buildah.version=1.41.4, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph)
Dec 05 09:55:59 np0005546420.localdomain determined_hellman[291743]: 167 167
Dec 05 09:55:59 np0005546420.localdomain systemd[1]: libpod-09f7b49561cab3f5cf952dddded88e86c54cce3d35dded3a561ef54249ad319d.scope: Deactivated successfully.
Dec 05 09:55:59 np0005546420.localdomain podman[291728]: 2025-12-05 09:55:59.775803362 +0000 UTC m=+0.171086898 container died 09f7b49561cab3f5cf952dddded88e86c54cce3d35dded3a561ef54249ad319d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hellman, GIT_BRANCH=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=)
Dec 05 09:55:59 np0005546420.localdomain podman[291748]: 2025-12-05 09:55:59.879925281 +0000 UTC m=+0.090199282 container remove 09f7b49561cab3f5cf952dddded88e86c54cce3d35dded3a561ef54249ad319d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_hellman, io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 05 09:55:59 np0005546420.localdomain systemd[1]: libpod-conmon-09f7b49561cab3f5cf952dddded88e86c54cce3d35dded3a561ef54249ad319d.scope: Deactivated successfully.
Dec 05 09:55:59 np0005546420.localdomain sudo[291694]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:00 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:56:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:56:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:00 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:56:00 np0005546420.localdomain sudo[291764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:56:00 np0005546420.localdomain sudo[291764]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:00 np0005546420.localdomain sudo[291764]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:00 np0005546420.localdomain sudo[291782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:56:00 np0005546420.localdomain sudo[291782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b6656ebb804ead73a28ce2795b6c6acbcba31da0c27ce0a4cba94c7b92d224cc-merged.mount: Deactivated successfully.
Dec 05 09:56:00 np0005546420.localdomain podman[291816]: 
Dec 05 09:56:00 np0005546420.localdomain podman[291816]: 2025-12-05 09:56:00.701356101 +0000 UTC m=+0.081731283 container create 62923adf2386e32677dc0c6b0bbd8cb4749b60f77e3362510a309ee0c271f208 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_tesla, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Dec 05 09:56:00 np0005546420.localdomain systemd[1]: Started libpod-conmon-62923adf2386e32677dc0c6b0bbd8cb4749b60f77e3362510a309ee0c271f208.scope.
Dec 05 09:56:00 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:56:00 np0005546420.localdomain podman[291816]: 2025-12-05 09:56:00.668679997 +0000 UTC m=+0.049055209 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:56:00 np0005546420.localdomain podman[291816]: 2025-12-05 09:56:00.772396044 +0000 UTC m=+0.152771236 container init 62923adf2386e32677dc0c6b0bbd8cb4749b60f77e3362510a309ee0c271f208 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_tesla, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=1763362218, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:56:00 np0005546420.localdomain podman[291816]: 2025-12-05 09:56:00.783880666 +0000 UTC m=+0.164255848 container start 62923adf2386e32677dc0c6b0bbd8cb4749b60f77e3362510a309ee0c271f208 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_tesla, architecture=x86_64, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1763362218, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=rhceph, build-date=2025-11-26T19:44:28Z)
Dec 05 09:56:00 np0005546420.localdomain podman[291816]: 2025-12-05 09:56:00.784204556 +0000 UTC m=+0.164579788 container attach 62923adf2386e32677dc0c6b0bbd8cb4749b60f77e3362510a309ee0c271f208 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_tesla, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., release=1763362218, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, version=7, RELEASE=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7)
Dec 05 09:56:00 np0005546420.localdomain quirky_tesla[291831]: 167 167
Dec 05 09:56:00 np0005546420.localdomain systemd[1]: libpod-62923adf2386e32677dc0c6b0bbd8cb4749b60f77e3362510a309ee0c271f208.scope: Deactivated successfully.
Dec 05 09:56:00 np0005546420.localdomain podman[291816]: 2025-12-05 09:56:00.788465658 +0000 UTC m=+0.168840850 container died 62923adf2386e32677dc0c6b0bbd8cb4749b60f77e3362510a309ee0c271f208 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_tesla, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:56:00 np0005546420.localdomain podman[291836]: 2025-12-05 09:56:00.897390895 +0000 UTC m=+0.094208036 container remove 62923adf2386e32677dc0c6b0bbd8cb4749b60f77e3362510a309ee0c271f208 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_tesla, ceph=True, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:56:00 np0005546420.localdomain systemd[1]: libpod-conmon-62923adf2386e32677dc0c6b0bbd8cb4749b60f77e3362510a309ee0c271f208.scope: Deactivated successfully.
Dec 05 09:56:00 np0005546420.localdomain sudo[291782]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:01 np0005546420.localdomain ceph-mon[288331]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:01 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:56:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:56:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:56:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:01 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:56:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:01 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-46049acb32850c63250c1adad71639d9c32f38fa7cf78958bad0aee40e651fb3-merged.mount: Deactivated successfully.
Dec 05 09:56:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:02 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:56:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:56:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:02 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:56:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:56:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:03 np0005546420.localdomain ceph-mon[288331]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:03 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.2 (monmap changed)...
Dec 05 09:56:03 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:56:03 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:03.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:56:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:03.874 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:56:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:03.874 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:56:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:03.888 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:56:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:56:04.116 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:56:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:56:04.117 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:56:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:56:04.117 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:56:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:04 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.5 (monmap changed)...
Dec 05 09:56:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 09:56:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:04 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 09:56:04 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.32:0/3185751951' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 09:56:04 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.32:0/3185751951' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 09:56:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:56:04 np0005546420.localdomain systemd[1]: tmp-crun.CCW0EM.mount: Deactivated successfully.
Dec 05 09:56:04 np0005546420.localdomain podman[291852]: 2025-12-05 09:56:04.51117755 +0000 UTC m=+0.085913890 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:56:04 np0005546420.localdomain podman[291852]: 2025-12-05 09:56:04.52225431 +0000 UTC m=+0.096990570 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 09:56:04 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:56:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:04.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:56:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:04.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:56:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:04.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.193256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928565193327, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 781, "num_deletes": 251, "total_data_size": 1125329, "memory_usage": 1145304, "flush_reason": "Manual Compaction"}
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928565203454, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 652338, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12660, "largest_seqno": 13436, "table_properties": {"data_size": 648391, "index_size": 1610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10658, "raw_average_key_size": 21, "raw_value_size": 639925, "raw_average_value_size": 1305, "num_data_blocks": 68, "num_entries": 490, "num_filter_entries": 490, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928550, "oldest_key_time": 1764928550, "file_creation_time": 1764928565, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c980799-7b55-4c4e-92d8-beaefbaee73e", "db_session_id": "4WA5JLFCDLFMTDS0OOZ2", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 10243 microseconds, and 3992 cpu microseconds.
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.203503) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 652338 bytes OK
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.203532) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.205607) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.205631) EVENT_LOG_v1 {"time_micros": 1764928565205624, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.205654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1120919, prev total WAL file size 1120919, number of live WAL files 2.
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.206528) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end)
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(637KB)], [15(11MB)]
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928565206657, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12757124, "oldest_snapshot_seqno": -1}
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 9440 keys, 11584771 bytes, temperature: kUnknown
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928565292293, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 11584771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11532219, "index_size": 27875, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23621, "raw_key_size": 252896, "raw_average_key_size": 26, "raw_value_size": 11371746, "raw_average_value_size": 1204, "num_data_blocks": 1054, "num_entries": 9440, "num_filter_entries": 9440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928477, "oldest_key_time": 0, "file_creation_time": 1764928565, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c980799-7b55-4c4e-92d8-beaefbaee73e", "db_session_id": "4WA5JLFCDLFMTDS0OOZ2", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.292712) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11584771 bytes
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.295096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.8 rd, 135.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.5 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(37.3) write-amplify(17.8) OK, records in: 9967, records dropped: 527 output_compression: NoCompression
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.295140) EVENT_LOG_v1 {"time_micros": 1764928565295121, "job": 6, "event": "compaction_finished", "compaction_time_micros": 85755, "compaction_time_cpu_micros": 37291, "output_level": 6, "num_output_files": 1, "total_output_size": 11584771, "num_input_records": 9967, "num_output_records": 9440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928565295491, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928565297771, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.206361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.297843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.297850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.297853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.297856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:56:05 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:56:05.297859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:56:06 np0005546420.localdomain sudo[291871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:56:06 np0005546420.localdomain sudo[291871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:06 np0005546420.localdomain sudo[291871]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:06 np0005546420.localdomain sudo[291889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:56:06 np0005546420.localdomain sudo[291889]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:06 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 05 09:56:06 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 09:56:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:06 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.106:0/3471975609' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:56:06 np0005546420.localdomain sudo[291889]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:06.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:56:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:06.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:56:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:06.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:56:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:06.893 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:56:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:06.893 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:56:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:06.894 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:56:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:06.894 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:56:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:06.894 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:56:07 np0005546420.localdomain ceph-mon[288331]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:56:07 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.106:0/2622762546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:56:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:07.344 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:56:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:56:07 np0005546420.localdomain systemd[1]: tmp-crun.iI1kMc.mount: Deactivated successfully.
Dec 05 09:56:07 np0005546420.localdomain podman[291960]: 2025-12-05 09:56:07.521458854 +0000 UTC m=+0.088068596 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:56:07 np0005546420.localdomain podman[291960]: 2025-12-05 09:56:07.559930756 +0000 UTC m=+0.126540528 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:56:07 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:56:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:07.614 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:56:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:07.617 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12367MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:56:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:07.617 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:56:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:07.618 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:56:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:07.679 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:56:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:07.680 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:56:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:07.700 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:56:08 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:08 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 09:56:08 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1332154044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:56:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:08.172 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:56:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:08.180 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:56:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:08.197 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:56:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:08.200 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:56:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:08.200 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:56:08 np0005546420.localdomain ceph-mon[288331]: from='client.34175 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005546419.localdomain:172.18.0.103", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:56:08 np0005546420.localdomain ceph-mon[288331]: Deploying daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:56:08 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.107:0/1216353305' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:56:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:08 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.107:0/1332154044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:56:08 np0005546420.localdomain sudo[292005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:56:08 np0005546420.localdomain sudo[292005]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:08 np0005546420.localdomain sudo[292005]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:08 np0005546420.localdomain sudo[292023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:56:08 np0005546420.localdomain sudo[292023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:08 np0005546420.localdomain sudo[292023]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:08 np0005546420.localdomain sudo[292041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:56:08 np0005546420.localdomain sudo[292041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:08 np0005546420.localdomain sudo[292041]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:08 np0005546420.localdomain sudo[292059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:56:08 np0005546420.localdomain sudo[292059]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:08 np0005546420.localdomain sudo[292059]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 np0005546420.localdomain sudo[292077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:56:09 np0005546420.localdomain sudo[292077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:09 np0005546420.localdomain sudo[292077]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 np0005546420.localdomain sudo[292111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:56:09 np0005546420.localdomain sudo[292111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:09 np0005546420.localdomain sudo[292111]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:09.195 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:56:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:09.197 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:56:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:56:09.197 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:56:09 np0005546420.localdomain sudo[292129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:56:09 np0005546420.localdomain sudo[292129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:09 np0005546420.localdomain sudo[292129]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 np0005546420.localdomain ceph-mon[288331]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:56:09 np0005546420.localdomain sudo[292147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:56:09 np0005546420.localdomain sudo[292147]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:09 np0005546420.localdomain sudo[292147]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 np0005546420.localdomain sudo[292165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:56:09 np0005546420.localdomain sudo[292165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:09 np0005546420.localdomain sudo[292165]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 np0005546420.localdomain sudo[292183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:56:09 np0005546420.localdomain sudo[292183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:09 np0005546420.localdomain sudo[292183]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 np0005546420.localdomain sudo[292201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:56:09 np0005546420.localdomain sudo[292201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:09 np0005546420.localdomain sudo[292201]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 np0005546420.localdomain sudo[292219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:56:09 np0005546420.localdomain sudo[292219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:09 np0005546420.localdomain sudo[292219]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 np0005546420.localdomain sudo[292237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:56:09 np0005546420.localdomain sudo[292237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:09 np0005546420.localdomain sudo[292237]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:09 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 05 09:56:09 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 05 09:56:09 np0005546420.localdomain sudo[292271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:56:09 np0005546420.localdomain sudo[292271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:09 np0005546420.localdomain sudo[292271]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:10 np0005546420.localdomain sudo[292289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:56:10 np0005546420.localdomain sudo[292289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:10 np0005546420.localdomain sudo[292289]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:10 np0005546420.localdomain sudo[292307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:56:10 np0005546420.localdomain sudo[292307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:10 np0005546420.localdomain sudo[292307]: pam_unix(sudo:session): session closed for user root
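
The sudo trail above is cephadm's stage-then-install pattern: the new config is assembled as ceph.conf.new under /tmp/cephadm-<fsid>/, chowned, chmodded to 644, and only then moved onto the live path (first /etc/ceph/ceph.conf, then /var/lib/ceph/<fsid>/config/ceph.conf), so readers never observe a half-written file. A minimal sketch of the same pattern in Python (hypothetical, not cephadm's actual code; it creates the temp file next to the destination so the final rename is atomic, whereas the log stages under /tmp and relies on /bin/mv):

    import os
    import tempfile

    def install_config(content: str, dest: str = "/etc/ceph/ceph.conf") -> None:
        # Stage in the destination directory so os.replace() is an atomic rename.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest), suffix=".new")
        try:
            with os.fdopen(fd, "w") as f:
                f.write(content)
            os.chmod(tmp, 0o644)   # mirrors the 'chmod 644 ... ceph.conf.new' above
            os.replace(tmp, dest)  # mirrors the final 'mv ... ceph.conf'
        except BaseException:
            os.unlink(tmp)
            raise
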
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
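
The "Updating <host>:/etc/ceph/ceph.conf" messages are cephadm distributing the output of the `config generate-minimal-conf` dispatches seen above to every managed host. The minimal conf carries just enough to reach the monitors; given the fsid and monitor addresses recorded later in this log, it would look roughly like this (illustrative, reconstructed from values in this log rather than captured output):

    [global]
            fsid = 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
            mon_host = [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0]
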
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.108:0/593155805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.108:0/2203377590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 05 09:56:10 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4fb600 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: paxos.3).electionLogic(38) init, last seen epoch 38
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:56:10 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:56:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:56:11 np0005546420.localdomain podman[292325]: 2025-12-05 09:56:11.509285314 +0000 UTC m=+0.084181128 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 09:56:11 np0005546420.localdomain podman[292325]: 2025-12-05 09:56:11.546695233 +0000 UTC m=+0.121591007 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 05 09:56:11 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
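
Every container healthcheck in this log follows the same three-step lifecycle: systemd starts a transient `podman healthcheck run <container-id>` unit, podman emits a container health_status event (healthy here) followed by exec_died for the healthcheck process, and the unit deactivates. A small hypothetical parser for these events (the field layout is assumed from the samples in this log, not a stable podman API):

    import re

    # id, image and name come first; health_status appears somewhere later
    # in the key=value list, so match it lazily.
    HEALTH_RE = re.compile(
        r"container health_status ([0-9a-f]{64}) "
        r"\(image=([^,]+), name=([^,]+),.*?health_status=(\w+)"
    )

    def parse_health(line: str):
        m = HEALTH_RE.search(line)
        if m is None:
            return None
        cid, image, name, status = m.groups()
        return {"id": cid, "image": image, "name": name, "status": status}
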
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:56:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:56:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
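
This burst is a single ceilometer polling cycle: for every configured pollster the manager runs resource discovery first, and with no instances on this compute node each libvirt-backed meter discovers nothing and is skipped. The control flow is roughly as follows (a sketch of the logic implied by the DEBUG lines, not ceilometer's actual implementation):

    def poll_and_notify(pollsters, discover, publish=print):
        for pollster in pollsters:
            resources = discover(pollster)
            if not resources:
                # Corresponds to the 'Skip pollster <name>, no resources
                # found this cycle' DEBUG lines above.
                continue
            for sample in pollster.get_samples(resources):
                publish(sample)
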
Dec 05 09:56:15 np0005546420.localdomain sudo[292345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:56:15 np0005546420.localdomain sudo[292345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:15 np0005546420.localdomain sudo[292345]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: paxos.3).electionLogic(39) init, last seen epoch 39, mid-election, bumping
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546416"} : dispatch
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546416 calling monitor election
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546421 calling monitor election
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 is new leader, mons np0005546418,np0005546416,np0005546421 in quorum (ranks 0,1,2)
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: monmap epoch 9
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: last_changed 2025-12-05T09:56:10.401648+0000
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: min_mon_release 18 (reef)
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: election_strategy: 1
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546416
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005546420
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: 4: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: osdmap e85: 6 total, 6 up, 6 in
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mgrmap e19: np0005546418.garyvl(active, since 73s), standbys: np0005546415.knqtle, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: Health check failed: 2/5 mons down, quorum np0005546418,np0005546416,np0005546421 (MON_DOWN)
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: Health detail: HEALTH_WARN 2/5 mons down, quorum np0005546418,np0005546416,np0005546421
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: [WRN] MON_DOWN: 2/5 mons down, quorum np0005546418,np0005546416,np0005546421
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]:     mon.np0005546420 (rank 3) addr [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] is down (out of quorum)
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]:     mon.np0005546419 (rank 4) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
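
MON_DOWN here is a warning, not an outage: Ceph monitors stay quorate as long as a strict majority of monmap ranks are up, so 3 of 5 suffices while np0005546420 and np0005546419 finish their elections (the check is cleared seconds later). The rule itself:

    # Quorum rule: more than half of the monitors in the monmap must be up.
    def quorate(total_mons: int, up_mons: int) -> bool:
        return up_mons > total_mons // 2

    assert quorate(5, 3)      # the state behind the MON_DOWN warning above
    assert not quorate(5, 2)  # one more loss would stall the cluster
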
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420 calling monitor election
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546419 calling monitor election
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546416 calling monitor election
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546421 calling monitor election
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 is new leader, mons np0005546418,np0005546416,np0005546421,np0005546420,np0005546419 in quorum (ranks 0,1,2,3,4)
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: monmap epoch 9
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: last_changed 2025-12-05T09:56:10.401648+0000
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: min_mon_release 18 (reef)
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: election_strategy: 1
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546416
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005546420
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: 4: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: osdmap e85: 6 total, 6 up, 6 in
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: mgrmap e19: np0005546418.garyvl(active, since 74s), standbys: np0005546415.knqtle, np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: Health check cleared: MON_DOWN (was: 2/5 mons down, quorum np0005546418,np0005546416,np0005546421)
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: Cluster is now healthy
Dec 05 09:56:16 np0005546420.localdomain ceph-mon[288331]: overall HEALTH_OK
Dec 05 09:56:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:56:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:56:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:56:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 09:56:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:56:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18200 "" "Go-http-client/1.1"
Dec 05 09:56:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:56:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:56:17 np0005546420.localdomain systemd[1]: tmp-crun.Wbrwvm.mount: Deactivated successfully.
Dec 05 09:56:17 np0005546420.localdomain podman[292363]: 2025-12-05 09:56:17.51713599 +0000 UTC m=+0.093344479 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:56:17 np0005546420.localdomain systemd[1]: tmp-crun.6EZVJY.mount: Deactivated successfully.
Dec 05 09:56:17 np0005546420.localdomain podman[292364]: 2025-12-05 09:56:17.580525598 +0000 UTC m=+0.152857318 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:56:17 np0005546420.localdomain podman[292363]: 2025-12-05 09:56:17.586562693 +0000 UTC m=+0.162771172 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public)
Dec 05 09:56:17 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:56:17 np0005546420.localdomain podman[292364]: 2025-12-05 09:56:17.640305284 +0000 UTC m=+0.212636964 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:56:17 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546416.kmqcnq (monmap changed)...
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546416.kmqcnq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546416.kmqcnq on np0005546416.localdomain
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:56:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:18 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:56:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:56:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:56:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:56:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:56:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:56:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:56:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:56:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:56:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
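
These exporter errors are expected on a compute node: openstack_network_exporter probes appctl-style control sockets for ovn-northd and ovsdb-server, but only ovn-controller and the local vswitchd run here, so those sockets do not exist. A quick way to confirm (assuming the usual <rundir>/<daemon>.<pid>.ctl socket naming; the path below is an assumption, not taken from this log):

    import glob

    def has_control_socket(run_dir: str, daemon: str) -> bool:
        # OVS/OVN daemons expose control sockets named <daemon>.<pid>.ctl.
        return bool(glob.glob(f"{run_dir}/{daemon}.*.ctl"))

    # Typically False on a compute node, matching the errors above:
    print(has_control_socket("/var/run/ovn", "ovn-northd"))
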
Dec 05 09:56:18 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 05 09:56:18 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 05 09:56:18 np0005546420.localdomain ceph-mon[288331]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:18 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:19 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:19 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 05 09:56:19 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:56:19 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:19 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:56:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:56:21 np0005546420.localdomain podman[292406]: 2025-12-05 09:56:21.513730199 +0000 UTC m=+0.087346585 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:56:21 np0005546420.localdomain podman[292406]: 2025-12-05 09:56:21.599476053 +0000 UTC m=+0.173092429 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 09:56:21 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:56:21 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.0 (monmap changed)...
Dec 05 09:56:21 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:56:21 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.200:0/2377674985' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 05 09:56:21 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:21 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:21 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 09:56:21 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.3 (monmap changed)...
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='client.34237 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: Reconfig service osd.default_drive_group
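
The client.admin dispatch on the line above is the audit trail of an operator asking cephadm to reconfigure the whole OSD service spec, i.e.:

    ceph orch reconfig osd.default_drive_group

which marks every daemon belonging to that service for config regeneration on its host.
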
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:56:24 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e86 e86: 6 total, 6 up, 6 in
Dec 05 09:56:24 np0005546420.localdomain sshd[289131]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:56:24 np0005546420.localdomain systemd-logind[762]: Session 63 logged out. Waiting for processes to exit.
Dec 05 09:56:24 np0005546420.localdomain systemd[1]: session-63.scope: Deactivated successfully.
Dec 05 09:56:24 np0005546420.localdomain systemd[1]: session-63.scope: Consumed 19.217s CPU time.
Dec 05 09:56:24 np0005546420.localdomain systemd-logind[762]: Removed session 63.
Dec 05 09:56:25 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:56:25 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:56:25 np0005546420.localdomain ceph-mon[288331]: pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:25 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.200:0/431698000' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 05 09:56:25 np0005546420.localdomain ceph-mon[288331]: Activating manager daemon np0005546415.knqtle
Dec 05 09:56:25 np0005546420.localdomain ceph-mon[288331]: osdmap e86: 6 total, 6 up, 6 in
Dec 05 09:56:25 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.200:0/431698000' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 05 09:56:25 np0005546420.localdomain ceph-mon[288331]: mgrmap e20: np0005546415.knqtle(active, starting, since 0.0778726s), standbys: np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea
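
The "mgr fail" dispatch from client.admin is an operator forcing a manager failover:

    ceph mgr fail

With no daemon named, the monitors drop the current active mgr (np0005546418.garyvl) and promote the first standby, np0005546415.knqtle, publishing mgrmap e20; the failed daemon then restarts and rejoins as a standby in e21 a few seconds later.
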
Dec 05 09:56:28 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:56:28 np0005546420.localdomain systemd[1]: tmp-crun.Xof4uU.mount: Deactivated successfully.
Dec 05 09:56:28 np0005546420.localdomain podman[292429]: 2025-12-05 09:56:28.51859328 +0000 UTC m=+0.091708688 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 05 09:56:28 np0005546420.localdomain podman[292429]: 2025-12-05 09:56:28.556453904 +0000 UTC m=+0.129569292 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true)
Dec 05 09:56:28 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
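
The three lines above are one complete podman healthcheck cycle: systemd starts a transient `<container-id>.service` wrapping `/usr/bin/podman healthcheck run`, podman emits paired `health_status` and `exec_died` journal events carrying the container's labels and `config_data`, and the unit deactivates. A minimal sketch of pulling the container name and verdict out of such an event line (the regex is keyed only to the `name=` and `health_status=` fields visible above; nothing else is assumed):

    import re

    # Matches podman "container health_status" events like the lines above,
    # capturing the name= and health_status= label fields.
    EVENT = re.compile(
        r"container health_status \w+ \(.*?name=(?P<name>[^,]+)"
        r".*?health_status=(?P<status>[^,)]+)"
    )

    def parse_health(line):
        m = EVENT.search(line)
        return (m.group("name"), m.group("status")) if m else None

    sample = ("container health_status 94fe534d (image=..., "
              "name=ceilometer_agent_compute, health_status=healthy, ...)")
    print(parse_health(sample))  # ('ceilometer_agent_compute', 'healthy')
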
Dec 05 09:56:29 np0005546420.localdomain ceph-mon[288331]: Standby manager daemon np0005546418.garyvl started
Dec 05 09:56:30 np0005546420.localdomain ceph-mon[288331]: mgrmap e21: np0005546415.knqtle(active, starting, since 5s), standbys: np0005546416.kmqcnq, np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:56:33 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: Stopping User Manager for UID 1002...
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Activating special unit Exit the Session...
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Removed slice User Background Tasks Slice.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Stopped target Main User Target.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Stopped target Basic System.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Stopped target Paths.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Stopped target Sockets.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Stopped target Timers.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Closed D-Bus User Message Bus Socket.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Stopped Create User's Volatile Files and Directories.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Removed slice User Application Slice.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Reached target Shutdown.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Finished Exit the Session.
Dec 05 09:56:34 np0005546420.localdomain systemd[26341]: Reached target Exit the Session.
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: user@1002.service: Deactivated successfully.
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: Stopped User Manager for UID 1002.
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: user@1002.service: Consumed 13.258s CPU time, read 0B from disk, written 7.0K to disk.
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1002...
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: run-user-1002.mount: Deactivated successfully.
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: user-runtime-dir@1002.service: Deactivated successfully.
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1002.
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: Removed slice User Slice of UID 1002.
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: user-1002.slice: Consumed 4min 6.700s CPU time.
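
The teardown above is the normal end of a per-user systemd instance: once the last ceph-admin (UID 1002) session exits, systemd stops the user manager (`user@1002.service`), unmounts `/run/user/1002`, removes `user-1002.slice`, and logs the CPU and disk accounting the unit consumed; the mirror-image startup reappears at 09:56:56 below when cephadm opens a new SSH session. The same accounting can be read live from standard unit properties (slice name taken from the log; property names are standard systemd accounting fields):

    import subprocess

    # CPUUsageNSec / IOReadBytes / IOWriteBytes back the
    # "Consumed ... CPU time, read ... written ..." lines above.
    print(subprocess.run(
        ["systemctl", "show", "user-1002.slice",
         "-p", "CPUUsageNSec", "-p", "IOReadBytes", "-p", "IOWriteBytes"],
        capture_output=True, text=True, check=True,
    ).stdout)
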
Dec 05 09:56:34 np0005546420.localdomain podman[292447]: 2025-12-05 09:56:34.774017896 +0000 UTC m=+0.096757994 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:56:34 np0005546420.localdomain podman[292447]: 2025-12-05 09:56:34.809351822 +0000 UTC m=+0.132091940 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:56:34 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:56:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:56:38 np0005546420.localdomain podman[292467]: 2025-12-05 09:56:38.511093421 +0000 UTC m=+0.084145166 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:56:38 np0005546420.localdomain podman[292467]: 2025-12-05 09:56:38.519119508 +0000 UTC m=+0.092171263 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:56:38 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
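
The node_exporter `config_data` above enables the systemd collector but fences it with `--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service`, so only EDPM-managed, Open vSwitch, virtualization, and rsyslog units are exported while everything else is skipped. A quick check of which unit names that pattern admits (node_exporter anchors include/exclude patterns to the full unit name, which is assumed here):

    import re

    # The unit-include pattern from the node_exporter command line above,
    # anchored to the whole unit name.
    include = re.compile(r"^(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service$")

    for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"]:
        print(f"{unit}: {'exported' if include.match(unit) else 'skipped'}")
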
Dec 05 09:56:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:56:42 np0005546420.localdomain podman[292490]: 2025-12-05 09:56:42.509116424 +0000 UTC m=+0.085784786 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 09:56:42 np0005546420.localdomain podman[292490]: 2025-12-05 09:56:42.545650486 +0000 UTC m=+0.122318828 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 05 09:56:42 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:56:43 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:56:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:56:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:56:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 09:56:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:56:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18211 "" "Go-http-client/1.1"
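
The two `podman[240363]` lines above are the Podman system service answering libpod REST calls over its API socket, logged in access-log format: a full container list (`containers/json`, with the `last` parameter overriding `limit`, as the info line notes) followed by a stats snapshot, most likely issued by the podman_exporter scraper seen elsewhere in this log. The same endpoint can be queried directly over the unix socket (the root socket path `/run/podman/podman.sock` is an assumption; the URL is taken from the access log):

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket; path assumed to be Podman's root API socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
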
Dec 05 09:56:48 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:56:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:56:48 np0005546420.localdomain podman[292509]: 2025-12-05 09:56:48.52991323 +0000 UTC m=+0.104215434 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec 05 09:56:48 np0005546420.localdomain podman[292510]: 2025-12-05 09:56:48.57939058 +0000 UTC m=+0.150319569 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:56:48 np0005546420.localdomain podman[292509]: 2025-12-05 09:56:48.599913011 +0000 UTC m=+0.174215215 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350)
Dec 05 09:56:48 np0005546420.localdomain podman[292510]: 2025-12-05 09:56:48.612792326 +0000 UTC m=+0.183721295 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:56:48 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:56:48 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:56:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:56:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:56:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:56:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:56:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:56:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:56:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:56:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:56:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:56:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
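
The exporter errors above are expected on this node: openstack_network_exporter drives ovn-northd and ovsdb-server through their ovs-appctl control sockets, and neither daemon runs locally (ovn-northd lives on the controller nodes), so there are no socket files to call; the `dpif-netdev/pmd-*` probes likewise fail because the kernel datapath, not a userspace PMD datapath, is in use here. A sketch of the same discovery step (control sockets are conventionally created as `<daemon>.<pid>.ctl` in the daemon's run directory; the run directories below are assumptions):

    import glob

    # An empty glob reproduces the "no control socket files found" errors above.
    for daemon, rundir in [("ovn-northd", "/run/ovn"),
                           ("ovsdb-server", "/run/openvswitch")]:
        socks = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, socks or "no control socket files found")
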
Dec 05 09:56:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:56:52 np0005546420.localdomain podman[292550]: 2025-12-05 09:56:52.506654869 +0000 UTC m=+0.083314621 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 09:56:52 np0005546420.localdomain podman[292550]: 2025-12-05 09:56:52.576479195 +0000 UTC m=+0.153138907 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 09:56:52 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:56:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e87 e87: 6 total, 6 up, 6 in
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546416"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata", "id": "np0005546416"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546418"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546419"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546416.kmqcnq", "id": "np0005546416.kmqcnq"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr metadata", "who": "np0005546416.kmqcnq", "id": "np0005546416.kmqcnq"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546418.garyvl", "id": "np0005546418.garyvl"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr metadata", "who": "np0005546418.garyvl", "id": "np0005546418.garyvl"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: Activating manager daemon np0005546416.kmqcnq
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: Manager daemon np0005546415.knqtle is unresponsive, replacing it with standby daemon np0005546416.kmqcnq
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: osdmap e87: 6 total, 6 up, 6 in
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mgrmap e22: np0005546416.kmqcnq(active, starting, since 0.0520522s), standbys: np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea, np0005546418.garyvl
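
This is a manager failover as witnessed by a peon mon: the just-started active mgr np0005546415.knqtle is declared unresponsive, the monitors promote standby np0005546416.kmqcnq, and mgrmap e22 publishes the reduced standby set; the surrounding burst of `mon metadata` / `mds metadata` / `mgr metadata` / `osd metadata` commands is the incoming mgr rebuilding its inventory of the cluster. The resulting map can be read the same way the mgrmap lines print it (assumes a ceph CLI with an admin keyring on the host):

    import json, subprocess

    # "ceph mgr dump" returns the mgrmap; active_name and standbys correspond
    # to the "mgrmap eNN: ..." journal lines above.
    dump = json.loads(subprocess.run(
        ["ceph", "mgr", "dump", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(dump["active_name"], [s["name"] for s in dump["standbys"]])
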
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).mds e16 all = 0
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).mds e16 all = 0
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).mds e16 all = 0
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mds metadata"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).mds e16 all = 1
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} : dispatch
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} v 0)
Dec 05 09:56:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/mirror_snapshot_schedule"} v 0)
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/mirror_snapshot_schedule"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/trash_purge_schedule"} v 0)
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/trash_purge_schedule"} : dispatch
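
Each `handle_command mon_command({...})` line is the mon receiving a JSON-prefix command from the mgr over librados, mirrored by an audit-channel `dispatch` record; the `config-key del` and `config rm` entries are the newly active mgr pruning cephadm host-cache keys and rbd_support schedules tied to daemons that no longer exist. The same JSON-prefix interface is reachable from the rados Python binding (the conffile path and an available admin keyring are assumptions):

    import json
    import rados

    # mon_command takes the same JSON seen in the handle_command lines above
    # and returns (errno, out_buffer, out_status).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "mon metadata", "id": "np0005546420"}), b"")
    print(ret, out[:120], errs)
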
Dec 05 09:56:56 np0005546420.localdomain sshd[292575]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:56:56 np0005546420.localdomain sshd[292575]: Accepted publickey for ceph-admin from 192.168.122.104 port 44930 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 09:56:56 np0005546420.localdomain systemd-logind[762]: New session 64 of user ceph-admin.
Dec 05 09:56:56 np0005546420.localdomain systemd[1]: Created slice User Slice of UID 1002.
Dec 05 09:56:56 np0005546420.localdomain systemd[1]: Starting User Runtime Directory /run/user/1002...
Dec 05 09:56:56 np0005546420.localdomain systemd[1]: Finished User Runtime Directory /run/user/1002.
Dec 05 09:56:56 np0005546420.localdomain systemd[1]: Starting User Manager for UID 1002...
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Queued start job for default target Main User Target.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Created slice User Application Slice.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Reached target Paths.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Reached target Timers.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Starting D-Bus User Message Bus Socket...
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Starting Create User's Volatile Files and Directories...
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Listening on D-Bus User Message Bus Socket.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Reached target Sockets.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Finished Create User's Volatile Files and Directories.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Reached target Basic System.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Reached target Main User Target.
Dec 05 09:56:56 np0005546420.localdomain systemd[292579]: Startup finished in 137ms.
Dec 05 09:56:56 np0005546420.localdomain systemd[1]: Started User Manager for UID 1002.
Dec 05 09:56:56 np0005546420.localdomain systemd[1]: Started Session 64 of User ceph-admin.
Dec 05 09:56:56 np0005546420.localdomain sshd[292575]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 09:56:56 np0005546420.localdomain sudo[292595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:56:56 np0005546420.localdomain sudo[292595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:56 np0005546420.localdomain sudo[292595]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:56 np0005546420.localdomain sudo[292613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 09:56:56 np0005546420.localdomain sudo[292613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata", "id": "np0005546416"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr metadata", "who": "np0005546416.kmqcnq", "id": "np0005546416.kmqcnq"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr metadata", "who": "np0005546418.garyvl", "id": "np0005546418.garyvl"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mds metadata"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mon metadata"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: Manager daemon np0005546416.kmqcnq is now available
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: removing stray HostCache host record np0005546415.localdomain.devices.0
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"}]': finished
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"}]': finished
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/mirror_snapshot_schedule"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/mirror_snapshot_schedule"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/trash_purge_schedule"} : dispatch
Dec 05 09:56:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/trash_purge_schedule"} : dispatch
Dec 05 09:56:57 np0005546420.localdomain sudo[292613]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:56:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:56:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:56:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:56:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:56:57 np0005546420.localdomain sudo[292651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:56:57 np0005546420.localdomain sudo[292651]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:57 np0005546420.localdomain sudo[292651]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:56:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:56:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:56:57 np0005546420.localdomain sudo[292669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:56:57 np0005546420.localdomain sudo[292669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
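
The sudo trail above is cephadm's SSH orchestration at work: the active mgr logs in as ceph-admin, resolves python3, then runs the host-local copy of the cephadm binary kept under `/var/lib/ceph/<fsid>/`, first with `check-host` and then with `ls` against the rhceph-7 image to inventory the daemons on this node. The suffix after `cephadm.` appears to be a content digest of the file itself (treated here as an assumption about cephadm's naming scheme), which can be checked directly:

    import hashlib

    # If the naming assumption holds, the filename suffix in the COMMAND=
    # lines above equals the SHA-256 of the file's contents.
    path = ("/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/"
            "cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    print(digest == path.rsplit(".", 1)[1])
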
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: mgrmap e23: np0005546416.kmqcnq(active, since 1.14785s), standbys: np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: from='client.26765 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 05 09:56:58 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 05 09:56:58 np0005546420.localdomain podman[292760]: 2025-12-05 09:56:58.371315577 +0000 UTC m=+0.105202484 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.expose-services=, version=7, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218)
Dec 05 09:56:58 np0005546420.localdomain podman[292760]: 2025-12-05 09:56:58.477904912 +0000 UTC m=+0.211791869 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, release=1763362218, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container)
Dec 05 09:56:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:56:58 np0005546420.localdomain podman[292812]: 2025-12-05 09:56:58.742245274 +0000 UTC m=+0.091863644 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:56:58 np0005546420.localdomain podman[292812]: 2025-12-05 09:56:58.759563076 +0000 UTC m=+0.109181436 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:56:58 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
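
The four lines above are one full podman healthcheck cycle: a transient systemd unit fires "/usr/bin/podman healthcheck run <id>", podman execs the container's configured test ("/openstack/healthcheck compute" here), records health_status=healthy plus an exec_died event, and the unit deactivates. A sketch of driving the same check by hand; the container name is reused from the log, and reporting any non-zero exit as unhealthy reflects podman's exit-code convention for this subcommand.

    import subprocess

    def check_container(name: str) -> bool:
        """Run the container's own healthcheck via podman.

        `podman healthcheck run` exits 0 when the configured test passes
        and non-zero otherwise.
        """
        result = subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    # Container name as it appears in the journal above.
    print(check_container("ceilometer_agent_compute"))
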
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:56:57] ENGINE Bus STARTING
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:56:58] ENGINE Serving on http://172.18.0.104:8765
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:56:59 np0005546420.localdomain sudo[292669]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:56:59 np0005546420.localdomain sudo[292900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:56:59 np0005546420.localdomain sudo[292900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:59 np0005546420.localdomain sudo[292900]: pam_unix(sudo:session): session closed for user root
Dec 05 09:56:59 np0005546420.localdomain sudo[292918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:56:59 np0005546420.localdomain sudo[292918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} v 0)
Dec 05 09:56:59 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 05 09:56:59 np0005546420.localdomain sudo[292918]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:56:58] ENGINE Serving on https://172.18.0.104:7150
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:56:58] ENGINE Bus STARTED
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:56:58] ENGINE Client ('172.18.0.104', 47164) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
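
The ENGINE lines are CherryPy inside ceph-mgr bringing up HTTP on 8765 and HTTPS on 7150 (the cephadm module's endpoints, to judge by the ports), and the "Client ... lost" entry is the server-side view of a peer that opened a TCP connection to the TLS port and closed it before finishing the handshake. One common, benign cause is a plain TCP liveness probe; the sketch below reproduces that pattern against the address from the log (assuming something is listening there), which would make the server log exactly this kind of handshake EOF.

    import socket

    def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
        """Open a TCP connection and close it without speaking TLS.

        Against a TLS listener this looks, server-side, like a peer that
        dropped the connection during the handshake (EOF), matching the
        ENGINE log line above.
        """
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True  # port is accepting connections
        except OSError:
            return False

    print(tcp_probe("172.18.0.104", 7150))
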
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mgrmap e24: np0005546416.kmqcnq(active, since 3s), standbys: np0005546419.zhsnqq, np0005546420.aoeylc, np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:57:00 np0005546420.localdomain sudo[292967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:00 np0005546420.localdomain sudo[292967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:00 np0005546420.localdomain sudo[292967]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:00 np0005546420.localdomain sudo[292985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 09:57:00 np0005546420.localdomain sudo[292985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:00 np0005546420.localdomain sudo[292985]: pam_unix(sudo:session): session closed for user root
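
The "which python3" / "python3 .../cephadm.<sha256> ... list-networks" pair above is cephadm's per-host inventory step: the mgr copies the cephadm script to the host under a content-hash name and runs it with sudo, and list-networks emits JSON describing the host's subnets and interfaces. A hedged sketch of invoking it the same way; the path, digest and timeout are copied from the log (the --image argument is omitted here), and the exact output schema is an assumption, since cephadm's JSON layout is an internal detail.

    import json
    import subprocess

    # Path exactly as logged above; the digest suffix is how cephadm
    # versions the copied script by content hash.
    CEPHADM = ("/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/"
               "cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")

    result = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "list-networks"],
        capture_output=True, text=True, check=True,
    )
    networks = json.loads(result.stdout)  # assumed: JSON document on stdout
    for subnet, ifaces in networks.items():
        print(subnet, ifaces)
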
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:57:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain sudo[293021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:57:01 np0005546420.localdomain sudo[293021]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:01 np0005546420.localdomain sudo[293021]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:57:01 np0005546420.localdomain sudo[293039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:57:01 np0005546420.localdomain sudo[293039]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:01 np0005546420.localdomain sudo[293039]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 np0005546420.localdomain sudo[293057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:01 np0005546420.localdomain sudo[293057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:01 np0005546420.localdomain sudo[293057]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 np0005546420.localdomain sudo[293075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:01 np0005546420.localdomain sudo[293075]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:01 np0005546420.localdomain sudo[293075]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 np0005546420.localdomain sudo[293093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:01 np0005546420.localdomain sudo[293093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:01 np0005546420.localdomain sudo[293093]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 np0005546420.localdomain sudo[293127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:01 np0005546420.localdomain sudo[293127]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:01 np0005546420.localdomain sudo[293127]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 np0005546420.localdomain sudo[293145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:01 np0005546420.localdomain sudo[293145]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:01 np0005546420.localdomain sudo[293145]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 np0005546420.localdomain sudo[293163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:57:01 np0005546420.localdomain sudo[293163]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:01 np0005546420.localdomain sudo[293163]: pam_unix(sudo:session): session closed for user root
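
The sudo sequence above (mkdir a staging tree, touch ceph.conf.new, chown it to the connecting user for writing, chown back to 0:0, chmod 644, then mv into /etc/ceph/ceph.conf) is the classic staged-replace pattern: the new content is written in full under a temporary name and only then renamed into place, so readers see either the old file or the complete new one, never a partial write (the rename is atomic when both paths are on one filesystem). A minimal sketch of the same idea, assuming the staging directory shares a filesystem with the destination; the commented chown step needs root, as in the log.

    import os
    import tempfile

    def atomic_write(path: str, data: bytes, mode: int = 0o644) -> None:
        """Stage content beside `path`, then rename it into place.

        os.replace() is atomic when source and destination live on the
        same filesystem, which is why the file is staged under a
        temporary ".new" name before the final mv.
        """
        directory = os.path.dirname(path) or "."
        fd, tmp = tempfile.mkstemp(dir=directory, suffix=".new")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
            os.chmod(tmp, mode)
            # os.chown(tmp, 0, 0)  # the logged "chown -R 0:0" step; needs root
            os.replace(tmp, path)
        except BaseException:
            os.unlink(tmp)
            raise

    # Illustrative only; cephadm writes the real ceph.conf contents here.
    atomic_write("/tmp/ceph.conf.demo", b"[global]\n")

Note how the same sequence repeats later for ceph.client.admin.keyring, but with chmod 600 instead of 644, since keyrings carry secrets while ceph.conf is world-readable.
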
Dec 05 09:57:01 np0005546420.localdomain sudo[293181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:57:01 np0005546420.localdomain sudo[293181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:01 np0005546420.localdomain sudo[293181]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:01 np0005546420.localdomain sudo[293199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:57:01 np0005546420.localdomain sudo[293199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:01 np0005546420.localdomain sudo[293199]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:02 np0005546420.localdomain sudo[293217]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293217]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:02 np0005546420.localdomain sudo[293235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293235]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: from='client.26790 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Saving service mon spec with placement label:mon
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
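
The three Adjusting/Unable pairs above are the cephadm memory autotuner at work: it computed 877246668 bytes (~836.6 MiB) per OSD for these hosts, but osd_memory_target has a hard minimum of 939524096 bytes (exactly 896 MiB), so the monitor rejected the set. A quick check of the numbers taken verbatim from the log:

    # Values copied from the log lines above.
    proposed = 877_246_668   # what cephadm tried to set
    minimum  = 939_524_096   # rejection threshold from the error message

    MiB = 1024 ** 2
    print(f"proposed = {proposed / MiB:.1f} MiB")   # ~836.6 MiB, matching the log
    print(f"minimum  = {minimum / MiB:.0f} MiB")    # exactly 896 MiB
    print("short by", minimum - proposed, "bytes")  # 62277428

As general Ceph guidance (not something this log prescribes), the usual remedies are more memory on the OSD hosts or disabling osd_memory_target_autotune for them.
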
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:02 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:02 np0005546420.localdomain sudo[293253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:02 np0005546420.localdomain sudo[293253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293253]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:02 np0005546420.localdomain sudo[293287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293287]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:02 np0005546420.localdomain sudo[293305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293305]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:02 np0005546420.localdomain sudo[293323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293323]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:57:02 np0005546420.localdomain sudo[293341]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293341]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:57:02 np0005546420.localdomain sudo[293359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293359]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:57:02 np0005546420.localdomain sudo[293377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293377]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:02 np0005546420.localdomain sudo[293395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293395]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:57:02 np0005546420.localdomain sudo[293413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293413]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:02 np0005546420.localdomain sudo[293447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:57:02 np0005546420.localdomain sudo[293447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:02 np0005546420.localdomain sudo[293447]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:57:03 np0005546420.localdomain sudo[293465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:57:03 np0005546420.localdomain sudo[293465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:03 np0005546420.localdomain sudo[293465]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: from='client.34223 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546419", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:03 np0005546420.localdomain sudo[293483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:03 np0005546420.localdomain sudo[293483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:03 np0005546420.localdomain sudo[293483]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain sudo[293501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:57:03 np0005546420.localdomain sudo[293501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:03 np0005546420.localdomain sudo[293501]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain sudo[293519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:57:03 np0005546420.localdomain sudo[293519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:03 np0005546420.localdomain sudo[293519]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:57:03 np0005546420.localdomain sudo[293537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:57:03 np0005546420.localdomain sudo[293537]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:03 np0005546420.localdomain sudo[293537]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 05 09:57:03 np0005546420.localdomain sudo[293555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:03 np0005546420.localdomain sudo[293555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:03 np0005546420.localdomain sudo[293555]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain sudo[293573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:57:03 np0005546420.localdomain sudo[293573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:03 np0005546420.localdomain sudo[293573]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain sudo[293607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:57:03 np0005546420.localdomain sudo[293607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:03 np0005546420.localdomain sudo[293607]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:03 np0005546420.localdomain sudo[293625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:57:03 np0005546420.localdomain sudo[293625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:03 np0005546420.localdomain sudo[293625]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain sudo[293643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:03 np0005546420.localdomain sudo[293643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:03 np0005546420.localdomain sudo[293643]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:57:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:57:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:57:04.116 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:57:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:57:04.118 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:57:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:57:04.118 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.200:0/3434603813' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.32:0/2465703393' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.32:0/2465703393' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:57:04 np0005546420.localdomain sudo[293661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:57:04 np0005546420.localdomain sudo[293661]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:04 np0005546420.localdomain sudo[293661]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:04 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:04.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:57:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:04.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
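
The nova-compute pair above shows a periodic task short-circuiting: _reclaim_queued_deletes is scheduled by the oslo.service periodic-task loop regardless of configuration, but its body exits immediately because CONF.reclaim_instance_interval <= 0, i.e. deferred instance deletion is disabled on this host. A simplified plain-Python mimic of that guard pattern; this is not nova's actual code, and the names are reused from the log purely for illustration.

    class FakeConf:
        reclaim_instance_interval = 0  # disabled, as on this host

    CONF = FakeConf()

    def reclaim_queued_deletes():
        # Mirrors the log: the task runs on schedule, but bails out
        # when the feature is configured off.
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: find soft-deleted instances older than the interval
        # and purge them

    reclaim_queued_deletes()
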
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546416 (monmap changed)...
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546416 on np0005546416.localdomain
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 05 09:57:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:05 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:05 np0005546420.localdomain podman[293679]: 2025-12-05 09:57:05.517033036 +0000 UTC m=+0.090918224 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 09:57:05 np0005546420.localdomain podman[293679]: 2025-12-05 09:57:05.528598532 +0000 UTC m=+0.102483750 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:57:05 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:57:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:05.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:57:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:05.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:57:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:05.874 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:57:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:06.035 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
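[editor's note] The nova_compute lines above are oslo.service periodic tasks firing in sequence: _heal_instance_info_cache rebuilds the network-info cache for local instances and, finding none, returns early. A minimal sketch of that pattern (class and task names here are illustrative; the real implementations live in nova/compute/manager.py):

    # Sketch of the oslo.service periodic-task pattern seen above.
    # Assumes oslo.service and oslo.config are installed.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)      # run roughly once a minute
        def _heal_instance_info_cache(self, context):
            instances = []                            # nothing to heal on this host
            if not instances:
                return                                # matches "Didn't find any instances..."

    Manager().run_periodic_tasks(context=None)        # the loop the log calls run_periodic_tasks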
Dec 05 09:57:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:06.037 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546418 (monmap changed)...
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.106:0/662041042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:06 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:06.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:57:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:06.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:57:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:06.888 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:57:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:06.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:57:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:06.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:57:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:06.889 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:57:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:06.890 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4227518814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:07.369 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
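[editor's note] The resource tracker shells out to the exact command logged above (it returned 0 in 0.479s) through oslo.concurrency's processutils. A minimal sketch of the same call, assuming a reachable cluster, /etc/ceph/ceph.conf, and a client.openstack keyring:

    # Sketch: run "ceph df" the way processutils does and read the totals.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])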
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:07.573 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:57:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:07.575 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12329MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:57:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:07.575 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:57:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:07.575 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:57:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:07.636 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:57:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:07.636 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:57:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:07.657 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.106:0/4294312734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.107:0/4227518814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:07 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0)
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/5138112' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 05 09:57:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:08.129 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:57:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:08.136 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:57:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:08.150 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:57:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:08.153 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:57:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:08.153 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
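[editor's note] The "Inventory has not changed" line above carries the inventory nova reports to placement. Schedulable capacity per resource class follows (total - reserved) * allocation_ratio; a quick check against the logged values:

    # Sketch: recompute schedulable capacity from the inventory dict in the log.
    inventory = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 0,   'allocation_ratio': 1.0},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)   # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0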
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.200:0/5138112' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.107:0/1686838431' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:08 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:09.150 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:57:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:09.151 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:57:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:09.175 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:57:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:09.175 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain sudo[293743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:57:09 np0005546420.localdomain sudo[293743]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:09 np0005546420.localdomain sudo[293743]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:09 np0005546420.localdomain sudo[293767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:09 np0005546420.localdomain sudo[293767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:09 np0005546420.localdomain podman[293760]: 2025-12-05 09:57:09.517844366 +0000 UTC m=+0.092925777 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 09:57:09 np0005546420.localdomain podman[293760]: 2025-12-05 09:57:09.527795951 +0000 UTC m=+0.102877352 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:57:09 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
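[editor's note] Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair above is a transient systemd unit firing a container healthcheck; the result surfaces as health_status=healthy on the podman event. A hedged way to read the same state out of band (container name and key layout vary by podman version):

    # Sketch: query a container's health state via "podman inspect".
    import json, subprocess

    CONTAINER = 'node_exporter'   # taken from the events above
    raw = subprocess.check_output(['podman', 'inspect', CONTAINER])
    state = json.loads(raw)[0].get('State', {})
    health = state.get('Health') or state.get('Healthcheck') or {}
    print(health.get('Status', 'unknown'))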
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e88 e88: 6 total, 6 up, 6 in
Dec 05 09:57:09 np0005546420.localdomain ceph-mds[283770]: --2- [v2:172.18.0.107:6808/530338393,v1:172.18.0.107:6809/530338393] >> 172.18.0.104:0/2660358873 conn(0x5594f386e800 0x5594f3a80000 secure :-1 s=SESSION_ACCEPTING pgs=4 cs=0 l=0 rev1=1 crypto rx=0x5594f28c7710 tx=0x5594f3777140 comp rx=0 tx=0).handle_reconnect no existing connection exists, resetting client
Dec 05 09:57:09 np0005546420.localdomain ceph-mds[283770]: --2- [v2:172.18.0.107:6808/530338393,v1:172.18.0.107:6809/530338393] >> 172.18.0.104:0/2999160258 conn(0x5594f3756800 0x5594f3a80580 secure :-1 s=SESSION_ACCEPTING pgs=5 cs=0 l=0 rev1=1 crypto rx=0x5594f376d890 tx=0x5594f375e000 comp rx=0 tx=0).handle_reconnect no existing connection exists, resetting client
Dec 05 09:57:09 np0005546420.localdomain sshd[292575]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:57:09 np0005546420.localdomain systemd-logind[762]: Session 64 logged out. Waiting for processes to exit.
Dec 05 09:57:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:57:09.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:57:09 np0005546420.localdomain podman[293819]: 
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546419 (monmap changed)...
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' 
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.200:0/1461736714' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: Activating manager daemon np0005546419.zhsnqq
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: osdmap e88: 6 total, 6 up, 6 in
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.200:0/1461736714' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: mgrmap e25: np0005546419.zhsnqq(active, starting, since 0.0436545s), standbys: np0005546420.aoeylc, np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546416"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546418.garyvl", "id": "np0005546418.garyvl"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: Manager daemon np0005546419.zhsnqq is now available
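[editor's note] The "mgr fail" dispatched by client.admin above forces the active mgr out; the monitors promote a standby (np0005546419.zhsnqq goes from "active, starting" in mgrmap e25 to available here, then replays the metadata queries seen above). A hedged sketch of triggering and observing the same failover (destructive on a live cluster; assumes an admin keyring):

    # Sketch: trigger a mgr failover and read back the new active mgr.
    import json, subprocess, time

    subprocess.run(['ceph', 'mgr', 'fail'], check=True)   # same command as the audit line
    time.sleep(2)                                         # give a standby time to activate
    stat = json.loads(subprocess.check_output(['ceph', 'mgr', 'stat', '-f', 'json']))
    print(stat)   # e.g. active mgr name and availability, as in mgrmap e25/e26 above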
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/mirror_snapshot_schedule"} : dispatch
Dec 05 09:57:09 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/mirror_snapshot_schedule"} : dispatch
Dec 05 09:57:10 np0005546420.localdomain podman[293819]: 2025-12-05 09:57:10.000931039 +0000 UTC m=+0.080648559 container create ae0203d97e6cf5298255c7bbcf9da642e1da1bde8e20ac8ff13516a11af40053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_haslett, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_BRANCH=main, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public)
Dec 05 09:57:10 np0005546420.localdomain systemd[1]: Started libpod-conmon-ae0203d97e6cf5298255c7bbcf9da642e1da1bde8e20ac8ff13516a11af40053.scope.
Dec 05 09:57:10 np0005546420.localdomain podman[293819]: 2025-12-05 09:57:09.961217559 +0000 UTC m=+0.040935079 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:10 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:10 np0005546420.localdomain podman[293819]: 2025-12-05 09:57:10.088368366 +0000 UTC m=+0.168085856 container init ae0203d97e6cf5298255c7bbcf9da642e1da1bde8e20ac8ff13516a11af40053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_haslett, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, ceph=True, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_CLEAN=True, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main)
Dec 05 09:57:10 np0005546420.localdomain sshd[293837]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:57:10 np0005546420.localdomain podman[293819]: 2025-12-05 09:57:10.099705374 +0000 UTC m=+0.179422874 container start ae0203d97e6cf5298255c7bbcf9da642e1da1bde8e20ac8ff13516a11af40053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_haslett, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, distribution-scope=public, release=1763362218, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main)
Dec 05 09:57:10 np0005546420.localdomain podman[293819]: 2025-12-05 09:57:10.100059265 +0000 UTC m=+0.179776765 container attach ae0203d97e6cf5298255c7bbcf9da642e1da1bde8e20ac8ff13516a11af40053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_haslett, vcs-type=git, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=)
Dec 05 09:57:10 np0005546420.localdomain systemd[1]: libpod-ae0203d97e6cf5298255c7bbcf9da642e1da1bde8e20ac8ff13516a11af40053.scope: Deactivated successfully.
Dec 05 09:57:10 np0005546420.localdomain naughty_haslett[293834]: 167 167
Dec 05 09:57:10 np0005546420.localdomain podman[293819]: 2025-12-05 09:57:10.106321278 +0000 UTC m=+0.186038828 container died ae0203d97e6cf5298255c7bbcf9da642e1da1bde8e20ac8ff13516a11af40053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_haslett, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True)
Dec 05 09:57:10 np0005546420.localdomain podman[293841]: 2025-12-05 09:57:10.20858773 +0000 UTC m=+0.091419300 container remove ae0203d97e6cf5298255c7bbcf9da642e1da1bde8e20ac8ff13516a11af40053 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_haslett, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., version=7, io.buildah.version=1.41.4, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:57:10 np0005546420.localdomain systemd[1]: libpod-conmon-ae0203d97e6cf5298255c7bbcf9da642e1da1bde8e20ac8ff13516a11af40053.scope: Deactivated successfully.
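[editor's note] The create/init/start/attach/died/remove burst for naughty_haslett is cephadm probing the rhceph image with a short-lived container; its only output above is "167 167" (the ceph uid/gid). Roughly equivalent to a throwaway run; the exact probe command is not in the log, so the stat call below is an assumption:

    # Sketch: a one-shot container like the naughty_haslett probe above.
    # The "stat" command and path are hypothetical; the log only shows "167 167".
    import subprocess

    out = subprocess.check_output([
        'podman', 'run', '--rm',
        'registry.redhat.io/rhceph/rhceph-7-rhel9:latest',
        'stat', '-c', '%u %g', '/var/lib/ceph',
    ])
    print(out.decode().strip())   # expected "167 167" (ceph uid/gid)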
Dec 05 09:57:10 np0005546420.localdomain sshd[293837]: Accepted publickey for ceph-admin from 192.168.122.106 port 39270 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 09:57:10 np0005546420.localdomain systemd-logind[762]: New session 66 of user ceph-admin.
Dec 05 09:57:10 np0005546420.localdomain systemd[1]: Started Session 66 of User ceph-admin.
Dec 05 09:57:10 np0005546420.localdomain sshd[293837]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 09:57:10 np0005546420.localdomain sudo[293767]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:10 np0005546420.localdomain systemd[1]: session-64.scope: Deactivated successfully.
Dec 05 09:57:10 np0005546420.localdomain systemd[1]: session-64.scope: Consumed 7.789s CPU time.
Dec 05 09:57:10 np0005546420.localdomain systemd-logind[762]: Removed session 64.
Dec 05 09:57:10 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e9 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 09:57:10 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1375427300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:57:10 np0005546420.localdomain sudo[293860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:10 np0005546420.localdomain sudo[293860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:10 np0005546420.localdomain sudo[293860]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:10 np0005546420.localdomain sudo[293878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:57:10 np0005546420.localdomain sudo[293878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5714ce5863ef2178ca8a28676f1a0fdf679a643472f4919882a9abafb74675b1-merged.mount: Deactivated successfully.
Dec 05 09:57:11 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/trash_purge_schedule"} : dispatch
Dec 05 09:57:11 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/trash_purge_schedule"} : dispatch
Dec 05 09:57:11 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.108:0/1375427300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:57:11 np0005546420.localdomain podman[293968]: 2025-12-05 09:57:11.565820792 +0000 UTC m=+0.089804670 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, distribution-scope=public, RELEASE=main, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 09:57:11 np0005546420.localdomain podman[293968]: 2025-12-05 09:57:11.671447247 +0000 UTC m=+0.195431175 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218)
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:57:10] ENGINE Bus STARTING
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:57:10] ENGINE Serving on http://172.18.0.106:8765
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: mgrmap e26: np0005546419.zhsnqq(active, since 1.34624s), standbys: np0005546420.aoeylc, np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:57:11] ENGINE Serving on https://172.18.0.106:7150
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:57:11] ENGINE Bus STARTED
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: [05/Dec/2025:09:57:11] ENGINE Client ('172.18.0.106', 58674) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.108:0/446772086' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:12 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:12 np0005546420.localdomain sudo[293878]: pam_unix(sudo:session): session closed for user root
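[editor's note] The sudo invocation that just closed ran `cephadm ... ls`, which enumerates the cephadm-managed daemons on this host as a JSON array, one object per daemon. A hedged reader, reusing the copied cephadm binary path from the sudo line above:

    # Sketch: list local cephadm-managed daemons; cephadm "ls" prints a JSON array.
    import json, subprocess

    CEPHADM = ('/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/'
               'cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3')
    daemons = json.loads(subprocess.check_output(['sudo', 'python3', CEPHADM, 'ls']))
    for d in daemons:
        print(d.get('name'), d.get('state'))   # e.g. "mon.np0005546420 running"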
Dec 05 09:57:12 np0005546420.localdomain sudo[294085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:57:12 np0005546420.localdomain sudo[294085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:12 np0005546420.localdomain sudo[294085]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:12 np0005546420.localdomain sudo[294109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:57:12 np0005546420.localdomain sudo[294109]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:12 np0005546420.localdomain podman[294102]: 2025-12-05 09:57:12.822901827 +0000 UTC m=+0.095606569 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 09:57:12 np0005546420.localdomain podman[294102]: 2025-12-05 09:57:12.839513117 +0000 UTC m=+0.112217859 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 05 09:57:12 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
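
The systemd/podman lines above form one complete health-check cycle: a transient systemd unit runs `/usr/bin/podman healthcheck run <container-id>`, podman emits a `health_status` event (here `healthy`) and then `exec_died` for the probe process, and the unit deactivates. A minimal sketch for pairing those two events per container when scanning the journal as plain text; the regex is tailored to the event format shown in these lines:

    import re
    import sys
    from collections import defaultdict

    # Matches podman journal events of the form:
    #   ... container health_status <64-hex-id> (..., health_status=healthy, ...)
    #   ... container exec_died <64-hex-id> (...)
    EVENT = re.compile(
        r"container (health_status|exec_died) ([0-9a-f]{64})"
        r"(?:.*?health_status=(\w+))?")

    events = defaultdict(list)
    for line in sys.stdin:
        m = EVENT.search(line)
        if m:
            kind, cid, status = m.groups()
            events[cid].append((kind, status))

    for cid, evs in events.items():
        print(cid[:12], evs)

Piping `journalctl -o cat` output through it would print, for the multipathd container above, one health_status/exec_died pair per cycle.
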
Dec 05 09:57:13 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:57:13 np0005546420.localdomain ceph-mon[288331]: pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:13 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:13 np0005546420.localdomain sudo[294109]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:13 np0005546420.localdomain sudo[294172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:13 np0005546420.localdomain sudo[294172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:13 np0005546420.localdomain sudo[294172]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:13 np0005546420.localdomain sudo[294190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 09:57:13 np0005546420.localdomain sudo[294190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:14 np0005546420.localdomain sudo[294190]: pam_unix(sudo:session): session closed for user root
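
The sudo entries between 09:57:12 and 09:57:14 are the mgr's per-host inventory loop: cephadm is copied to /var/lib/ceph/<fsid>/ under a content-addressed name and executed as root with `gather-facts` and `list-networks`, both of which print JSON to stdout. A sketch of running and parsing one probe locally, using the exact script path from the log; the JSON field names are assumptions:

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/"
               "cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")

    # gather-facts prints one JSON document describing the host; the
    # --timeout value mirrors the invocation recorded above.
    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"],
        check=True, capture_output=True, text=True).stdout
    facts = json.loads(out)
    print(facts.get("hostname"), facts.get("kernel"))  # field names assumed
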
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: mgrmap e27: np0005546419.zhsnqq(active, since 3s), standbys: np0005546420.aoeylc, np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:14 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
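
The burst of `config rm` dispatches above is the mgr clearing osd_memory_target overrides at both scopes it manages, per host (who=osd/host:<hostname>) and per OSD (who=osd.<n>), before re-applying autotuned values; each audit line is one mon command. The same command can be issued programmatically through the librados Python binding, sketched here under the assumption that an admin keyring is available:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # admin keyring assumed
    cluster.connect()

    # The exact mon command seen in the audit log.
    cmd = {"prefix": "config rm",
           "who": "osd/host:np0005546418",
           "name": "osd_memory_target"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()
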
Dec 05 09:57:14 np0005546420.localdomain sudo[294226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:57:14 np0005546420.localdomain sudo[294226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:14 np0005546420.localdomain sudo[294226]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:14 np0005546420.localdomain sudo[294244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:57:14 np0005546420.localdomain sudo[294244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:14 np0005546420.localdomain sudo[294244]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:14 np0005546420.localdomain sudo[294262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:14 np0005546420.localdomain sudo[294262]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:14 np0005546420.localdomain sudo[294262]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:14 np0005546420.localdomain sudo[294280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:14 np0005546420.localdomain sudo[294280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:14 np0005546420.localdomain sudo[294280]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:14 np0005546420.localdomain sudo[294298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:14 np0005546420.localdomain sudo[294298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:14 np0005546420.localdomain sudo[294298]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:14 np0005546420.localdomain sudo[294332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:14 np0005546420.localdomain sudo[294332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:14 np0005546420.localdomain sudo[294332]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:14 np0005546420.localdomain sudo[294350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:14 np0005546420.localdomain sudo[294350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:14 np0005546420.localdomain sudo[294350]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:14 np0005546420.localdomain sudo[294368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:57:14 np0005546420.localdomain sudo[294368]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:14 np0005546420.localdomain sudo[294368]: pam_unix(sudo:session): session closed for user root
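
The mkdir/touch/chown/chmod/mv sequence ending here is cephadm's write-then-rename pattern for distributing files: stage the content as ceph.conf.new under a private /tmp/cephadm-<fsid> tree, fix ownership (0:0) and mode (644), then mv it over the live /etc/ceph/ceph.conf so readers never observe a half-written file. A minimal sketch of the same idea; note that os.replace is only atomic when the staged file and the destination share a filesystem, which the log's /tmp staging does not guarantee:

    import os

    def install_file(data: bytes, dest: str, mode: int = 0o644) -> None:
        """Write-then-rename: stage dest + '.new', then swap it into place."""
        staged = dest + ".new"
        with open(staged, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())       # content durable before the rename
        os.chown(staged, 0, 0)         # root:root, matching the log
        os.chmod(staged, mode)         # 644 for ceph.conf, 600 for keyrings
        os.replace(staged, dest)       # atomic on the same filesystem

The keyring sequence at 09:57:15-09:57:16 below is the same pattern with mode 600.
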
Dec 05 09:57:15 np0005546420.localdomain sudo[294386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:57:15 np0005546420.localdomain sudo[294386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294386]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain sudo[294404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:57:15 np0005546420.localdomain sudo[294404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294404]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain sudo[294422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:15 np0005546420.localdomain sudo[294422]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294422]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain ceph-mon[288331]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 05 09:57:15 np0005546420.localdomain ceph-mon[288331]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:57:15 np0005546420.localdomain ceph-mon[288331]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 05 09:57:15 np0005546420.localdomain ceph-mon[288331]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
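
The two "Unable to set osd_memory_target" warnings are an arithmetic rejection, not a transport failure: the autotuner computed 877246668 bytes (the "836.6M" in the adjacent lines) for hosts with a small memory budget, but osd_memory_target enforces a hard minimum of 939524096 bytes (896 MiB), so the mon refuses the value and the previous setting stays. Checking the numbers:

    MiB = 1024 * 1024
    computed = 877_246_668   # value the autotuner tried to set
    minimum  = 939_524_096   # osd_memory_target hard floor, per the error

    print(f"computed = {computed / MiB:.1f} MiB")  # 836.6 MiB, as logged
    print(f"minimum  = {minimum / MiB:.1f} MiB")   # 896.0 MiB
    print("rejected:", computed < minimum)         # True

The same rejection repeats for np0005546420 at 09:57:16; the usual remedies are more memory headroom on the hosts or disabling osd_memory_target_autotune for them.
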
Dec 05 09:57:15 np0005546420.localdomain ceph-mon[288331]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:15 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:15 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 09:57:15 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:15 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:57:15 np0005546420.localdomain ceph-mon[288331]: Standby manager daemon np0005546416.kmqcnq started
Dec 05 09:57:15 np0005546420.localdomain sudo[294440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:15 np0005546420.localdomain sudo[294440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294440]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain sudo[294458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:15 np0005546420.localdomain sudo[294458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294458]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain sudo[294492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:15 np0005546420.localdomain sudo[294492]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294492]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain sudo[294510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:15 np0005546420.localdomain sudo[294510]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294510]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain sudo[294528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:15 np0005546420.localdomain sudo[294528]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294528]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain sudo[294546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:57:15 np0005546420.localdomain sudo[294546]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294546]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain sudo[294564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:57:15 np0005546420.localdomain sudo[294564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294564]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain sudo[294582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:57:15 np0005546420.localdomain sudo[294582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294582]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:15 np0005546420.localdomain sudo[294600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:15 np0005546420.localdomain sudo[294600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:15 np0005546420.localdomain sudo[294600]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:57:16 np0005546420.localdomain sudo[294618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294618]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:57:16 np0005546420.localdomain sudo[294652]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294652]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:57:16 np0005546420.localdomain sudo[294670]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294670]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: mgrmap e28: np0005546419.zhsnqq(active, since 6s), standbys: np0005546420.aoeylc, np0005546421.sukfea, np0005546416.kmqcnq, np0005546418.garyvl
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546416.kmqcnq", "id": "np0005546416.kmqcnq"} : dispatch
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:16 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:16 np0005546420.localdomain sudo[294688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:16 np0005546420.localdomain sudo[294688]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294688]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:57:16 np0005546420.localdomain sudo[294706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294706]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:57:16 np0005546420.localdomain sudo[294724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294724]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:57:16 np0005546420.localdomain sudo[294742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294742]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:16 np0005546420.localdomain sudo[294760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294760]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:57:16 np0005546420.localdomain sudo[294778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294778]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:57:16 np0005546420.localdomain sudo[294812]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294812]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:57:16 np0005546420.localdomain sudo[294830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294830]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:16 np0005546420.localdomain sudo[294848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:16 np0005546420.localdomain sudo[294848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:16 np0005546420.localdomain sudo[294848]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:57:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:57:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:57:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 09:57:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:57:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18212 "" "Go-http-client/1.1"
Dec 05 09:57:17 np0005546420.localdomain sudo[294866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:57:17 np0005546420.localdomain sudo[294866]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:17 np0005546420.localdomain sudo[294866]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:17 np0005546420.localdomain sudo[294884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:17 np0005546420.localdomain sudo[294884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:17 np0005546420.localdomain sudo[294884]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:17 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:17 np0005546420.localdomain sudo[294902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:17 np0005546420.localdomain sudo[294902]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:18 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:57:18 np0005546420.localdomain podman[294937]: 2025-12-05 09:57:18.192456732 +0000 UTC m=+0.086910141 container create 5710788831b51044dabe5f2e8fd85ddc5edd35c3d5a6eef60a7ecf211d28348c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_allen, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, RELEASE=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4)
Dec 05 09:57:18 np0005546420.localdomain systemd[1]: Started libpod-conmon-5710788831b51044dabe5f2e8fd85ddc5edd35c3d5a6eef60a7ecf211d28348c.scope.
Dec 05 09:57:18 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:18 np0005546420.localdomain podman[294937]: 2025-12-05 09:57:18.156189198 +0000 UTC m=+0.050642647 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:18 np0005546420.localdomain podman[294937]: 2025-12-05 09:57:18.271053827 +0000 UTC m=+0.165507226 container init 5710788831b51044dabe5f2e8fd85ddc5edd35c3d5a6eef60a7ecf211d28348c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_allen, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, name=rhceph, RELEASE=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:57:18 np0005546420.localdomain podman[294937]: 2025-12-05 09:57:18.281054755 +0000 UTC m=+0.175508154 container start 5710788831b51044dabe5f2e8fd85ddc5edd35c3d5a6eef60a7ecf211d28348c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_allen, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main)
Dec 05 09:57:18 np0005546420.localdomain podman[294937]: 2025-12-05 09:57:18.281434007 +0000 UTC m=+0.175887406 container attach 5710788831b51044dabe5f2e8fd85ddc5edd35c3d5a6eef60a7ecf211d28348c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_allen, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 05 09:57:18 np0005546420.localdomain musing_allen[294952]: 167 167
Dec 05 09:57:18 np0005546420.localdomain systemd[1]: libpod-5710788831b51044dabe5f2e8fd85ddc5edd35c3d5a6eef60a7ecf211d28348c.scope: Deactivated successfully.
Dec 05 09:57:18 np0005546420.localdomain podman[294937]: 2025-12-05 09:57:18.286399159 +0000 UTC m=+0.180852558 container died 5710788831b51044dabe5f2e8fd85ddc5edd35c3d5a6eef60a7ecf211d28348c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_allen, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7)
Dec 05 09:57:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-60899efdda96d1abd678ddcd0de964072cffbfb761bdae7f88c8e4bee9aae163-merged.mount: Deactivated successfully.
Dec 05 09:57:18 np0005546420.localdomain podman[294957]: 2025-12-05 09:57:18.391618142 +0000 UTC m=+0.096008991 container remove 5710788831b51044dabe5f2e8fd85ddc5edd35c3d5a6eef60a7ecf211d28348c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_allen, name=rhceph, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, ceph=True, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True)
Dec 05 09:57:18 np0005546420.localdomain systemd[1]: libpod-conmon-5710788831b51044dabe5f2e8fd85ddc5edd35c3d5a6eef60a7ecf211d28348c.scope: Deactivated successfully.
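
This sub-second create/init/start/attach/died/remove cycle for musing_allen is cephadm shelling out to podman with a throwaway container whose only console output is "167 167". That matches cephadm's uid/gid probe: it runs the Ceph image once to learn the numeric ceph user and group inside it (167:167 in Red Hat Ceph Storage images) so host-side directories can be chowned to match. A sketch of such a probe; the stat'ed path is an assumption about what the container actually ran:

    import subprocess

    IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"

    # Run the image once and print the owner uid/gid of a path the ceph
    # user owns inside it; --rm mirrors the remove event logged above.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout
    uid, gid = map(int, out.split())
    print(uid, gid)   # expected: 167 167

The identical recursing_lumiere cycle one second later is the same probe repeated for the next deploy call.
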
Dec 05 09:57:18 np0005546420.localdomain sudo[294902]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:18 np0005546420.localdomain sudo[294974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:18 np0005546420.localdomain sudo[294974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:18 np0005546420.localdomain sudo[294974]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:18 np0005546420.localdomain sudo[294992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:18 np0005546420.localdomain sudo[294992]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:57:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:57:18 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:57:18 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:57:18 np0005546420.localdomain ceph-mon[288331]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 0 B/s wr, 17 op/s
Dec 05 09:57:18 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:18 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:18 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 09:57:18 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:18 np0005546420.localdomain podman[295011]: 2025-12-05 09:57:18.7619342 +0000 UTC m=+0.075562963 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:57:18 np0005546420.localdomain podman[295011]: 2025-12-05 09:57:18.775816827 +0000 UTC m=+0.089445560 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:57:18 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:57:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:57:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:57:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:57:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:57:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:57:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:57:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:57:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:57:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:57:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:57:18 np0005546420.localdomain podman[295009]: 2025-12-05 09:57:18.904301414 +0000 UTC m=+0.218241296 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 09:57:18 np0005546420.localdomain podman[295009]: 2025-12-05 09:57:18.917540811 +0000 UTC m=+0.231480783 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Dec 05 09:57:18 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:57:19 np0005546420.localdomain podman[295070]: 2025-12-05 09:57:19.157937318 +0000 UTC m=+0.088755849 container create 5dbaa14e5a7520c091b3e6ed3fc0728ad0e98cef9f9ba2e482aae04d3daa7919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_lumiere, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1763362218, io.buildah.version=1.41.4, name=rhceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:57:19 np0005546420.localdomain systemd[1]: Started libpod-conmon-5dbaa14e5a7520c091b3e6ed3fc0728ad0e98cef9f9ba2e482aae04d3daa7919.scope.
Dec 05 09:57:19 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:19 np0005546420.localdomain podman[295070]: 2025-12-05 09:57:19.123085426 +0000 UTC m=+0.053903977 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:19 np0005546420.localdomain podman[295070]: 2025-12-05 09:57:19.22961739 +0000 UTC m=+0.160435891 container init 5dbaa14e5a7520c091b3e6ed3fc0728ad0e98cef9f9ba2e482aae04d3daa7919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_lumiere, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, version=7, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.expose-services=)
Dec 05 09:57:19 np0005546420.localdomain systemd[1]: tmp-crun.zTuKw1.mount: Deactivated successfully.
Dec 05 09:57:19 np0005546420.localdomain podman[295070]: 2025-12-05 09:57:19.248010325 +0000 UTC m=+0.178828946 container start 5dbaa14e5a7520c091b3e6ed3fc0728ad0e98cef9f9ba2e482aae04d3daa7919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_lumiere, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, architecture=x86_64, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 05 09:57:19 np0005546420.localdomain recursing_lumiere[295086]: 167 167
Dec 05 09:57:19 np0005546420.localdomain systemd[1]: libpod-5dbaa14e5a7520c091b3e6ed3fc0728ad0e98cef9f9ba2e482aae04d3daa7919.scope: Deactivated successfully.
Dec 05 09:57:19 np0005546420.localdomain podman[295070]: 2025-12-05 09:57:19.251903105 +0000 UTC m=+0.182721626 container attach 5dbaa14e5a7520c091b3e6ed3fc0728ad0e98cef9f9ba2e482aae04d3daa7919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_lumiere, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main)
Dec 05 09:57:19 np0005546420.localdomain podman[295070]: 2025-12-05 09:57:19.257066163 +0000 UTC m=+0.187884694 container died 5dbaa14e5a7520c091b3e6ed3fc0728ad0e98cef9f9ba2e482aae04d3daa7919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_lumiere, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4)
Dec 05 09:57:19 np0005546420.localdomain podman[295091]: 2025-12-05 09:57:19.343650434 +0000 UTC m=+0.083036333 container remove 5dbaa14e5a7520c091b3e6ed3fc0728ad0e98cef9f9ba2e482aae04d3daa7919 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_lumiere, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1763362218, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=)
Dec 05 09:57:19 np0005546420.localdomain systemd[1]: libpod-conmon-5dbaa14e5a7520c091b3e6ed3fc0728ad0e98cef9f9ba2e482aae04d3daa7919.scope: Deactivated successfully.
Dec 05 09:57:19 np0005546420.localdomain sudo[294992]: pam_unix(sudo:session): session closed for user root
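The create/init/start/attach/died/remove sequence above is a one-shot helper container that cephadm launches against the rhceph image; its only output is '167 167', which matches the fixed ceph UID/GID in Red Hat's packaging, so it reads as a probe for the ownership to apply to ceph data directories. A minimal sketch of such a probe in Python, assuming (the log does not confirm it) that the container stats /var/lib/ceph inside the image:

    import subprocess

    # Hedged sketch: reproduce the uid/gid probe the one-shot container appears
    # to perform. The stat'd path is an assumption; only the "167 167" output
    # is taken from the log above.
    image = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"
    res = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = res.stdout.split()  # expected: "167" and "167" per the log
    print(uid, gid)

The identical sequence repeats below (exciting_satoshi, lucid_haibt, laughing_dirac, crazy_robinson), once per cephadm _orch deploy invocation.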
Dec 05 09:57:19 np0005546420.localdomain sudo[295115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:19 np0005546420.localdomain sudo[295115]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:19 np0005546420.localdomain sudo[295115]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:19 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.1 (monmap changed)...
Dec 05 09:57:19 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:57:19 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:19 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:19 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:19 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:19 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:57:19 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
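Each cmd={...} entry above is a mon command dispatched by the active mgr (mgr.np0005546419.zhsnqq): a JSON object whose "prefix" names the command and whose remaining keys are its arguments. As a sketch, the same "config generate-minimal-conf" call can be issued through the python-rados bindings; the conffile path and keyring are illustrative assumptions, not taken from this host:

    import json
    import rados

    # Hedged sketch: send the same mon command the mgr dispatches above.
    # Assumes a reachable cluster and a usable client keyring.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b""
    )
    print(out.decode())  # minimal ceph.conf (fsid, mon_host)
    cluster.shutdown()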
Dec 05 09:57:19 np0005546420.localdomain sudo[295133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:19 np0005546420.localdomain sudo[295133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:20 np0005546420.localdomain systemd[1]: tmp-crun.cD508k.mount: Deactivated successfully.
Dec 05 09:57:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-79ba3f276b170c701581e0c0993dff192a07d2c4501fa5960859dddf689dc23a-merged.mount: Deactivated successfully.
Dec 05 09:57:20 np0005546420.localdomain podman[295168]: 2025-12-05 09:57:20.275147765 +0000 UTC m=+0.078291246 container create af75cf8853148a6a31c11a44843689d032a1bd560335cd6008656efa99fc2817 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_satoshi, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, release=1763362218, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.)
Dec 05 09:57:20 np0005546420.localdomain systemd[1]: Started libpod-conmon-af75cf8853148a6a31c11a44843689d032a1bd560335cd6008656efa99fc2817.scope.
Dec 05 09:57:20 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:20 np0005546420.localdomain podman[295168]: 2025-12-05 09:57:20.344258008 +0000 UTC m=+0.147401489 container init af75cf8853148a6a31c11a44843689d032a1bd560335cd6008656efa99fc2817 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_satoshi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, release=1763362218, RELEASE=main, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:57:20 np0005546420.localdomain podman[295168]: 2025-12-05 09:57:20.245933667 +0000 UTC m=+0.049077158 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:20 np0005546420.localdomain podman[295168]: 2025-12-05 09:57:20.357801844 +0000 UTC m=+0.160945325 container start af75cf8853148a6a31c11a44843689d032a1bd560335cd6008656efa99fc2817 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_satoshi, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:57:20 np0005546420.localdomain podman[295168]: 2025-12-05 09:57:20.358300299 +0000 UTC m=+0.161443800 container attach af75cf8853148a6a31c11a44843689d032a1bd560335cd6008656efa99fc2817 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_satoshi, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph, ceph=True, distribution-scope=public, release=1763362218, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=)
Dec 05 09:57:20 np0005546420.localdomain exciting_satoshi[295184]: 167 167
Dec 05 09:57:20 np0005546420.localdomain systemd[1]: libpod-af75cf8853148a6a31c11a44843689d032a1bd560335cd6008656efa99fc2817.scope: Deactivated successfully.
Dec 05 09:57:20 np0005546420.localdomain podman[295168]: 2025-12-05 09:57:20.362456828 +0000 UTC m=+0.165600319 container died af75cf8853148a6a31c11a44843689d032a1bd560335cd6008656efa99fc2817 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_satoshi, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, release=1763362218, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 09:57:20 np0005546420.localdomain podman[295189]: 2025-12-05 09:57:20.456194827 +0000 UTC m=+0.083782815 container remove af75cf8853148a6a31c11a44843689d032a1bd560335cd6008656efa99fc2817 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_satoshi, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:57:20 np0005546420.localdomain systemd[1]: libpod-conmon-af75cf8853148a6a31c11a44843689d032a1bd560335cd6008656efa99fc2817.scope: Deactivated successfully.
Dec 05 09:57:20 np0005546420.localdomain sudo[295133]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:20 np0005546420.localdomain sudo[295213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:20 np0005546420.localdomain sudo[295213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:20 np0005546420.localdomain sudo[295213]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:20.852546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928640852586, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2655, "num_deletes": 259, "total_data_size": 11530999, "memory_usage": 12126656, "flush_reason": "Manual Compaction"}
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 05 09:57:20 np0005546420.localdomain sudo[295231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:20 np0005546420.localdomain sudo[295231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928640899767, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 6694734, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13441, "largest_seqno": 16091, "table_properties": {"data_size": 6684751, "index_size": 5976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 26113, "raw_average_key_size": 22, "raw_value_size": 6662694, "raw_average_value_size": 5723, "num_data_blocks": 248, "num_entries": 1164, "num_filter_entries": 1164, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928565, "oldest_key_time": 1764928565, "file_creation_time": 1764928640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c980799-7b55-4c4e-92d8-beaefbaee73e", "db_session_id": "4WA5JLFCDLFMTDS0OOZ2", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 47375 microseconds, and 13716 cpu microseconds.
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
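The rocksdb EVENT_LOG_v1 records here (flush_started above, table_file_creation and flush_finished just below) embed valid JSON after the marker, so the mon's flush and compaction statistics can be recovered directly from the journal. A small extraction sketch, assuming journal text is piped in on stdin:

    import json
    import re
    import sys

    # Hedged sketch: pull rocksdb EVENT_LOG_v1 payloads out of journal text,
    # e.g.  journalctl -u <ceph-mon unit> | python3 rocksdb_events.py
    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")
    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "flush_started":
            print(f"job {ev['job']}: flushing {ev['total_data_size']} bytes "
                  f"({ev['num_entries']} entries), reason={ev['flush_reason']}")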
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.4 (monmap changed)...
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:20.899917) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 6694734 bytes OK
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:20.900017) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:20.902013) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:20.902039) EVENT_LOG_v1 {"time_micros": 1764928640902032, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:20.902064) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 11517975, prev total WAL file size 11534802, number of live WAL files 2.
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:20.904665) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303139' seq:72057594037927935, type:22 .. '6B760031323734' seq:0, type:0; will stop at (end)
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(6537KB)], [18(11MB)]
Dec 05 09:57:20 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928640904703, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18279505, "oldest_snapshot_seqno": -1}
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10142 keys, 17540419 bytes, temperature: kUnknown
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928641063473, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 17540419, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17481819, "index_size": 32131, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 270894, "raw_average_key_size": 26, "raw_value_size": 17307627, "raw_average_value_size": 1706, "num_data_blocks": 1221, "num_entries": 10142, "num_filter_entries": 10142, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928477, "oldest_key_time": 0, "file_creation_time": 1764928640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c980799-7b55-4c4e-92d8-beaefbaee73e", "db_session_id": "4WA5JLFCDLFMTDS0OOZ2", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:21.064493) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 17540419 bytes
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:21.066558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.6 rd, 109.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.4, 11.0 +0.0 blob) out(16.7 +0.0 blob), read-write-amplify(5.4) write-amplify(2.6) OK, records in: 10604, records dropped: 462 output_compression: NoCompression
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:21.066589) EVENT_LOG_v1 {"time_micros": 1764928641066576, "job": 8, "event": "compaction_finished", "compaction_time_micros": 159575, "compaction_time_cpu_micros": 45165, "output_level": 6, "num_output_files": 1, "total_output_size": 17540419, "num_input_records": 10604, "num_output_records": 10142, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928641067835, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928641070181, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:20.904575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:21.070306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:21.070314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:21.070317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:21.070320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:21.070325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
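The JOB 8 summary at 09:57:21.066558 is internally consistent: L0 input is table #20 at 6694734 bytes (6.4 MB), L6 input is 18279505 - 6694734 = 11584771 bytes (11.0 MB), and the output table #21 is 17540419 bytes (16.7 MB). Re-deriving the reported figures from those sizes:

    # Re-derive the JOB 8 compaction figures reported above.
    l0_in, total_in, out = 6_694_734, 18_279_505, 17_540_419
    secs = 159_575 / 1e6                       # compaction_time_micros
    print(round(out / l0_in, 1))               # write-amplify      -> 2.6
    print(round((total_in + out) / l0_in, 1))  # read-write-amplify -> 5.4
    print(round(total_in / secs / 1e6, 1))     # rd MB/sec          -> 114.6
    print(round(out / secs / 1e6, 1))          # wr MB/sec          -> 109.9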
Dec 05 09:57:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ae3e9ce7a66102744d8f385ec564284b64c12aa432bb8b7b2d3b13c74814b191-merged.mount: Deactivated successfully.
Dec 05 09:57:21 np0005546420.localdomain podman[295267]: 2025-12-05 09:57:21.384573413 +0000 UTC m=+0.081376182 container create 610504340c40290f424a7623c199f8282a27bc452cacedbe9c950e04a65eb0c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_haibt, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:57:21 np0005546420.localdomain systemd[1]: Started libpod-conmon-610504340c40290f424a7623c199f8282a27bc452cacedbe9c950e04a65eb0c1.scope.
Dec 05 09:57:21 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:21 np0005546420.localdomain podman[295267]: 2025-12-05 09:57:21.351658731 +0000 UTC m=+0.048461500 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:21 np0005546420.localdomain podman[295267]: 2025-12-05 09:57:21.459427482 +0000 UTC m=+0.156230251 container init 610504340c40290f424a7623c199f8282a27bc452cacedbe9c950e04a65eb0c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_haibt, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, release=1763362218, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_BRANCH=main)
Dec 05 09:57:21 np0005546420.localdomain podman[295267]: 2025-12-05 09:57:21.471468283 +0000 UTC m=+0.168271052 container start 610504340c40290f424a7623c199f8282a27bc452cacedbe9c950e04a65eb0c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_haibt, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, release=1763362218, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public)
Dec 05 09:57:21 np0005546420.localdomain podman[295267]: 2025-12-05 09:57:21.471770222 +0000 UTC m=+0.168573001 container attach 610504340c40290f424a7623c199f8282a27bc452cacedbe9c950e04a65eb0c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_haibt, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=1763362218, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:57:21 np0005546420.localdomain lucid_haibt[295282]: 167 167
Dec 05 09:57:21 np0005546420.localdomain systemd[1]: libpod-610504340c40290f424a7623c199f8282a27bc452cacedbe9c950e04a65eb0c1.scope: Deactivated successfully.
Dec 05 09:57:21 np0005546420.localdomain podman[295267]: 2025-12-05 09:57:21.476250629 +0000 UTC m=+0.173053448 container died 610504340c40290f424a7623c199f8282a27bc452cacedbe9c950e04a65eb0c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_haibt, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, distribution-scope=public, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218)
Dec 05 09:57:21 np0005546420.localdomain podman[295287]: 2025-12-05 09:57:21.581598586 +0000 UTC m=+0.090232773 container remove 610504340c40290f424a7623c199f8282a27bc452cacedbe9c950e04a65eb0c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_haibt, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, version=7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:57:21 np0005546420.localdomain systemd[1]: libpod-conmon-610504340c40290f424a7623c199f8282a27bc452cacedbe9c950e04a65eb0c1.scope: Deactivated successfully.
Dec 05 09:57:21 np0005546420.localdomain sudo[295231]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:21 np0005546420.localdomain sudo[295303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:21 np0005546420.localdomain sudo[295303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:21 np0005546420.localdomain sudo[295303]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:21 np0005546420.localdomain sudo[295321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:21 np0005546420.localdomain sudo[295321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:21 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-bbfedde9ddee4561eebd7f191235a7e9d9f377df1db453148b7404b1e91862d2-merged.mount: Deactivated successfully.
Dec 05 09:57:22 np0005546420.localdomain podman[295354]: 2025-12-05 09:57:22.322431808 +0000 UTC m=+0.080836373 container create e0a080b029976494d33ecb962ed3baf0ff2095ab0fb751950870023a6e502db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_dirac, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, CEPH_POINT_RELEASE=, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container)
Dec 05 09:57:22 np0005546420.localdomain systemd[1]: Started libpod-conmon-e0a080b029976494d33ecb962ed3baf0ff2095ab0fb751950870023a6e502db7.scope.
Dec 05 09:57:22 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:22 np0005546420.localdomain podman[295354]: 2025-12-05 09:57:22.380866014 +0000 UTC m=+0.139270579 container init e0a080b029976494d33ecb962ed3baf0ff2095ab0fb751950870023a6e502db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_dirac, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git)
Dec 05 09:57:22 np0005546420.localdomain podman[295354]: 2025-12-05 09:57:22.388578001 +0000 UTC m=+0.146982536 container start e0a080b029976494d33ecb962ed3baf0ff2095ab0fb751950870023a6e502db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_dirac, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:57:22 np0005546420.localdomain podman[295354]: 2025-12-05 09:57:22.388803818 +0000 UTC m=+0.147208443 container attach e0a080b029976494d33ecb962ed3baf0ff2095ab0fb751950870023a6e502db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_dirac, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_CLEAN=True, version=7)
Dec 05 09:57:22 np0005546420.localdomain laughing_dirac[295369]: 167 167
Dec 05 09:57:22 np0005546420.localdomain systemd[1]: libpod-e0a080b029976494d33ecb962ed3baf0ff2095ab0fb751950870023a6e502db7.scope: Deactivated successfully.
Dec 05 09:57:22 np0005546420.localdomain podman[295354]: 2025-12-05 09:57:22.392253994 +0000 UTC m=+0.150658529 container died e0a080b029976494d33ecb962ed3baf0ff2095ab0fb751950870023a6e502db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_dirac, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.buildah.version=1.41.4, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, version=7, release=1763362218, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph)
Dec 05 09:57:22 np0005546420.localdomain podman[295354]: 2025-12-05 09:57:22.29543498 +0000 UTC m=+0.053839595 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:22 np0005546420.localdomain podman[295374]: 2025-12-05 09:57:22.486803419 +0000 UTC m=+0.079629897 container remove e0a080b029976494d33ecb962ed3baf0ff2095ab0fb751950870023a6e502db7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_dirac, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph)
Dec 05 09:57:22 np0005546420.localdomain systemd[1]: libpod-conmon-e0a080b029976494d33ecb962ed3baf0ff2095ab0fb751950870023a6e502db7.scope: Deactivated successfully.
Dec 05 09:57:22 np0005546420.localdomain sudo[295321]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:22 np0005546420.localdomain sudo[295391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:22 np0005546420.localdomain sudo[295391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:57:22 np0005546420.localdomain sudo[295391]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:22 np0005546420.localdomain sudo[295410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:22 np0005546420.localdomain sudo[295410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:22 np0005546420.localdomain podman[295409]: 2025-12-05 09:57:22.796302529 +0000 UTC m=+0.088744808 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 09:57:22 np0005546420.localdomain podman[295409]: 2025-12-05 09:57:22.867659041 +0000 UTC m=+0.160101270 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller)
Dec 05 09:57:22 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
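The ovn_controller lines above show podman's systemd-integrated healthcheck flow: a transient unit starts /usr/bin/podman healthcheck run <id>, the check defined in the container's config_data label ('test': '/openstack/healthcheck') executes inside the container, health_status=healthy is recorded, and the unit deactivates once the exec dies. The same check can be triggered by hand (container ID copied from the log; this only works on the host that owns the container):

    import subprocess

    # Hedged sketch: invoke the same healthcheck the transient systemd unit runs.
    cid = "d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0"
    res = subprocess.run(["podman", "healthcheck", "run", cid],
                         capture_output=True, text=True)
    # exit code 0 -> healthy; non-zero -> unhealthy (diagnostics on stdout/stderr)
    print("healthy" if res.returncode == 0 else res.stdout + res.stderr)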
Dec 05 09:57:22 np0005546420.localdomain ceph-mon[288331]: from='client.26900 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:57:22 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:57:22 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:57:22 np0005546420.localdomain ceph-mon[288331]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 09:57:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:22 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:23 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:57:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a12851b5941b68fd6adad72c828b103f6c59df084d3049f47fda4ad897515f9f-merged.mount: Deactivated successfully.
Dec 05 09:57:23 np0005546420.localdomain podman[295467]: 2025-12-05 09:57:23.267316132 +0000 UTC m=+0.106130872 container create e2cce5e7a36764b38d9ca34dba457eb63e8a98990cc0af0f43a0fc4370293a29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_robinson, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64)
Dec 05 09:57:23 np0005546420.localdomain systemd[1]: Started libpod-conmon-e2cce5e7a36764b38d9ca34dba457eb63e8a98990cc0af0f43a0fc4370293a29.scope.
Dec 05 09:57:23 np0005546420.localdomain podman[295467]: 2025-12-05 09:57:23.233584875 +0000 UTC m=+0.072399625 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:23 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:23 np0005546420.localdomain podman[295467]: 2025-12-05 09:57:23.374560367 +0000 UTC m=+0.213375117 container init e2cce5e7a36764b38d9ca34dba457eb63e8a98990cc0af0f43a0fc4370293a29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_robinson, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, release=1763362218, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph)
Dec 05 09:57:23 np0005546420.localdomain podman[295467]: 2025-12-05 09:57:23.38606949 +0000 UTC m=+0.224884230 container start e2cce5e7a36764b38d9ca34dba457eb63e8a98990cc0af0f43a0fc4370293a29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_robinson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.buildah.version=1.41.4, GIT_CLEAN=True, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=7)
Dec 05 09:57:23 np0005546420.localdomain podman[295467]: 2025-12-05 09:57:23.386422561 +0000 UTC m=+0.225237361 container attach e2cce5e7a36764b38d9ca34dba457eb63e8a98990cc0af0f43a0fc4370293a29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_robinson, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, ceph=True, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:57:23 np0005546420.localdomain crazy_robinson[295483]: 167 167
Dec 05 09:57:23 np0005546420.localdomain systemd[1]: libpod-e2cce5e7a36764b38d9ca34dba457eb63e8a98990cc0af0f43a0fc4370293a29.scope: Deactivated successfully.
Dec 05 09:57:23 np0005546420.localdomain podman[295467]: 2025-12-05 09:57:23.392395894 +0000 UTC m=+0.231210844 container died e2cce5e7a36764b38d9ca34dba457eb63e8a98990cc0af0f43a0fc4370293a29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_robinson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:57:23 np0005546420.localdomain podman[295488]: 2025-12-05 09:57:23.480604885 +0000 UTC m=+0.077134061 container remove e2cce5e7a36764b38d9ca34dba457eb63e8a98990cc0af0f43a0fc4370293a29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_robinson, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, release=1763362218, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.41.4, version=7, distribution-scope=public, GIT_CLEAN=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:57:23 np0005546420.localdomain systemd[1]: libpod-conmon-e2cce5e7a36764b38d9ca34dba457eb63e8a98990cc0af0f43a0fc4370293a29.scope: Deactivated successfully.
Dec 05 09:57:23 np0005546420.localdomain sudo[295410]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:23 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546420 (monmap changed)...
Dec 05 09:57:23 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 05 09:57:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:23 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-8dd294cc1cc657b0bdfdc47ba3f213b78f4e2814b2253a8f9d1bf00d95e2bc98-merged.mount: Deactivated successfully.
Dec 05 09:57:24 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:57:24 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:57:24 np0005546420.localdomain ceph-mon[288331]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:57:24 np0005546420.localdomain ceph-mon[288331]: from='client.44269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546416", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:57:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:57:24 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:25 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4fb1e0 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 05 09:57:25 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@3(peon) e10  my rank is now 2 (was 3)
Dec 05 09:57:25 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Dec 05 09:57:25 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Dec 05 09:57:25 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4fb080 mon_map magic: 0 from mon.2 v2:172.18.0.107:3300/0
Dec 05 09:57:25 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(probing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546418"} v 0)
Dec 05 09:57:25 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:57:25 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(probing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546419"} v 0)
Dec 05 09:57:25 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:57:25 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(probing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 05 09:57:25 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:57:25 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(probing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 09:57:25 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:57:25 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(probing) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:57:27 np0005546420.localdomain ceph-mon[288331]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:57:27 np0005546420.localdomain ceph-mon[288331]: paxos.2).electionLogic(44) init, last seen epoch 44
Dec 05 09:57:27 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:57:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:57:29 np0005546420.localdomain podman[295504]: 2025-12-05 09:57:29.512684805 +0000 UTC m=+0.083566159 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 09:57:29 np0005546420.localdomain podman[295504]: 2025-12-05 09:57:29.551406785 +0000 UTC m=+0.122288139 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 05 09:57:29 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:57:32 np0005546420.localdomain ceph-mds[283770]: mds.beacon.mds.np0005546420.eqhasr missed beacon ack from the monitors
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: paxos.2).electionLogic(45) init, last seen epoch 45, mid-election, bumping
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.757761) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652757850, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 519, "num_deletes": 251, "total_data_size": 532969, "memory_usage": 543816, "flush_reason": "Manual Compaction"}
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: from='client.26991 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005546416"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: Remove daemons mon.np0005546416
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "quorum_status"} : dispatch
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: Safe to remove mon.np0005546416: new quorum should be ['np0005546418', 'np0005546421', 'np0005546420', 'np0005546419'] (from ['np0005546418', 'np0005546421', 'np0005546420', 'np0005546419'])
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: Removing monitor np0005546416 from monmap...
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon rm", "name": "np0005546416"} : dispatch
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: Removing daemon mon.np0005546416 from np0005546416.localdomain -- ports []
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546421 calling monitor election
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546419 calling monitor election
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420 calling monitor election
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 is new leader, mons np0005546418,np0005546421,np0005546419 in quorum (ranks 0,1,3)
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: overall HEALTH_OK
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 calling monitor election
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546421 calling monitor election
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546418 is new leader, mons np0005546418,np0005546421,np0005546420,np0005546419 in quorum (ranks 0,1,2,3)
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: monmap epoch 10
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: last_changed 2025-12-05T09:57:25.594278+0000
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: min_mon_release 18 (reef)
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: election_strategy: 1
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: 2: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005546420
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: 3: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: osdmap e88: 6 total, 6 up, 6 in
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mgrmap e28: np0005546419.zhsnqq(active, since 22s), standbys: np0005546420.aoeylc, np0005546421.sukfea, np0005546416.kmqcnq, np0005546418.garyvl
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: overall HEALTH_OK
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652763046, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 321534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16096, "largest_seqno": 16610, "table_properties": {"data_size": 318576, "index_size": 877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8641, "raw_average_key_size": 21, "raw_value_size": 312070, "raw_average_value_size": 780, "num_data_blocks": 35, "num_entries": 400, "num_filter_entries": 400, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928640, "oldest_key_time": 1764928640, "file_creation_time": 1764928652, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c980799-7b55-4c4e-92d8-beaefbaee73e", "db_session_id": "4WA5JLFCDLFMTDS0OOZ2", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 5329 microseconds, and 2321 cpu microseconds.
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.763096) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 321534 bytes OK
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.763121) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.765056) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.765077) EVENT_LOG_v1 {"time_micros": 1764928652765071, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.765104) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 529709, prev total WAL file size 529709, number of live WAL files 2.
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.765725) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(313KB)], [21(16MB)]
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652765782, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 17861953, "oldest_snapshot_seqno": -1}
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10017 keys, 15762334 bytes, temperature: kUnknown
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652857105, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 15762334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15706375, "index_size": 29810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 269226, "raw_average_key_size": 26, "raw_value_size": 15536168, "raw_average_value_size": 1550, "num_data_blocks": 1119, "num_entries": 10017, "num_filter_entries": 10017, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928477, "oldest_key_time": 0, "file_creation_time": 1764928652, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c980799-7b55-4c4e-92d8-beaefbaee73e", "db_session_id": "4WA5JLFCDLFMTDS0OOZ2", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.857500) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 15762334 bytes
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.859502) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.3 rd, 172.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 16.7 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(104.6) write-amplify(49.0) OK, records in: 10542, records dropped: 525 output_compression: NoCompression
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.859536) EVENT_LOG_v1 {"time_micros": 1764928652859521, "job": 10, "event": "compaction_finished", "compaction_time_micros": 91461, "compaction_time_cpu_micros": 45969, "output_level": 6, "num_output_files": 1, "total_output_size": 15762334, "num_input_records": 10542, "num_output_records": 10017, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652860172, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652864606, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.765632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.864817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.864825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.864829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.864839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: rocksdb: (Original Log Time 2025/12/05-09:57:32.864842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:32 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:33 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:57:33 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:33 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.5 (monmap changed)...
Dec 05 09:57:33 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:33 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 09:57:33 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:33 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 09:57:33 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:57:33 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:57:34 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:57:34 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:57:34 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:57:34 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:34 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:34 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:35 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:57:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: from='client.26908 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546416.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: Removed label mon from host np0005546416.localdomain
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:36 np0005546420.localdomain podman[295524]: 2025-12-05 09:57:36.52546569 +0000 UTC m=+0.102025426 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 05 09:57:36 np0005546420.localdomain podman[295524]: 2025-12-05 09:57:36.561047943 +0000 UTC m=+0.137607679 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 09:57:36 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:57:36 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546421 (monmap changed)...
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: from='client.26916 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546416.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: Removed label mgr from host np0005546416.localdomain
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:37 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: from='client.44279 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546416.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:57:38 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:57:38 np0005546420.localdomain sudo[295542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:57:38 np0005546420.localdomain sudo[295542]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:38 np0005546420.localdomain sudo[295542]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:38 np0005546420.localdomain sudo[295560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:57:38 np0005546420.localdomain sudo[295560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:38 np0005546420.localdomain sudo[295560]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 05 09:57:39 np0005546420.localdomain sudo[295578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:39 np0005546420.localdomain sudo[295578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295578]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:39 np0005546420.localdomain sudo[295596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:39 np0005546420.localdomain sudo[295596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295596]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:39 np0005546420.localdomain sudo[295614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:39 np0005546420.localdomain sudo[295614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295614]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:39 np0005546420.localdomain sudo[295648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:39 np0005546420.localdomain sudo[295648]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295648]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:39 np0005546420.localdomain sudo[295666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:57:39 np0005546420.localdomain sudo[295666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295666]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:39 np0005546420.localdomain sudo[295684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:57:39 np0005546420.localdomain sudo[295684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295684]: pam_unix(sudo:session): session closed for user root
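
The sudo sequence from 09:57:38-09:57:39 is cephadm installing that config: stage the file under a per-fsid temp tree, fix ownership and mode, then mv it into place so readers never see a half-written /etc/ceph/ceph.conf. A minimal sketch of the same pattern, using the fsid and paths from these log lines (illustrative only, not cephadm's actual code):

    # write-then-rename, as in the COMMAND= entries above
    fsid=79feddb1-4bfc-557f-83b9-0d57c9f66c1b
    tmp=/tmp/cephadm-$fsid/etc/ceph
    sudo mkdir -p /etc/ceph "$tmp"
    sudo touch "$tmp/ceph.conf.new"               # new contents get written here
    sudo chown -R 0:0 "$tmp/ceph.conf.new"
    sudo chmod 644 "$tmp/ceph.conf.new"
    sudo mv "$tmp/ceph.conf.new" /etc/ceph/ceph.conf  # atomic only on the same filesystem

The mv at the end is the commit step; everything before it is invisible to readers of /etc/ceph/ceph.conf.
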
Dec 05 09:57:39 np0005546420.localdomain sudo[295702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:57:39 np0005546420.localdomain sudo[295702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295702]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:57:39 np0005546420.localdomain sudo[295720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:57:39 np0005546420.localdomain sudo[295720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295720]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:39 np0005546420.localdomain podman[295722]: 2025-12-05 09:57:39.732042095 +0000 UTC m=+0.090909836 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 09:57:39 np0005546420.localdomain podman[295722]: 2025-12-05 09:57:39.771461338 +0000 UTC m=+0.130329119 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:57:39 np0005546420.localdomain sudo[295751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:39 np0005546420.localdomain sudo[295751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295751]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: Removed label _admin from host np0005546416.localdomain
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: Removing np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: Removing np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: Removing np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
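
The Removing/Updating block above is the fallout of "Removed label _admin from host np0005546416.localdomain" at 09:57:39: cephadm withdraws ceph.conf and the client.admin keyring from the de-labelled host while refreshing the remaining four. To list which hosts still hold the label, a sketch (the --label filter exists in current cephadm releases):

    # Hosts that still receive /etc/ceph/ceph.conf and the admin keyring
    ceph orch host ls --label _admin
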
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:39 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:39 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
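
Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair is a systemd-driven one-shot healthcheck, like the node_exporter run at 09:57:39 above. The same check can be run by hand; note the inspect field name is an assumption that varies across podman versions (older releases expose .State.Healthcheck):

    # Run the container's configured healthcheck once
    podman healthcheck run node_exporter
    # Read back the recorded status
    podman inspect --format '{{.State.Health.Status}}' node_exporter
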
Dec 05 09:57:39 np0005546420.localdomain sudo[295779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:39 np0005546420.localdomain sudo[295779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295779]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:39 np0005546420.localdomain sudo[295797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:39 np0005546420.localdomain sudo[295797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:39 np0005546420.localdomain sudo[295797]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:40 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:57:40 np0005546420.localdomain sudo[295831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:40 np0005546420.localdomain sudo[295831]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:40 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:57:40 np0005546420.localdomain sudo[295831]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:40 np0005546420.localdomain sudo[295849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:57:40 np0005546420.localdomain sudo[295849]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:40 np0005546420.localdomain sudo[295849]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:40 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:40 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:40 np0005546420.localdomain sudo[295867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:40 np0005546420.localdomain sudo[295867]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:40 np0005546420.localdomain sudo[295867]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:40 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:57:40 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:57:40 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:57:40 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:57:40 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
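
The pgmap digests (v18 onward here) are the peon relaying the mgr's periodic PG and capacity summary. The same figures are available on demand:

    # Same PG and capacity summary as the pgmap lines
    ceph pg stat
    ceph df
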
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:41 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:42 np0005546420.localdomain ceph-mon[288331]: Removing daemon mgr.np0005546416.kmqcnq from np0005546416.localdomain -- ports [8765]
Dec 05 09:57:42 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005546416.kmqcnq"} v 0)
Dec 05 09:57:42 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "mgr.np0005546416.kmqcnq"} : dispatch
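
Removing daemon mgr.np0005546416.kmqcnq also retires its cephx identity, which is the auth rm dispatched above. The equivalent manual call:

    # Drop the removed mgr daemon's key, as dispatched by the mgr above
    ceph auth rm mgr.np0005546416.kmqcnq
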
Dec 05 09:57:42 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 05 09:57:42 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 05 09:57:42 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:57:42 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
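
While working its osd_remove_queue (the config-key set at 09:57:40), the mgr repeatedly asks for OSDs left in the destroyed state. The CLI form of that exact query:

    # JSON list of destroyed OSDs, matching the mon_command above
    ceph osd tree destroyed -f json
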
Dec 05 09:57:42 np0005546420.localdomain sudo[295885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:57:42 np0005546420.localdomain sudo[295885]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:57:42 np0005546420.localdomain sudo[295885]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:43 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
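
The _set_new_cache_sizes lines are the monitor's periodic cache autotuner splitting a memory budget across its inc/full/kv caches. Assuming defaults, the budget it works from is mon_memory_target, which can be read back with:

    # Memory budget behind the cache_size figures above (assumed linkage)
    ceph config get mon.np0005546420 mon_memory_target
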
Dec 05 09:57:43 np0005546420.localdomain podman[295903]: 2025-12-05 09:57:43.077007006 +0000 UTC m=+0.099471079 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 09:57:43 np0005546420.localdomain podman[295903]: 2025-12-05 09:57:43.093211285 +0000 UTC m=+0.115675408 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:57:43 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:57:43 np0005546420.localdomain ceph-mon[288331]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "mgr.np0005546416.kmqcnq"} : dispatch
Dec 05 09:57:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "mgr.np0005546416.kmqcnq"} : dispatch
Dec 05 09:57:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005546416.kmqcnq"}]': finished
Dec 05 09:57:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:43 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: Removing key for mgr.np0005546416.kmqcnq
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:57:44 np0005546420.localdomain sudo[295924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:57:44 np0005546420.localdomain sudo[295924]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:44 np0005546420.localdomain sudo[295924]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
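
Every managed host gets a crash-agent key scoped by the crash profile; the get-or-create above shows the caps verbatim. As a direct CLI call:

    # Per-host crash key with the same caps as in the audit line
    ceph auth get-or-create client.crash.np0005546416.localdomain \
        mon 'profile crash' mgr 'profile crash'
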
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:44 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546416 (monmap changed)...
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546416 on np0005546416.localdomain
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:45 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546418 (monmap changed)...
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:46 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:57:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:57:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:57:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 09:57:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:57:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18211 "" "Go-http-client/1.1"
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:47 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:57:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:57:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:57:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:57:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:57:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:57:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:57:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:57:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:57:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
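
These openstack_network_exporter errors mean no OVS/OVN control sockets were found where the container looks (it mounts /var/run/openvswitch and /var/lib/openvswitch/ovn, per its config at 09:57:49 below); on a node not running ovn-northd or a userspace datapath these probes fail by design. A quick host-side check, assuming the default socket locations:

    # Control sockets the exporter probes; their absence explains the errors above
    ls /var/run/openvswitch/*.ctl /var/lib/openvswitch/ovn/*.ctl 2>/dev/null
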
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:48 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:57:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:57:49 np0005546420.localdomain podman[295943]: 2025-12-05 09:57:49.515847586 +0000 UTC m=+0.087020396 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:57:49 np0005546420.localdomain podman[295943]: 2025-12-05 09:57:49.554728622 +0000 UTC m=+0.125901442 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:57:49 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:57:49 np0005546420.localdomain podman[295942]: 2025-12-05 09:57:49.573215261 +0000 UTC m=+0.146766944 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:57:49 np0005546420.localdomain podman[295942]: 2025-12-05 09:57:49.593282918 +0000 UTC m=+0.166834541 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350)
Dec 05 09:57:49 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:49 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.0 (monmap changed)...
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: from='client.26928 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005546416.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: Added label _no_schedule to host np0005546416.localdomain
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546416.localdomain
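
The drain dispatched by client.26928 stamps _no_schedule (plus the keyring-drain label) on np0005546416 so cephadm evacuates its remaining daemons. The same operation and a progress check from an admin shell:

    # What client.admin dispatched above
    ceph orch host drain np0005546416.localdomain
    # Daemons still placed on the draining host
    ceph orch ps np0005546416.localdomain
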
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 09:57:50 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:51 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.3 (monmap changed)...
Dec 05 09:57:51 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:57:51 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:51 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:51 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:57:51 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain"} v 0)
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain"} : dispatch
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: from='client.26936 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005546416.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
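
The follow-up orch host ls with a host_pattern is the admin verifying the drained host's entry. A CLI rendering of the same JSON query; the flag spelling is an assumption and may vary by release:

    # Verify the host entry (flag name assumed from the JSON args above)
    ceph orch host ls --host-pattern np0005546416.localdomain --format json
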
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain"} : dispatch
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain"} : dispatch
Dec 05 09:57:52 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain"}]': finished
Dec 05 09:57:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:57:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:57:53 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:57:53 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:53 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:53 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:57:53 np0005546420.localdomain podman[295987]: 2025-12-05 09:57:53.515714086 +0000 UTC m=+0.084139628 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 05 09:57:53 np0005546420.localdomain podman[295987]: 2025-12-05 09:57:53.58377616 +0000 UTC m=+0.152201722 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:57:53 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: from='client.44291 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005546416.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: Removed host np0005546416.localdomain
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:54 np0005546420.localdomain sshd[296013]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:57:54 np0005546420.localdomain sshd[296013]: Accepted publickey for tripleo-admin from 192.168.122.11 port 44154 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:57:54 np0005546420.localdomain systemd-logind[762]: New session 67 of user tripleo-admin.
Dec 05 09:57:54 np0005546420.localdomain systemd[1]: Created slice User Slice of UID 1003.
Dec 05 09:57:54 np0005546420.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:57:54 np0005546420.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:54 np0005546420.localdomain systemd[1]: Starting User Manager for UID 1003...
Dec 05 09:57:54 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 05 09:57:55 np0005546420.localdomain sudo[296019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:55 np0005546420.localdomain sudo[296019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:55 np0005546420.localdomain sudo[296019]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546419 (monmap changed)...
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:57:55 np0005546420.localdomain sudo[296048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:55 np0005546420.localdomain sudo[296048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Queued start job for default target Main User Target.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Created slice User Application Slice.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Reached target Paths.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Reached target Timers.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Starting D-Bus User Message Bus Socket...
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Starting Create User's Volatile Files and Directories...
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Listening on D-Bus User Message Bus Socket.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Reached target Sockets.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Finished Create User's Volatile Files and Directories.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Reached target Basic System.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Reached target Main User Target.
Dec 05 09:57:55 np0005546420.localdomain systemd[296017]: Startup finished in 157ms.
Dec 05 09:57:55 np0005546420.localdomain systemd[1]: Started User Manager for UID 1003.
Dec 05 09:57:55 np0005546420.localdomain systemd[1]: Started Session 67 of User tripleo-admin.
Dec 05 09:57:55 np0005546420.localdomain sshd[296013]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 05 09:57:55 np0005546420.localdomain podman[296159]: 
Dec 05 09:57:55 np0005546420.localdomain podman[296159]: 2025-12-05 09:57:55.588427444 +0000 UTC m=+0.083531419 container create 4fe68376f6b28956924a13bcfd28ca291e994187748c1610bc33b2a8bccd8860 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_cray, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 05 09:57:55 np0005546420.localdomain systemd[1]: Started libpod-conmon-4fe68376f6b28956924a13bcfd28ca291e994187748c1610bc33b2a8bccd8860.scope.
Dec 05 09:57:55 np0005546420.localdomain podman[296159]: 2025-12-05 09:57:55.555634896 +0000 UTC m=+0.050738911 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:55 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:55 np0005546420.localdomain podman[296159]: 2025-12-05 09:57:55.674198342 +0000 UTC m=+0.169302317 container init 4fe68376f6b28956924a13bcfd28ca291e994187748c1610bc33b2a8bccd8860 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_cray, CEPH_POINT_RELEASE=, release=1763362218, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, RELEASE=main, GIT_CLEAN=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:57:55 np0005546420.localdomain podman[296159]: 2025-12-05 09:57:55.691895296 +0000 UTC m=+0.186999281 container start 4fe68376f6b28956924a13bcfd28ca291e994187748c1610bc33b2a8bccd8860 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_cray, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git)
Dec 05 09:57:55 np0005546420.localdomain podman[296159]: 2025-12-05 09:57:55.692263407 +0000 UTC m=+0.187367432 container attach 4fe68376f6b28956924a13bcfd28ca291e994187748c1610bc33b2a8bccd8860 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_cray, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph)
Dec 05 09:57:55 np0005546420.localdomain systemd[1]: libpod-4fe68376f6b28956924a13bcfd28ca291e994187748c1610bc33b2a8bccd8860.scope: Deactivated successfully.
Dec 05 09:57:55 np0005546420.localdomain recursing_cray[296209]: 167 167
Dec 05 09:57:55 np0005546420.localdomain podman[296159]: 2025-12-05 09:57:55.696039934 +0000 UTC m=+0.191143909 container died 4fe68376f6b28956924a13bcfd28ca291e994187748c1610bc33b2a8bccd8860 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_cray, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, architecture=x86_64, release=1763362218, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:57:55 np0005546420.localdomain sudo[296229]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmxigndjqzztrjondzsycmgktlvdnnuu ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764928675.2802587-61813-143922780666449/AnsiballZ_lineinfile.py
Dec 05 09:57:55 np0005546420.localdomain sudo[296229]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 05 09:57:55 np0005546420.localdomain podman[296233]: 2025-12-05 09:57:55.789825198 +0000 UTC m=+0.084583682 container remove 4fe68376f6b28956924a13bcfd28ca291e994187748c1610bc33b2a8bccd8860 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_cray, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main)
Dec 05 09:57:55 np0005546420.localdomain systemd[1]: libpod-conmon-4fe68376f6b28956924a13bcfd28ca291e994187748c1610bc33b2a8bccd8860.scope: Deactivated successfully.
Dec 05 09:57:55 np0005546420.localdomain sudo[296048]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:55 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:55 np0005546420.localdomain python3[296239]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line=    - ip_netmask: 172.18.0.104/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 09:57:55 np0005546420.localdomain systemd[1]: tmp-crun.0qVb4f.mount: Deactivated successfully.
Dec 05 09:57:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0c70e3871566a0110dac59c5cc127573e4d2046c1d0e6ec3370929bfbfb21c2e-merged.mount: Deactivated successfully.
Dec 05 09:57:55 np0005546420.localdomain sudo[296229]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:55 np0005546420.localdomain sudo[296251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:55 np0005546420.localdomain sudo[296251]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:55 np0005546420.localdomain sudo[296251]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:56 np0005546420.localdomain sudo[296272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:56 np0005546420.localdomain sudo[296272]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:56 np0005546420.localdomain podman[296395]: 
Dec 05 09:57:56 np0005546420.localdomain podman[296395]: 2025-12-05 09:57:56.493640681 +0000 UTC m=+0.077144493 container create 73afbcee35ba9284a381d1bbe478f81c364d258f43d9eb392b9a8b717a7a75e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_torvalds, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, version=7, RELEASE=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, release=1763362218)
Dec 05 09:57:56 np0005546420.localdomain systemd[1]: Started libpod-conmon-73afbcee35ba9284a381d1bbe478f81c364d258f43d9eb392b9a8b717a7a75e5.scope.
Dec 05 09:57:56 np0005546420.localdomain systemd[1]: tmp-crun.9IxKEs.mount: Deactivated successfully.
Dec 05 09:57:56 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:56 np0005546420.localdomain podman[296395]: 2025-12-05 09:57:56.461638977 +0000 UTC m=+0.045142809 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:56 np0005546420.localdomain podman[296395]: 2025-12-05 09:57:56.572644911 +0000 UTC m=+0.156148723 container init 73afbcee35ba9284a381d1bbe478f81c364d258f43d9eb392b9a8b717a7a75e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_torvalds, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, name=rhceph, GIT_CLEAN=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container)
Dec 05 09:57:56 np0005546420.localdomain podman[296395]: 2025-12-05 09:57:56.584922468 +0000 UTC m=+0.168426270 container start 73afbcee35ba9284a381d1bbe478f81c364d258f43d9eb392b9a8b717a7a75e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_torvalds, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, version=7, GIT_CLEAN=True, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:57:56 np0005546420.localdomain charming_torvalds[296442]: 167 167
Dec 05 09:57:56 np0005546420.localdomain podman[296395]: 2025-12-05 09:57:56.585144735 +0000 UTC m=+0.168648547 container attach 73afbcee35ba9284a381d1bbe478f81c364d258f43d9eb392b9a8b717a7a75e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_torvalds, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, build-date=2025-11-26T19:44:28Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, RELEASE=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64)
Dec 05 09:57:56 np0005546420.localdomain systemd[1]: libpod-73afbcee35ba9284a381d1bbe478f81c364d258f43d9eb392b9a8b717a7a75e5.scope: Deactivated successfully.
Dec 05 09:57:56 np0005546420.localdomain podman[296395]: 2025-12-05 09:57:56.588315262 +0000 UTC m=+0.171819084 container died 73afbcee35ba9284a381d1bbe478f81c364d258f43d9eb392b9a8b717a7a75e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_torvalds, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:57:56 np0005546420.localdomain sudo[296477]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mvmuemamjjackgzclojtyfccfvzeyznp ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764928676.1771405-61829-26601651507997/AnsiballZ_command.py
Dec 05 09:57:56 np0005546420.localdomain sudo[296477]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 05 09:57:56 np0005546420.localdomain podman[296450]: 2025-12-05 09:57:56.686614355 +0000 UTC m=+0.086839392 container remove 73afbcee35ba9284a381d1bbe478f81c364d258f43d9eb392b9a8b717a7a75e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_torvalds, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, version=7, distribution-scope=public, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:57:56 np0005546420.localdomain systemd[1]: libpod-conmon-73afbcee35ba9284a381d1bbe478f81c364d258f43d9eb392b9a8b717a7a75e5.scope: Deactivated successfully.
Dec 05 09:57:56 np0005546420.localdomain python3[296480]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.104/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:56 np0005546420.localdomain sudo[296477]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:56 np0005546420.localdomain sudo[296272]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:56 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45018360bf00544f155e0cbf85cda9643c46725c6009d91bf7f2978d30dc524c-merged.mount: Deactivated successfully.
Dec 05 09:57:56 np0005546420.localdomain sudo[296510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:56 np0005546420.localdomain sudo[296510]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:56 np0005546420.localdomain sudo[296510]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:57 np0005546420.localdomain sudo[296544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:57 np0005546420.localdomain sudo[296544]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.1 (monmap changed)...
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:57 np0005546420.localdomain sudo[296682]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htetbgabsxdceugjjnqeoumnowkcaspo ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764928677.0037065-61840-12246603654804/AnsiballZ_command.py
Dec 05 09:57:57 np0005546420.localdomain sudo[296682]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 05 09:57:57 np0005546420.localdomain podman[296689]: 
Dec 05 09:57:57 np0005546420.localdomain podman[296689]: 2025-12-05 09:57:57.510944894 +0000 UTC m=+0.081518978 container create 80bbed3579853d30e5b94a7ebf93f85f2a4d548104808222529d9c4f10a75d07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_hoover, release=1763362218, vendor=Red Hat, Inc., architecture=x86_64, version=7, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 09:57:57 np0005546420.localdomain systemd[1]: Started libpod-conmon-80bbed3579853d30e5b94a7ebf93f85f2a4d548104808222529d9c4f10a75d07.scope.
Dec 05 09:57:57 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:57 np0005546420.localdomain python3[296688]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.104 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 09:57:57 np0005546420.localdomain podman[296689]: 2025-12-05 09:57:57.476904497 +0000 UTC m=+0.047478601 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:57 np0005546420.localdomain podman[296689]: 2025-12-05 09:57:57.590685856 +0000 UTC m=+0.161259930 container init 80bbed3579853d30e5b94a7ebf93f85f2a4d548104808222529d9c4f10a75d07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_hoover, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, name=rhceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 09:57:57 np0005546420.localdomain podman[296689]: 2025-12-05 09:57:57.606013407 +0000 UTC m=+0.176587451 container start 80bbed3579853d30e5b94a7ebf93f85f2a4d548104808222529d9c4f10a75d07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_hoover, version=7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z)
Dec 05 09:57:57 np0005546420.localdomain podman[296689]: 2025-12-05 09:57:57.606205153 +0000 UTC m=+0.176779237 container attach 80bbed3579853d30e5b94a7ebf93f85f2a4d548104808222529d9c4f10a75d07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_hoover, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Dec 05 09:57:57 np0005546420.localdomain friendly_hoover[296704]: 167 167
Dec 05 09:57:57 np0005546420.localdomain systemd[1]: libpod-80bbed3579853d30e5b94a7ebf93f85f2a4d548104808222529d9c4f10a75d07.scope: Deactivated successfully.
Dec 05 09:57:57 np0005546420.localdomain podman[296689]: 2025-12-05 09:57:57.611225087 +0000 UTC m=+0.181799191 container died 80bbed3579853d30e5b94a7ebf93f85f2a4d548104808222529d9c4f10a75d07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_hoover, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1763362218, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Dec 05 09:57:57 np0005546420.localdomain podman[296710]: 2025-12-05 09:57:57.717700192 +0000 UTC m=+0.092650070 container remove 80bbed3579853d30e5b94a7ebf93f85f2a4d548104808222529d9c4f10a75d07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_hoover, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:57:57 np0005546420.localdomain systemd[1]: libpod-conmon-80bbed3579853d30e5b94a7ebf93f85f2a4d548104808222529d9c4f10a75d07.scope: Deactivated successfully.
Dec 05 09:57:57 np0005546420.localdomain sudo[296544]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:57 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:57 np0005546420.localdomain systemd[1]: tmp-crun.LeBco3.mount: Deactivated successfully.
Dec 05 09:57:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-58101714e6ece8a0965f80edf70ece3ec7022a2d63569ca99b68e22b89022513-merged.mount: Deactivated successfully.
Dec 05 09:57:58 np0005546420.localdomain sudo[296734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:58 np0005546420.localdomain sudo[296734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:58 np0005546420.localdomain sudo[296734]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:57:58 np0005546420.localdomain sudo[296752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:58 np0005546420.localdomain sudo[296752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.4 (monmap changed)...
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:57:58 np0005546420.localdomain podman[296786]: 
Dec 05 09:57:58 np0005546420.localdomain podman[296786]: 2025-12-05 09:57:58.547739127 +0000 UTC m=+0.075916537 container create f9e90f507557c683209a28d3827900265271f73e2147c54e593ada1fc365568e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_black, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=)
Dec 05 09:57:58 np0005546420.localdomain systemd[1]: Started libpod-conmon-f9e90f507557c683209a28d3827900265271f73e2147c54e593ada1fc365568e.scope.
Dec 05 09:57:58 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:58 np0005546420.localdomain podman[296786]: 2025-12-05 09:57:58.517813536 +0000 UTC m=+0.045990966 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:58 np0005546420.localdomain podman[296786]: 2025-12-05 09:57:58.63174256 +0000 UTC m=+0.159919970 container init f9e90f507557c683209a28d3827900265271f73e2147c54e593ada1fc365568e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_black, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, ceph=True)
Dec 05 09:57:58 np0005546420.localdomain podman[296786]: 2025-12-05 09:57:58.642420358 +0000 UTC m=+0.170597758 container start f9e90f507557c683209a28d3827900265271f73e2147c54e593ada1fc365568e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_black, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=1763362218, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container)
Dec 05 09:57:58 np0005546420.localdomain podman[296786]: 2025-12-05 09:57:58.642687546 +0000 UTC m=+0.170864986 container attach f9e90f507557c683209a28d3827900265271f73e2147c54e593ada1fc365568e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_black, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, release=1763362218, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=)
Dec 05 09:57:58 np0005546420.localdomain sad_black[296801]: 167 167
Dec 05 09:57:58 np0005546420.localdomain systemd[1]: libpod-f9e90f507557c683209a28d3827900265271f73e2147c54e593ada1fc365568e.scope: Deactivated successfully.
Dec 05 09:57:58 np0005546420.localdomain podman[296786]: 2025-12-05 09:57:58.647008569 +0000 UTC m=+0.175186009 container died f9e90f507557c683209a28d3827900265271f73e2147c54e593ada1fc365568e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_black, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, name=rhceph)
Dec 05 09:57:58 np0005546420.localdomain podman[296806]: 2025-12-05 09:57:58.750149531 +0000 UTC m=+0.087152831 container remove f9e90f507557c683209a28d3827900265271f73e2147c54e593ada1fc365568e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_black, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, distribution-scope=public, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218)
Dec 05 09:57:58 np0005546420.localdomain systemd[1]: libpod-conmon-f9e90f507557c683209a28d3827900265271f73e2147c54e593ada1fc365568e.scope: Deactivated successfully.
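[annotation] The create/init/start/attach/died/remove burst for the randomly named `sad_black` container fits inside roughly 0.2 s: cephadm runs short-lived utility containers against the image, and the single line of output (`167 167`) looks like a uid/gid probe, 167:167 being the ceph user and group in RHCS images. A hypothetical probe with the same shape, as a sketch only:

    import subprocess

    uid_gid = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout.strip()
    print(uid_gid)  # expected "167 167" for this image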
Dec 05 09:57:58 np0005546420.localdomain sudo[296752]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:58 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:58 np0005546420.localdomain sudo[296822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:58 np0005546420.localdomain sudo[296822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-215cb5580f0fbb88fdefd86b209518af1fab2040a6c8a7acb3bb08683bf391fc-merged.mount: Deactivated successfully.
Dec 05 09:57:58 np0005546420.localdomain sudo[296822]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:59 np0005546420.localdomain sudo[296840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:59 np0005546420.localdomain sudo[296840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:57:59 np0005546420.localdomain podman[296876]: 2025-12-05 09:57:59.524052789 +0000 UTC m=+0.076645768 container create b779f89def612ffdf52910ce8df54653cad09cf22b769b76f412d8b40d86e77f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_chatterjee, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:57:59 np0005546420.localdomain systemd[1]: Started libpod-conmon-b779f89def612ffdf52910ce8df54653cad09cf22b769b76f412d8b40d86e77f.scope.
Dec 05 09:57:59 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:57:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:57:59 np0005546420.localdomain podman[296876]: 2025-12-05 09:57:59.493686585 +0000 UTC m=+0.046279664 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:57:59 np0005546420.localdomain podman[296876]: 2025-12-05 09:57:59.594859946 +0000 UTC m=+0.147452925 container init b779f89def612ffdf52910ce8df54653cad09cf22b769b76f412d8b40d86e77f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_chatterjee, version=7, vcs-type=git, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, ceph=True, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:57:59 np0005546420.localdomain podman[296876]: 2025-12-05 09:57:59.604865374 +0000 UTC m=+0.157458363 container start b779f89def612ffdf52910ce8df54653cad09cf22b769b76f412d8b40d86e77f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_chatterjee, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:57:59 np0005546420.localdomain podman[296876]: 2025-12-05 09:57:59.605253356 +0000 UTC m=+0.157846405 container attach b779f89def612ffdf52910ce8df54653cad09cf22b769b76f412d8b40d86e77f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_chatterjee, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1763362218, ceph=True, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public)
Dec 05 09:57:59 np0005546420.localdomain loving_chatterjee[296890]: 167 167
Dec 05 09:57:59 np0005546420.localdomain systemd[1]: libpod-b779f89def612ffdf52910ce8df54653cad09cf22b769b76f412d8b40d86e77f.scope: Deactivated successfully.
Dec 05 09:57:59 np0005546420.localdomain podman[296876]: 2025-12-05 09:57:59.608917948 +0000 UTC m=+0.161510927 container died b779f89def612ffdf52910ce8df54653cad09cf22b769b76f412d8b40d86e77f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_chatterjee, ceph=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, release=1763362218, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:57:59 np0005546420.localdomain sudo[296682]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:59 np0005546420.localdomain podman[296893]: 2025-12-05 09:57:59.696789011 +0000 UTC m=+0.100599385 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec 05 09:57:59 np0005546420.localdomain podman[296901]: 2025-12-05 09:57:59.716447206 +0000 UTC m=+0.094147247 container remove b779f89def612ffdf52910ce8df54653cad09cf22b769b76f412d8b40d86e77f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_chatterjee, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 09:57:59 np0005546420.localdomain systemd[1]: libpod-conmon-b779f89def612ffdf52910ce8df54653cad09cf22b769b76f412d8b40d86e77f.scope: Deactivated successfully.
Dec 05 09:57:59 np0005546420.localdomain sudo[296840]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:59 np0005546420.localdomain podman[296893]: 2025-12-05 09:57:59.781518326 +0000 UTC m=+0.185328680 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:57:59 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
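[annotation] On systemd hosts, podman drives container health checks through transient units, which is why systemd logs `Started /usr/bin/podman healthcheck run 94fe…` just before the `health_status=healthy` event for ceilometer_agent_compute (its check command is the `/openstack/healthcheck compute` entry in config_data), followed by exec_died and the unit deactivating. Running the same check by hand, as a sketch:

    import subprocess

    cid = "94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110"
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else "unhealthy")  # exit status 0 means the check passed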
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:57:59 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:57:59 np0005546420.localdomain sudo[296946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:57:59 np0005546420.localdomain sudo[296946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:57:59 np0005546420.localdomain sudo[296946]: pam_unix(sudo:session): session closed for user root
Dec 05 09:57:59 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b8b0ec9bcf31d1ec1044b9daa67abaa3c1a09c58e33d7d40990601fa3810ce2f-merged.mount: Deactivated successfully.
Dec 05 09:57:59 np0005546420.localdomain sudo[296964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:57:59 np0005546420.localdomain sudo[296964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:00 np0005546420.localdomain podman[296999]: 2025-12-05 09:58:00.456362869 +0000 UTC m=+0.077563997 container create 3f03809c3ea7889e9cf1e9c05661bdf058a8ba833a105ebf137bfdb1e015e9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_shtern, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, release=1763362218, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=)
Dec 05 09:58:00 np0005546420.localdomain systemd[1]: Started libpod-conmon-3f03809c3ea7889e9cf1e9c05661bdf058a8ba833a105ebf137bfdb1e015e9cf.scope.
Dec 05 09:58:00 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:58:00 np0005546420.localdomain podman[296999]: 2025-12-05 09:58:00.424127517 +0000 UTC m=+0.045328685 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:00 np0005546420.localdomain podman[296999]: 2025-12-05 09:58:00.533714807 +0000 UTC m=+0.154915935 container init 3f03809c3ea7889e9cf1e9c05661bdf058a8ba833a105ebf137bfdb1e015e9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_shtern, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, vendor=Red Hat, Inc.)
Dec 05 09:58:00 np0005546420.localdomain podman[296999]: 2025-12-05 09:58:00.543911021 +0000 UTC m=+0.165112139 container start 3f03809c3ea7889e9cf1e9c05661bdf058a8ba833a105ebf137bfdb1e015e9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_shtern, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, release=1763362218, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph)
Dec 05 09:58:00 np0005546420.localdomain podman[296999]: 2025-12-05 09:58:00.54423278 +0000 UTC m=+0.165433898 container attach 3f03809c3ea7889e9cf1e9c05661bdf058a8ba833a105ebf137bfdb1e015e9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_shtern, release=1763362218, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7)
Dec 05 09:58:00 np0005546420.localdomain compassionate_shtern[297014]: 167 167
Dec 05 09:58:00 np0005546420.localdomain systemd[1]: libpod-3f03809c3ea7889e9cf1e9c05661bdf058a8ba833a105ebf137bfdb1e015e9cf.scope: Deactivated successfully.
Dec 05 09:58:00 np0005546420.localdomain podman[296999]: 2025-12-05 09:58:00.548056619 +0000 UTC m=+0.169257767 container died 3f03809c3ea7889e9cf1e9c05661bdf058a8ba833a105ebf137bfdb1e015e9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_shtern, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1763362218, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, ceph=True)
Dec 05 09:58:00 np0005546420.localdomain podman[297019]: 2025-12-05 09:58:00.650612382 +0000 UTC m=+0.088359338 container remove 3f03809c3ea7889e9cf1e9c05661bdf058a8ba833a105ebf137bfdb1e015e9cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_shtern, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64)
Dec 05 09:58:00 np0005546420.localdomain systemd[1]: libpod-conmon-3f03809c3ea7889e9cf1e9c05661bdf058a8ba833a105ebf137bfdb1e015e9cf.scope: Deactivated successfully.
Dec 05 09:58:00 np0005546420.localdomain sudo[296964]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-db83271693dad1c1432325682ae3cc1da11d8ae76b2dd3c35162889fad9cac87-merged.mount: Deactivated successfully.
Dec 05 09:58:00 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: Reconfiguring mon.np0005546420 (monmap changed)...
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: from='client.44295 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: Saving service mon spec with placement label:mon
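[annotation] The `orch apply` from client.admin and the follow-up "Saving service mon spec with placement label:mon" record the mon service spec being (re)applied with placement by host label rather than by an explicit host list. The equivalent CLI, wrapped for consistency with the other sketches:

    import subprocess

    # Same effect as the audited "orch apply": schedule mons on every host labelled "mon".
    subprocess.run(["ceph", "orch", "apply", "mon", "--placement=label:mon"], check=True)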
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:58:01 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: Reconfiguring osd.2 (monmap changed)...
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:58:02 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:58:03 np0005546420.localdomain sudo[297036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:58:03 np0005546420.localdomain sudo[297036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:03 np0005546420.localdomain sudo[297036]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: from='client.44299 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546420", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.32:0/4101780767' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: from='client.? 172.18.0.32:0/4101780767' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "quorum_status"} : dispatch
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e10 handle_command mon_command({"prefix": "mon rm", "name": "np0005546420"} v 0)
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon rm", "name": "np0005546420"} : dispatch
Dec 05 09:58:03 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adac4fb080 mon_map magic: 0 from mon.2 v2:172.18.0.107:3300/0
Dec 05 09:58:03 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adb5c4c000 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Dec 05 09:58:03 np0005546420.localdomain ceph-mon[288331]: mon.np0005546420@2(peon) e11  removed from monmap, suicide.
Dec 05 09:58:03 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x55adb5c4c160 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
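[annotation] `mon rm np0005546420` is the pivotal event in this window: once the surviving mons commit monmap e11, the local peon sees it is no longer a member and exits voluntarily ("removed from monmap, suicide."), while the co-located mgr merely logs the new maps as unhandled. A sketch of the sequence the mgr drove; the survivability guard is an added assumption, since the log only shows quorum_status being checked first:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "quorum_status"],
        capture_output=True, text=True, check=True).stdout)
    if len(status["quorum"]) >= 3:  # assumption: only shrink when quorum survives the removal
        subprocess.run(["ceph", "mon", "rm", "np0005546420"], check=True)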
Dec 05 09:58:03 np0005546420.localdomain sudo[297054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:03 np0005546420.localdomain sudo[297054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:03 np0005546420.localdomain sudo[297054]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:03 np0005546420.localdomain podman[297070]: 2025-12-05 09:58:03.972609326 +0000 UTC m=+0.064222556 container died 645b8ccfa2de70142d96d855afc4f4edd4e701bee4eab236aac4acc6a90f6630 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546420, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, RELEASE=main)
Dec 05 09:58:03 np0005546420.localdomain sudo[297077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:58:03 np0005546420.localdomain sudo[297077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:03 np0005546420.localdomain sudo[297077]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-21466fbcebca3012089ae16af882543bc406498c642f64ed0cca31d52e5cb588-merged.mount: Deactivated successfully.
Dec 05 09:58:04 np0005546420.localdomain podman[297070]: 2025-12-05 09:58:04.018288381 +0000 UTC m=+0.109901531 container remove 645b8ccfa2de70142d96d855afc4f4edd4e701bee4eab236aac4acc6a90f6630 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546420, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Dec 05 09:58:04 np0005546420.localdomain sudo[297099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 rm-daemon --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b --name mon.np0005546420 --force
Dec 05 09:58:04 np0005546420.localdomain sudo[297099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:58:04 np0005546420.localdomain sudo[297118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297118]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:58:04.117 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:58:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:58:04.119 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:58:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:58:04.120 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
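[annotation] The acquired/held/released trio from ovn_metadata_agent is oslo.concurrency's standard instrumentation around a named lock; any function wrapped with its decorator emits exactly these three DEBUG lines per call. The pattern, as a sketch rather than neutron's actual source:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # body elided; the decorator logs lock acquisition, hold time, and release
        pass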
Dec 05 09:58:04 np0005546420.localdomain sudo[297154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:58:04 np0005546420.localdomain sudo[297154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297154]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain sudo[297173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:04 np0005546420.localdomain sudo[297173]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297173]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain sudo[297191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:58:04 np0005546420.localdomain sudo[297191]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297191]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain sudo[297263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:58:04 np0005546420.localdomain sudo[297263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297263]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain sudo[297284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:58:04 np0005546420.localdomain sudo[297284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297284]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain sudo[297327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:58:04 np0005546420.localdomain sudo[297327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297327]: pam_unix(sudo:session): session closed for user root
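
The sudo sequence from 09:58:04 (mkdir -p under /tmp/cephadm-&lt;fsid&gt;, touch ceph.conf.new, chown and chmod to the final owner and mode, then mv into /etc/ceph/ceph.conf) is a stage-then-rename config install: the file only appears at its real path once it is complete and correctly owned, so nothing ever reads a half-written ceph.conf. A minimal sketch of the same pattern in Python; the helper name and paths are illustrative, not cephadm's actual code:

    import os
    import tempfile

    def install_config(content: str, dest: str,
                       uid: int = 0, gid: int = 0, mode: int = 0o644) -> None:
        dest_dir = os.path.dirname(dest)
        os.makedirs(dest_dir, exist_ok=True)        # /bin/mkdir -p
        fd, tmp = tempfile.mkstemp(dir=dest_dir, suffix=".new")
        try:
            with os.fdopen(fd, "w") as f:
                f.write(content)                    # fill the .new file
            os.chown(tmp, uid, gid)                 # /bin/chown 0:0 (needs root)
            os.chmod(tmp, mode)                     # /bin/chmod 644
            os.replace(tmp, dest)                   # /bin/mv into place
        except BaseException:
            os.unlink(tmp)
            raise

One caveat: os.replace is atomic only within a single filesystem. The mv in the log runs from /tmp, so when /tmp is a separate mount it degrades to copy-plus-unlink; staging next to the destination, as the sketch does, keeps the final step atomic.
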
Dec 05 09:58:04 np0005546420.localdomain sudo[297347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:58:04 np0005546420.localdomain sudo[297347]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297347]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain sudo[297365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:58:04 np0005546420.localdomain sudo[297365]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297365]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain sudo[297393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:58:04 np0005546420.localdomain sudo[297393]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297393]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:04.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:58:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:04.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:58:04 np0005546420.localdomain sudo[297415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:04 np0005546420.localdomain sudo[297415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:04 np0005546420.localdomain sudo[297415]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:04 np0005546420.localdomain systemd[1]: ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b@mon.np0005546420.service: Deactivated successfully.
Dec 05 09:58:04 np0005546420.localdomain systemd[1]: Stopped Ceph mon.np0005546420 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 05 09:58:04 np0005546420.localdomain systemd[1]: ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b@mon.np0005546420.service: Consumed 9.576s CPU time.
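
The unit that just stopped follows cephadm's templated naming, ceph-&lt;fsid&gt;@&lt;daemon&gt;.service, so the rm-daemon call at 09:58:04 maps to exactly one systemd instance. A small sketch of building that name and querying its state, assuming systemctl is available on the host:

    import subprocess

    def ceph_unit(fsid: str, daemon: str) -> str:
        # e.g. ceph-79feddb1-...@mon.np0005546420.service, as in the journal
        return f"ceph-{fsid}@{daemon}.service"

    def unit_state(unit: str) -> str:
        # `systemctl is-active` prints active/inactive/failed and exits
        # nonzero for anything but active, so don't pass check=True here.
        out = subprocess.run(["systemctl", "is-active", unit],
                             capture_output=True, text=True)
        return out.stdout.strip()

    print(unit_state(ceph_unit("79feddb1-4bfc-557f-83b9-0d57c9f66c1b",
                               "mon.np0005546420")))
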
Dec 05 09:58:05 np0005546420.localdomain sudo[297433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:58:05 np0005546420.localdomain sudo[297433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:05 np0005546420.localdomain sudo[297433]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:05 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:58:05 np0005546420.localdomain systemd-rc-local-generator[297509]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:58:05 np0005546420.localdomain systemd-sysv-generator[297514]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:58:05 np0005546420.localdomain sudo[297472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:58:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:58:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:05 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:05 np0005546420.localdomain sudo[297472]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:05 np0005546420.localdomain sudo[297472]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:05 np0005546420.localdomain sudo[297099]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:05 np0005546420.localdomain sudo[297525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:58:05 np0005546420.localdomain sudo[297525]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:05 np0005546420.localdomain sudo[297525]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:05 np0005546420.localdomain sudo[297543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:05 np0005546420.localdomain sudo[297543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:05 np0005546420.localdomain sudo[297543]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:05 np0005546420.localdomain sudo[297561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:58:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:05.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:58:05 np0005546420.localdomain sudo[297561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:05 np0005546420.localdomain sudo[297561]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:06.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:58:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:06.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:58:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:06.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:58:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:06.886 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:58:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:58:07 np0005546420.localdomain podman[297579]: 2025-12-05 09:58:07.523677176 +0000 UTC m=+0.095552370 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 09:58:07 np0005546420.localdomain podman[297579]: 2025-12-05 09:58:07.52968629 +0000 UTC m=+0.101561524 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:58:07 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
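
The Started/Deactivated pair around the podman lines is how these healthchecks run: a systemd timer spawns a transient service that executes `podman healthcheck run <id>`, which in turn runs the container's configured test (here the mounted /openstack/healthcheck script) and reports the result via its exit code, 0 for healthy and nonzero otherwise. A sketch of consuming that contract, assuming podman is on PATH:

    import subprocess

    def container_healthy(container_id: str) -> bool:
        # Runs the healthcheck command defined for the container and maps
        # the exit status: 0 == healthy (the health_status=healthy event above).
        result = subprocess.run(["podman", "healthcheck", "run", container_id])
        return result.returncode == 0
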
Dec 05 09:58:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:07.880 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:58:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:08.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:58:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:08.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:58:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:08.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:58:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:08.896 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:58:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:08.896 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:58:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:08.896 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:58:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:08.897 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:58:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:08.897 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:58:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:09.378 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:58:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:09.599 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:58:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:09.601 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12385MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:58:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:09.601 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:58:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:09.602 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:58:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:09.708 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:58:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:09.708 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:58:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:09.729 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:58:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:10.209 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:58:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:10.218 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:58:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:10.243 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:58:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:10.245 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:58:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:10.246 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
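
The audit that just released "compute_resources" sized its disk inventory by shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (twice, at 09:58:08.897 and 09:58:09.729) and folding the cluster counters into the free_disk figure reported above. A minimal sketch of reading the same counters; the "stats" field names are assumptions based on recent Ceph releases and worth verifying with `ceph df -f json` on the cluster at hand:

    import json
    import subprocess

    def ceph_capacity_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
        raw = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        stats = json.loads(raw)["stats"]            # cluster-wide totals
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

    total, avail = ceph_capacity_gib()
    print(f"total={total:.1f} GiB avail={avail:.1f} GiB")
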
Dec 05 09:58:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:58:10 np0005546420.localdomain podman[297641]: 2025-12-05 09:58:10.497521074 +0000 UTC m=+0.077027019 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:58:10 np0005546420.localdomain podman[297641]: 2025-12-05 09:58:10.510876635 +0000 UTC m=+0.090382570 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:58:10 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:58:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:12.246 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:58:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:58:12.246 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.951 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.952 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 09:58:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 09:58:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
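
The run of "Skip pollster ..., no resources found this cycle" lines is the ceilometer compute agent walking its whole pollster list and bailing out of each one because discovery returned nothing, which is consistent with nova reporting used_vcpus=0 above: there are no instances on this host to meter. The shape of that loop, as a runnable toy rather than the actual ceilometer source:

    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("ceilometer.polling.manager")

    pollsters = ["cpu", "memory.usage", "network.incoming.bytes"]

    def discover_resources(name):
        return []   # an empty hypervisor: discovery finds no instances

    for name in pollsters:
        resources = discover_resources(name)
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            continue
        # with instances present, get_samples()/publish() would run here
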
Dec 05 09:58:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:58:13 np0005546420.localdomain systemd[1]: tmp-crun.RmvtMV.mount: Deactivated successfully.
Dec 05 09:58:13 np0005546420.localdomain podman[297665]: 2025-12-05 09:58:13.497066933 +0000 UTC m=+0.074273896 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 09:58:13 np0005546420.localdomain podman[297665]: 2025-12-05 09:58:13.513488188 +0000 UTC m=+0.090695211 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 09:58:13 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:58:14 np0005546420.localdomain sudo[297685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:14 np0005546420.localdomain sudo[297685]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:14 np0005546420.localdomain sudo[297685]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:14 np0005546420.localdomain sudo[297703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:14 np0005546420.localdomain sudo[297703]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:14 np0005546420.localdomain podman[297738]: 2025-12-05 09:58:14.634269793 +0000 UTC m=+0.080715703 container create 7bfb6aadb0b8336400752ea9a154b6361adfcc7d38522fa9a25f6207f5a23fba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, RELEASE=main, version=7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:58:14 np0005546420.localdomain systemd[1]: Started libpod-conmon-7bfb6aadb0b8336400752ea9a154b6361adfcc7d38522fa9a25f6207f5a23fba.scope.
Dec 05 09:58:14 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:58:14 np0005546420.localdomain podman[297738]: 2025-12-05 09:58:14.599630499 +0000 UTC m=+0.046076439 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:14 np0005546420.localdomain podman[297738]: 2025-12-05 09:58:14.712405576 +0000 UTC m=+0.158851496 container init 7bfb6aadb0b8336400752ea9a154b6361adfcc7d38522fa9a25f6207f5a23fba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, GIT_CLEAN=True)
Dec 05 09:58:14 np0005546420.localdomain podman[297738]: 2025-12-05 09:58:14.724679813 +0000 UTC m=+0.171125723 container start 7bfb6aadb0b8336400752ea9a154b6361adfcc7d38522fa9a25f6207f5a23fba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, architecture=x86_64, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:58:14 np0005546420.localdomain podman[297738]: 2025-12-05 09:58:14.724986493 +0000 UTC m=+0.171432463 container attach 7bfb6aadb0b8336400752ea9a154b6361adfcc7d38522fa9a25f6207f5a23fba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:58:14 np0005546420.localdomain vigorous_faraday[297752]: 167 167
Dec 05 09:58:14 np0005546420.localdomain systemd[1]: libpod-7bfb6aadb0b8336400752ea9a154b6361adfcc7d38522fa9a25f6207f5a23fba.scope: Deactivated successfully.
Dec 05 09:58:14 np0005546420.localdomain podman[297738]: 2025-12-05 09:58:14.730409959 +0000 UTC m=+0.176855899 container died 7bfb6aadb0b8336400752ea9a154b6361adfcc7d38522fa9a25f6207f5a23fba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, ceph=True, vendor=Red Hat, Inc.)
Dec 05 09:58:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7c56639276e730ca927cc5dfc3f1748926b38ec4abba49a5b3d1ad90c8ee9071-merged.mount: Deactivated successfully.
Dec 05 09:58:14 np0005546420.localdomain podman[297757]: 2025-12-05 09:58:14.847190501 +0000 UTC m=+0.101808601 container remove 7bfb6aadb0b8336400752ea9a154b6361adfcc7d38522fa9a25f6207f5a23fba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, version=7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=1763362218, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:58:14 np0005546420.localdomain systemd[1]: libpod-conmon-7bfb6aadb0b8336400752ea9a154b6361adfcc7d38522fa9a25f6207f5a23fba.scope: Deactivated successfully.
Dec 05 09:58:14 np0005546420.localdomain sudo[297703]: pam_unix(sudo:session): session closed for user root
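
The throwaway rhceph container above (create, start, print "167 167", die and be removed within a fraction of a second) looks like cephadm's uid/gid probe: 167:167 is the ceph user and group inside rhceph images, and cephadm needs those IDs to chown daemon directories on the host. A sketch of such a probe; the stat'ed path and exact flags are inferred from the "167 167" output, not taken from cephadm's source:

    import subprocess

    def extract_uid_gid(image: str, probe_path: str = "/var/lib/ceph"):
        # Run the image just long enough to stat a path owned by the ceph
        # user; stdout comes back as "<uid> <gid>", e.g. "167 167".
        out = subprocess.check_output(
            ["podman", "run", "--rm", "--entrypoint", "stat", image,
             "-c", "%u %g", probe_path], text=True)
        uid, gid = (int(x) for x in out.split())
        return uid, gid

    print(extract_uid_gid("registry.redhat.io/rhceph/rhceph-7-rhel9:latest"))
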
Dec 05 09:58:15 np0005546420.localdomain sudo[297775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:15 np0005546420.localdomain sudo[297775]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:15 np0005546420.localdomain sudo[297775]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:15 np0005546420.localdomain sudo[297793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:15 np0005546420.localdomain sudo[297793]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:15 np0005546420.localdomain podman[297827]: 2025-12-05 09:58:15.614187497 +0000 UTC m=+0.080134556 container create c4ece0b1e49e6a93d90b2ce4b79b0e5b722a931d3bb2e4b1831a0637157ed794 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bardeen, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:58:15 np0005546420.localdomain systemd[1]: Started libpod-conmon-c4ece0b1e49e6a93d90b2ce4b79b0e5b722a931d3bb2e4b1831a0637157ed794.scope.
Dec 05 09:58:15 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:58:15 np0005546420.localdomain podman[297827]: 2025-12-05 09:58:15.579802219 +0000 UTC m=+0.045749308 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:15 np0005546420.localdomain podman[297827]: 2025-12-05 09:58:15.684496779 +0000 UTC m=+0.150443838 container init c4ece0b1e49e6a93d90b2ce4b79b0e5b722a931d3bb2e4b1831a0637157ed794 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bardeen, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:58:15 np0005546420.localdomain systemd[1]: tmp-crun.glxIAP.mount: Deactivated successfully.
Dec 05 09:58:15 np0005546420.localdomain podman[297827]: 2025-12-05 09:58:15.697088146 +0000 UTC m=+0.163035165 container start c4ece0b1e49e6a93d90b2ce4b79b0e5b722a931d3bb2e4b1831a0637157ed794 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bardeen, name=rhceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:58:15 np0005546420.localdomain podman[297827]: 2025-12-05 09:58:15.697434316 +0000 UTC m=+0.163381385 container attach c4ece0b1e49e6a93d90b2ce4b79b0e5b722a931d3bb2e4b1831a0637157ed794 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bardeen, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:58:15 np0005546420.localdomain bold_bardeen[297843]: 167 167
Dec 05 09:58:15 np0005546420.localdomain systemd[1]: libpod-c4ece0b1e49e6a93d90b2ce4b79b0e5b722a931d3bb2e4b1831a0637157ed794.scope: Deactivated successfully.
Dec 05 09:58:15 np0005546420.localdomain podman[297827]: 2025-12-05 09:58:15.700675777 +0000 UTC m=+0.166622856 container died c4ece0b1e49e6a93d90b2ce4b79b0e5b722a931d3bb2e4b1831a0637157ed794 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bardeen, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218)
Dec 05 09:58:15 np0005546420.localdomain podman[297848]: 2025-12-05 09:58:15.802093045 +0000 UTC m=+0.093048762 container remove c4ece0b1e49e6a93d90b2ce4b79b0e5b722a931d3bb2e4b1831a0637157ed794 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_bardeen, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, vcs-type=git, version=7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Dec 05 09:58:15 np0005546420.localdomain systemd[1]: libpod-conmon-c4ece0b1e49e6a93d90b2ce4b79b0e5b722a931d3bb2e4b1831a0637157ed794.scope: Deactivated successfully.
Dec 05 09:58:15 np0005546420.localdomain sudo[297793]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:16 np0005546420.localdomain sudo[297871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:16 np0005546420.localdomain sudo[297871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:16 np0005546420.localdomain sudo[297871]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:16 np0005546420.localdomain sudo[297889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:16 np0005546420.localdomain sudo[297889]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:16 np0005546420.localdomain podman[297924]: 
Dec 05 09:58:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-428be2c4864e297479266167bbf09356b962c2664b49a93e4915daeeebfa1c73-merged.mount: Deactivated successfully.
Dec 05 09:58:16 np0005546420.localdomain podman[297924]: 2025-12-05 09:58:16.641316462 +0000 UTC m=+0.067145186 container create af85e72757aa2a453fea9f8d44415fb32c7b218791176806e7e341904bc2a876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_davinci, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-type=git, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph)
Dec 05 09:58:16 np0005546420.localdomain systemd[1]: Started libpod-conmon-af85e72757aa2a453fea9f8d44415fb32c7b218791176806e7e341904bc2a876.scope.
Dec 05 09:58:16 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:58:16 np0005546420.localdomain podman[297924]: 2025-12-05 09:58:16.704146454 +0000 UTC m=+0.129975168 container init af85e72757aa2a453fea9f8d44415fb32c7b218791176806e7e341904bc2a876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_davinci, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=1763362218, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:58:16 np0005546420.localdomain podman[297924]: 2025-12-05 09:58:16.611872237 +0000 UTC m=+0.037700991 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:16 np0005546420.localdomain podman[297924]: 2025-12-05 09:58:16.714258815 +0000 UTC m=+0.140087589 container start af85e72757aa2a453fea9f8d44415fb32c7b218791176806e7e341904bc2a876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_davinci, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public)
Dec 05 09:58:16 np0005546420.localdomain podman[297924]: 2025-12-05 09:58:16.714527683 +0000 UTC m=+0.140356407 container attach af85e72757aa2a453fea9f8d44415fb32c7b218791176806e7e341904bc2a876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_davinci, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, RELEASE=main, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2025-11-26T19:44:28Z, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:58:16 np0005546420.localdomain interesting_davinci[297939]: 167 167
Dec 05 09:58:16 np0005546420.localdomain systemd[1]: libpod-af85e72757aa2a453fea9f8d44415fb32c7b218791176806e7e341904bc2a876.scope: Deactivated successfully.
Dec 05 09:58:16 np0005546420.localdomain podman[297924]: 2025-12-05 09:58:16.717443953 +0000 UTC m=+0.143272747 container died af85e72757aa2a453fea9f8d44415fb32c7b218791176806e7e341904bc2a876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_davinci, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1763362218, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, name=rhceph, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph)
Dec 05 09:58:16 np0005546420.localdomain podman[297944]: 2025-12-05 09:58:16.826680382 +0000 UTC m=+0.095117475 container remove af85e72757aa2a453fea9f8d44415fb32c7b218791176806e7e341904bc2a876 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_davinci, GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, distribution-scope=public, version=7, release=1763362218)
Dec 05 09:58:16 np0005546420.localdomain systemd[1]: libpod-conmon-af85e72757aa2a453fea9f8d44415fb32c7b218791176806e7e341904bc2a876.scope: Deactivated successfully.
Dec 05 09:58:17 np0005546420.localdomain sudo[297889]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:17 np0005546420.localdomain sudo[297967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:17 np0005546420.localdomain sudo[297967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:17 np0005546420.localdomain sudo[297967]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:17 np0005546420.localdomain sudo[297986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:17 np0005546420.localdomain sudo[297985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:17 np0005546420.localdomain sudo[297985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:17 np0005546420.localdomain sudo[297986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:17 np0005546420.localdomain sudo[297985]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:58:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:58:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:58:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150826 "" "Go-http-client/1.1"
Dec 05 09:58:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:58:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17731 "" "Go-http-client/1.1"
Dec 05 09:58:17 np0005546420.localdomain sudo[298021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:17 np0005546420.localdomain sudo[298021]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-33a7697bd82d2b20fef7559cbdfc4e3efcd085eba394ee59d1aa55336ead24f4-merged.mount: Deactivated successfully.
Dec 05 09:58:17 np0005546420.localdomain podman[298092]: 
Dec 05 09:58:17 np0005546420.localdomain podman[298092]: 2025-12-05 09:58:17.871942225 +0000 UTC m=+0.105829385 container create 0609221bc0d1721b605de09f82410a0142b9e14276d16bd80daaa45875a390e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mclaren, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:58:17 np0005546420.localdomain podman[298092]: 2025-12-05 09:58:17.8139032 +0000 UTC m=+0.047790400 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:17 np0005546420.localdomain systemd[1]: Started libpod-conmon-0609221bc0d1721b605de09f82410a0142b9e14276d16bd80daaa45875a390e1.scope.
Dec 05 09:58:17 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:58:17 np0005546420.localdomain podman[298092]: 2025-12-05 09:58:17.949548581 +0000 UTC m=+0.183435741 container init 0609221bc0d1721b605de09f82410a0142b9e14276d16bd80daaa45875a390e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mclaren, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container)
Dec 05 09:58:17 np0005546420.localdomain podman[298092]: 2025-12-05 09:58:17.960172958 +0000 UTC m=+0.194060158 container start 0609221bc0d1721b605de09f82410a0142b9e14276d16bd80daaa45875a390e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mclaren, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, distribution-scope=public, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=)
Dec 05 09:58:17 np0005546420.localdomain podman[298092]: 2025-12-05 09:58:17.961269312 +0000 UTC m=+0.195156472 container attach 0609221bc0d1721b605de09f82410a0142b9e14276d16bd80daaa45875a390e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mclaren, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph)
Dec 05 09:58:17 np0005546420.localdomain gallant_mclaren[298108]: 167 167
Dec 05 09:58:17 np0005546420.localdomain systemd[1]: libpod-0609221bc0d1721b605de09f82410a0142b9e14276d16bd80daaa45875a390e1.scope: Deactivated successfully.
Dec 05 09:58:17 np0005546420.localdomain podman[298092]: 2025-12-05 09:58:17.96410638 +0000 UTC m=+0.197993570 container died 0609221bc0d1721b605de09f82410a0142b9e14276d16bd80daaa45875a390e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mclaren, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 09:58:18 np0005546420.localdomain podman[298113]: 2025-12-05 09:58:18.065563729 +0000 UTC m=+0.085180741 container remove 0609221bc0d1721b605de09f82410a0142b9e14276d16bd80daaa45875a390e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mclaren, version=7, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, name=rhceph, release=1763362218, build-date=2025-11-26T19:44:28Z, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: libpod-conmon-0609221bc0d1721b605de09f82410a0142b9e14276d16bd80daaa45875a390e1.scope: Deactivated successfully.
Dec 05 09:58:18 np0005546420.localdomain podman[298129]: 
Dec 05 09:58:18 np0005546420.localdomain podman[298129]: 2025-12-05 09:58:18.193108761 +0000 UTC m=+0.084159969 container create b0aead08acdffeef1a86165b6373e32157ccb369efcc3726e3bb24c8b822c44a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_volhard, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, version=7, ceph=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: Started libpod-conmon-b0aead08acdffeef1a86165b6373e32157ccb369efcc3726e3bb24c8b822c44a.scope.
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:58:18 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c29671fb7b39360cbe94b6cc3ab057df07638c5327d904b4c227003434c755b/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:18 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c29671fb7b39360cbe94b6cc3ab057df07638c5327d904b4c227003434c755b/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:18 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c29671fb7b39360cbe94b6cc3ab057df07638c5327d904b4c227003434c755b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:18 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c29671fb7b39360cbe94b6cc3ab057df07638c5327d904b4c227003434c755b/merged/var/lib/ceph/mon/ceph-np0005546420 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:18 np0005546420.localdomain podman[298129]: 2025-12-05 09:58:18.161930152 +0000 UTC m=+0.052981370 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:18 np0005546420.localdomain podman[298129]: 2025-12-05 09:58:18.265780476 +0000 UTC m=+0.156831684 container init b0aead08acdffeef1a86165b6373e32157ccb369efcc3726e3bb24c8b822c44a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_volhard, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64)
Dec 05 09:58:18 np0005546420.localdomain podman[298129]: 2025-12-05 09:58:18.279611921 +0000 UTC m=+0.170663129 container start b0aead08acdffeef1a86165b6373e32157ccb369efcc3726e3bb24c8b822c44a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_volhard, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=1763362218, distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, com.redhat.component=rhceph-container)
Dec 05 09:58:18 np0005546420.localdomain podman[298129]: 2025-12-05 09:58:18.279999633 +0000 UTC m=+0.171050881 container attach b0aead08acdffeef1a86165b6373e32157ccb369efcc3726e3bb24c8b822c44a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_volhard, distribution-scope=public, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z)
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: libpod-b0aead08acdffeef1a86165b6373e32157ccb369efcc3726e3bb24c8b822c44a.scope: Deactivated successfully.
Dec 05 09:58:18 np0005546420.localdomain podman[298129]: 2025-12-05 09:58:18.388739927 +0000 UTC m=+0.279791135 container died b0aead08acdffeef1a86165b6373e32157ccb369efcc3726e3bb24c8b822c44a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_volhard, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:58:18 np0005546420.localdomain podman[298171]: 2025-12-05 09:58:18.498685708 +0000 UTC m=+0.093854637 container remove b0aead08acdffeef1a86165b6373e32157ccb369efcc3726e3bb24c8b822c44a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_volhard, name=rhceph, vendor=Red Hat, Inc., release=1763362218, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: libpod-conmon-b0aead08acdffeef1a86165b6373e32157ccb369efcc3726e3bb24c8b822c44a.scope: Deactivated successfully.
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:58:18 np0005546420.localdomain systemd-rc-local-generator[298209]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:58:18 np0005546420.localdomain systemd-sysv-generator[298213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:58:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:58:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:58:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:58:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:58:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:58:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:58:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:58:18 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 09:58:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:58:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:58:18 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 09:58:18 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7b103f21dd73a6c95bb1c13194f96b27b69deff3743443f53b5d5661a1649976-merged.mount: Deactivated successfully.
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: Reloading.
Dec 05 09:58:19 np0005546420.localdomain systemd-rc-local-generator[298252]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 09:58:19 np0005546420.localdomain systemd-sysv-generator[298256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: Starting Ceph mon.np0005546420 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b...
Dec 05 09:58:19 np0005546420.localdomain podman[298316]: 
Dec 05 09:58:19 np0005546420.localdomain podman[298316]: 2025-12-05 09:58:19.802789011 +0000 UTC m=+0.076395570 container create d3eb7a8501e27c5aaeb6ee75362b0ccfbf3d04a86b9a9914cfa42b85afab3811 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546420, release=1763362218, description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container)
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:58:19 np0005546420.localdomain podman[298316]: 2025-12-05 09:58:19.770705783 +0000 UTC m=+0.044312362 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:19 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b16e7352336aa5aeff72fedafbee3c18a6bcbc68303014d970bb0e2dbcfaa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:19 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b16e7352336aa5aeff72fedafbee3c18a6bcbc68303014d970bb0e2dbcfaa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:19 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b16e7352336aa5aeff72fedafbee3c18a6bcbc68303014d970bb0e2dbcfaa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:19 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6b16e7352336aa5aeff72fedafbee3c18a6bcbc68303014d970bb0e2dbcfaa6/merged/var/lib/ceph/mon/ceph-np0005546420 supports timestamps until 2038 (0x7fffffff)
Dec 05 09:58:19 np0005546420.localdomain podman[298316]: 2025-12-05 09:58:19.882014687 +0000 UTC m=+0.155621236 container init d3eb7a8501e27c5aaeb6ee75362b0ccfbf3d04a86b9a9914cfa42b85afab3811 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546420, ceph=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 09:58:19 np0005546420.localdomain podman[298316]: 2025-12-05 09:58:19.897084501 +0000 UTC m=+0.170691050 container start d3eb7a8501e27c5aaeb6ee75362b0ccfbf3d04a86b9a9914cfa42b85afab3811 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546420, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, vcs-type=git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64)
Dec 05 09:58:19 np0005546420.localdomain bash[298316]: d3eb7a8501e27c5aaeb6ee75362b0ccfbf3d04a86b9a9914cfa42b85afab3811
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: Started Ceph mon.np0005546420 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: pidfile_write: ignore empty --pid-file
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: load: jerasure load: lrc 
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: RocksDB version: 7.9.2
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Git sha 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: DB SUMMARY
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: DB Session ID:  BP9PLUSCNVOX5JUVXFD5
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: CURRENT file:  CURRENT
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005546420/store.db dir, Total Num: 0, files: 
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005546420/store.db: 000004.log size: 761 ; 
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                         Options.error_if_exists: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                       Options.create_if_missing: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                                     Options.env: 0x557fb61ca9e0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                                Options.info_log: 0x557fb868ed20
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                              Options.statistics: (nil)
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                               Options.use_fsync: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                              Options.db_log_dir: 
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                                 Options.wal_dir: 
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                    Options.write_buffer_manager: 0x557fb869f540
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                  Options.unordered_write: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                               Options.row_cache: None
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                              Options.wal_filter: None
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.two_write_queues: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.wal_compression: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.atomic_flush: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.max_background_jobs: 2
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.max_background_compactions: -1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.max_subcompactions: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.max_total_wal_size: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                          Options.max_open_files: -1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:       Options.compaction_readahead_size: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Compression algorithms supported:
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         kZSTD supported: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         kXpressCompression supported: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         kBZip2Compression supported: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         kLZ4Compression supported: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         kZlibCompression supported: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         kSnappyCompression supported: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005546420/store.db/MANIFEST-000005
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:           Options.merge_operator: 
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:        Options.compaction_filter: None
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557fb868e980)
                                                             cache_index_and_filter_blocks: 1
                                                             cache_index_and_filter_blocks_with_high_priority: 0
                                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                                             pin_top_level_index_and_filter: 1
                                                             index_type: 0
                                                             data_block_index_type: 0
                                                             index_shortening: 1
                                                             data_block_hash_table_util_ratio: 0.750000
                                                             checksum: 4
                                                             no_block_cache: 0
                                                             block_cache: 0x557fb868b350
                                                             block_cache_name: BinnedLRUCache
                                                             block_cache_options:
                                                               capacity : 536870912
                                                               num_shard_bits : 4
                                                               strict_capacity_limit : 0
                                                               high_pri_pool_ratio: 0.000
                                                             block_cache_compressed: (nil)
                                                             persistent_cache: (nil)
                                                             block_size: 4096
                                                             block_size_deviation: 10
                                                             block_restart_interval: 16
                                                             index_block_restart_interval: 1
                                                             metadata_block_size: 4096
                                                             partition_filters: 0
                                                             use_delta_encoding: 1
                                                             filter_policy: bloomfilter
                                                             whole_key_filtering: 1
                                                             verify_compression: 0
                                                             read_amp_bytes_per_bit: 0
                                                             format_version: 5
                                                             enable_index_compression: 1
                                                             block_align: 0
                                                             max_auto_readahead_size: 262144
                                                             prepopulate_block_cache: 0
                                                             initial_auto_readahead_size: 8192
                                                             num_file_reads_for_auto_readahead: 2
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:        Options.write_buffer_size: 33554432
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:  Options.max_write_buffer_number: 2
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:          Options.compression: NoCompression
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.num_levels: 7
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                   Options.table_properties_collectors: 
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                           Options.bloom_locality: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                               Options.ttl: 2592000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                       Options.enable_blob_files: false
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                           Options.min_blob_size: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005546420/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ff34b52a-187a-4f6e-ae40-2039f644a3dd
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928699950652, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: tmp-crun.9W202K.mount: Deactivated successfully.
Dec 05 09:58:19 np0005546420.localdomain podman[298331]: 2025-12-05 09:58:19.952023699 +0000 UTC m=+0.091747342 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:58:19 np0005546420.localdomain sudo[297986]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928699954882, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928699955219, "job": 1, "event": "recovery_finished"}
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 05 09:58:19 np0005546420.localdomain podman[298331]: 2025-12-05 09:58:19.961757749 +0000 UTC m=+0.101481472 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557fb86b2e00
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: DB pointer 0x557fb87a8000
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      1/0    1.84 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                            Sum      1/0    1.84 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Flush(GB): cumulative 0.000, interval 0.000
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x557fb868b350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0
                                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420 does not exist in monmap, will attempt to join an existing cluster
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: using public_addr v2:172.18.0.104:0/0 -> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: starting mon.np0005546420 rank -1 at public addrs [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] at bind addrs [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005546420 fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(???) e0 preinit fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:19 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing) e11 sync_obtain_latest_monmap
Dec 05 09:58:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing) e11 sync_obtain_latest_monmap obtained monmap e11
Dec 05 09:58:20 np0005546420.localdomain podman[298329]: 2025-12-05 09:58:20.051369375 +0000 UTC m=+0.194881804 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6)
Dec 05 09:58:20 np0005546420.localdomain podman[298329]: 2025-12-05 09:58:20.070750801 +0000 UTC m=+0.214263220 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Dec 05 09:58:20 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:58:20 np0005546420.localdomain podman[298410]: 
Dec 05 09:58:20 np0005546420.localdomain podman[298410]: 2025-12-05 09:58:20.125197235 +0000 UTC m=+0.098912913 container create fe31666e2b45e0a21b236bbe13167f58c5f388315821ffd61e6cf1afecb4fa1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_chaum, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_BRANCH=main, release=1763362218, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:58:20 np0005546420.localdomain systemd[1]: Started libpod-conmon-fe31666e2b45e0a21b236bbe13167f58c5f388315821ffd61e6cf1afecb4fa1e.scope.
Dec 05 09:58:20 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:58:20 np0005546420.localdomain podman[298410]: 2025-12-05 09:58:20.091412236 +0000 UTC m=+0.065127984 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:20 np0005546420.localdomain podman[298410]: 2025-12-05 09:58:20.20633432 +0000 UTC m=+0.180049998 container init fe31666e2b45e0a21b236bbe13167f58c5f388315821ffd61e6cf1afecb4fa1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_chaum, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, RELEASE=main, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, architecture=x86_64)
Dec 05 09:58:20 np0005546420.localdomain podman[298410]: 2025-12-05 09:58:20.2180645 +0000 UTC m=+0.191780198 container start fe31666e2b45e0a21b236bbe13167f58c5f388315821ffd61e6cf1afecb4fa1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_chaum, architecture=x86_64, io.buildah.version=1.41.4, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7)
Dec 05 09:58:20 np0005546420.localdomain podman[298410]: 2025-12-05 09:58:20.218363389 +0000 UTC m=+0.192079097 container attach fe31666e2b45e0a21b236bbe13167f58c5f388315821ffd61e6cf1afecb4fa1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_chaum, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z)
Dec 05 09:58:20 np0005546420.localdomain upbeat_chaum[298432]: 167 167
Dec 05 09:58:20 np0005546420.localdomain systemd[1]: libpod-fe31666e2b45e0a21b236bbe13167f58c5f388315821ffd61e6cf1afecb4fa1e.scope: Deactivated successfully.
Dec 05 09:58:20 np0005546420.localdomain podman[298437]: 2025-12-05 09:58:20.305278722 +0000 UTC m=+0.062598615 container died fe31666e2b45e0a21b236bbe13167f58c5f388315821ffd61e6cf1afecb4fa1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_chaum, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=1763362218, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main)
Dec 05 09:58:20 np0005546420.localdomain podman[298437]: 2025-12-05 09:58:20.344903901 +0000 UTC m=+0.102223754 container remove fe31666e2b45e0a21b236bbe13167f58c5f388315821ffd61e6cf1afecb4fa1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_chaum, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, name=rhceph, GIT_BRANCH=main, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_CLEAN=True, release=1763362218, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64)
Dec 05 09:58:20 np0005546420.localdomain systemd[1]: libpod-conmon-fe31666e2b45e0a21b236bbe13167f58c5f388315821ffd61e6cf1afecb4fa1e.scope: Deactivated successfully.
Dec 05 09:58:20 np0005546420.localdomain sudo[298021]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing).mds e16 new map
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing).mds e16 print_map
                                                           e16
                                                           enable_multiple, ever_enabled_multiple: 1,1
                                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           legacy client fscid: 1
                                                            
                                                           Filesystem 'cephfs' (1)
                                                           fs_name        cephfs
                                                           epoch        16
                                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                                           created        2025-12-05T08:10:30.749420+0000
                                                           modified        2025-12-05T09:53:37.952087+0000
                                                           tableserver        0
                                                           root        0
                                                           session_timeout        60
                                                           session_autoclose        300
                                                           max_file_size        1099511627776
                                                           required_client_features        {}
                                                           last_failure        0
                                                           last_failure_osd_epoch        84
                                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           max_mds        1
                                                           in        0
                                                           up        {0=26492}
                                                           failed        
                                                           damaged        
                                                           stopped        
                                                           data_pools        [6]
                                                           metadata_pool        7
                                                           inline_data        disabled
                                                           balancer        
                                                           bal_rank_mask        -1
                                                           standby_count_wanted        1
                                                           qdb_cluster        leader: 26492 members: 26492
                                                           [mds.mds.np0005546420.eqhasr{0:26492} state up:active seq 16 addr [v2:172.18.0.107:6808/530338393,v1:172.18.0.107:6809/530338393] compat {c=[1],r=[1],i=[17ff]}]
                                                            
                                                            
                                                           Standby daemons:
                                                            
                                                           [mds.mds.np0005546419.rweotn{-1:16917} state up:standby seq 1 addr [v2:172.18.0.106:6808/2431590011,v1:172.18.0.106:6809/2431590011] compat {c=[1],r=[1],i=[17ff]}]
                                                           [mds.mds.np0005546421.tuudjq{-1:26486} state up:standby seq 1 addr [v2:172.18.0.108:6808/812129975,v1:172.18.0.108:6809/812129975] compat {c=[1],r=[1],i=[17ff]}]
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing).osd e88 crush map has features 3314933000852226048, adjusting msgr requires
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing).osd e88 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing).osd e88 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing).osd e88 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.1 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.4 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mon.np0005546420 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.44295 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Saving service mon spec with placement label:mon
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.2 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.44299 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546420", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4101780767' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4101780767' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.26964 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005546420"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Remove daemons mon.np0005546420
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Safe to remove mon.np0005546420: new quorum should be ['np0005546418', 'np0005546421', 'np0005546419'] (from ['np0005546418', 'np0005546421', 'np0005546419'])
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Removing monitor np0005546420 from monmap...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Removing daemon mon.np0005546420 from np0005546420.localdomain -- ports []
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546419 calling monitor election
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546418 calling monitor election
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546421 calling monitor election
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546418 is new leader, mons np0005546418,np0005546421,np0005546419 in quorum (ranks 0,1,2)
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: monmap epoch 11
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: last_changed 2025-12-05T09:58:03.860592+0000
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: min_mon_release 18 (reef)
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: election_strategy: 1
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: 2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: osdmap e88: 6 total, 6 up, 6 in
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mgrmap e28: np0005546419.zhsnqq(active, since 54s), standbys: np0005546420.aoeylc, np0005546421.sukfea, np0005546416.kmqcnq, np0005546418.garyvl
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: overall HEALTH_OK
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2722350363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1198660387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.0 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/529344070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.3 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/724970456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1587672163' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/4187605673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.1 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.4 (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: from='client.26994 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005546420.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Deploying daemon mon.np0005546420 on np0005546420.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing).paxosservice(auth 1..39) refresh upgraded, format 0 -> 3
Dec 05 09:58:20 np0005546420.localdomain sudo[298454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:20 np0005546420.localdomain sudo[298454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:20 np0005546420.localdomain sudo[298454]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:20 np0005546420.localdomain sudo[298472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:20 np0005546420.localdomain sudo[298472]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-9453309a30fdd3e7078039cc92ef5a66c94d6a471da72f104b0e0c0566adc87d-merged.mount: Deactivated successfully.
Dec 05 09:58:21 np0005546420.localdomain podman[298508]: 2025-12-05 09:58:21.155455847 +0000 UTC m=+0.086274685 container create ef134674a1a134880e88d27259db5469a66aca1d3d501b365e9c1d6431ca184e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_mestorf, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, release=1763362218, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:58:21 np0005546420.localdomain systemd[1]: Started libpod-conmon-ef134674a1a134880e88d27259db5469a66aca1d3d501b365e9c1d6431ca184e.scope.
Dec 05 09:58:21 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:58:21 np0005546420.localdomain podman[298508]: 2025-12-05 09:58:21.121477462 +0000 UTC m=+0.052296320 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:21 np0005546420.localdomain podman[298508]: 2025-12-05 09:58:21.230440962 +0000 UTC m=+0.161259790 container init ef134674a1a134880e88d27259db5469a66aca1d3d501b365e9c1d6431ca184e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_mestorf, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 09:58:21 np0005546420.localdomain podman[298508]: 2025-12-05 09:58:21.243423301 +0000 UTC m=+0.174242129 container start ef134674a1a134880e88d27259db5469a66aca1d3d501b365e9c1d6431ca184e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_mestorf, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1763362218, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:58:21 np0005546420.localdomain podman[298508]: 2025-12-05 09:58:21.243733691 +0000 UTC m=+0.174552569 container attach ef134674a1a134880e88d27259db5469a66aca1d3d501b365e9c1d6431ca184e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_mestorf, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:58:21 np0005546420.localdomain gifted_mestorf[298523]: 167 167
Dec 05 09:58:21 np0005546420.localdomain systemd[1]: libpod-ef134674a1a134880e88d27259db5469a66aca1d3d501b365e9c1d6431ca184e.scope: Deactivated successfully.
Dec 05 09:58:21 np0005546420.localdomain podman[298508]: 2025-12-05 09:58:21.248212209 +0000 UTC m=+0.179031037 container died ef134674a1a134880e88d27259db5469a66aca1d3d501b365e9c1d6431ca184e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_mestorf, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, release=1763362218, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.buildah.version=1.41.4, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 05 09:58:21 np0005546420.localdomain podman[298528]: 2025-12-05 09:58:21.35006344 +0000 UTC m=+0.089404270 container remove ef134674a1a134880e88d27259db5469a66aca1d3d501b365e9c1d6431ca184e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_mestorf, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:58:21 np0005546420.localdomain systemd[1]: libpod-conmon-ef134674a1a134880e88d27259db5469a66aca1d3d501b365e9c1d6431ca184e.scope: Deactivated successfully.
Dec 05 09:58:21 np0005546420.localdomain sudo[298472]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-96c0e3d515443dcbf48e24bd2ae5fa17035df5cb2d01e36bf20436b978b84fb2-merged.mount: Deactivated successfully.
Dec 05 09:58:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.2 (monmap changed)...
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:24 np0005546420.localdomain systemd[1]: tmp-crun.euanMW.mount: Deactivated successfully.
Dec 05 09:58:24 np0005546420.localdomain podman[298544]: 2025-12-05 09:58:24.532759691 +0000 UTC m=+0.100444240 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:58:24 np0005546420.localdomain podman[298544]: 2025-12-05 09:58:24.59611406 +0000 UTC m=+0.163798629 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 09:58:24 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:58:26 np0005546420.localdomain sudo[298569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:26 np0005546420.localdomain sudo[298569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:26 np0005546420.localdomain sudo[298569]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:26 np0005546420.localdomain sudo[298587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:58:26 np0005546420.localdomain sudo[298587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:27 np0005546420.localdomain systemd[1]: tmp-crun.1g0UAS.mount: Deactivated successfully.
Dec 05 09:58:27 np0005546420.localdomain podman[298677]: 2025-12-05 09:58:27.917224517 +0000 UTC m=+0.098994115 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph)
Dec 05 09:58:28 np0005546420.localdomain podman[298677]: 2025-12-05 09:58:28.036948619 +0000 UTC m=+0.218718207 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, version=7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_CLEAN=True, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True)
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.5 (monmap changed)...
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:28 np0005546420.localdomain sudo[298587]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:28 np0005546420.localdomain sudo[298792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:28 np0005546420.localdomain sudo[298792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:28 np0005546420.localdomain sudo[298792]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:28 np0005546420.localdomain sudo[298810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:58:28 np0005546420.localdomain sudo[298810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:29 np0005546420.localdomain sudo[298810]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:29 np0005546420.localdomain sudo[298860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:58:29 np0005546420.localdomain sudo[298860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:29 np0005546420.localdomain sudo[298860]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:58:30 np0005546420.localdomain podman[298878]: 2025-12-05 09:58:30.512058812 +0000 UTC m=+0.087341477 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:58:30 np0005546420.localdomain podman[298878]: 2025-12-05 09:58:30.554373193 +0000 UTC m=+0.129655828 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 09:58:30 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.573818) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928712573883, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11256, "num_deletes": 269, "total_data_size": 18427247, "memory_usage": 19473296, "flush_reason": "Manual Compaction"}
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928712668553, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 18162176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11261, "table_properties": {"data_size": 18098440, "index_size": 35406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27269, "raw_key_size": 289251, "raw_average_key_size": 26, "raw_value_size": 17911118, "raw_average_value_size": 1643, "num_data_blocks": 1360, "num_entries": 10900, "num_filter_entries": 10900, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 1764928699, "file_creation_time": 1764928712, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 95054 microseconds, and 23143 cpu microseconds.
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.668658) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 18162176 bytes OK
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.668918) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.670802) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.670837) EVENT_LOG_v1 {"time_micros": 1764928712670825, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.670861) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 18348582, prev total WAL file size 18348582, number of live WAL files 2.
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.673942) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353131' seq:72057594037927935, type:22 .. '6C6F676D0033373634' seq:0, type:0; will stop at (end)
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(17MB) 8(1887B)]
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928712674024, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 18164063, "oldest_snapshot_seqno": -1}
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 10645 keys, 18158846 bytes, temperature: kUnknown
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928712774900, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 18158846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18095784, "index_size": 35377, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26629, "raw_key_size": 284884, "raw_average_key_size": 26, "raw_value_size": 17911769, "raw_average_value_size": 1682, "num_data_blocks": 1359, "num_entries": 10645, "num_filter_entries": 10645, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764928712, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.775599) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 18158846 bytes
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.777457) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.8 rd, 179.7 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(17.3, 0.0 +0.0 blob) out(17.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10905, records dropped: 260 output_compression: NoCompression
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.777487) EVENT_LOG_v1 {"time_micros": 1764928712777475, "job": 4, "event": "compaction_finished", "compaction_time_micros": 101033, "compaction_time_cpu_micros": 29368, "output_level": 6, "num_output_files": 1, "total_output_size": 18158846, "num_input_records": 10905, "num_output_records": 10645, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928712781178, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928712781387, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 05 09:58:32 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:32.673837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:58:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:58:38 np0005546420.localdomain systemd[1]: tmp-crun.7iDifh.mount: Deactivated successfully.
Dec 05 09:58:38 np0005546420.localdomain podman[298898]: 2025-12-05 09:58:38.524208403 +0000 UTC m=+0.097984135 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:58:38 np0005546420.localdomain podman[298898]: 2025-12-05 09:58:38.555180325 +0000 UTC m=+0.128955997 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 09:58:38 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/2787489395' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: mgrmap e29: np0005546419.zhsnqq(active, since 86s), standbys: np0005546420.aoeylc, np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:38 np0005546420.localdomain ceph-mon[298353]: pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:39 np0005546420.localdomain sudo[298917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:39 np0005546420.localdomain sudo[298917]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:39 np0005546420.localdomain sudo[298917]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:39 np0005546420.localdomain sudo[298935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:58:39 np0005546420.localdomain sudo[298935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:40 np0005546420.localdomain systemd[1]: tmp-crun.YpVhHz.mount: Deactivated successfully.
Dec 05 09:58:40 np0005546420.localdomain podman[299021]: 2025-12-05 09:58:40.478385505 +0000 UTC m=+0.103822833 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, architecture=x86_64, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, RELEASE=main)
Dec 05 09:58:40 np0005546420.localdomain podman[299021]: 2025-12-05 09:58:40.583057325 +0000 UTC m=+0.208494623 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_BRANCH=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph)
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='client.27006 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: Reconfig service osd.default_drive_group
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:58:40 np0005546420.localdomain podman[299053]: 2025-12-05 09:58:40.744056045 +0000 UTC m=+0.100194492 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:58:40 np0005546420.localdomain podman[299053]: 2025-12-05 09:58:40.755055883 +0000 UTC m=+0.111194310 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:58:40 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:58:41 np0005546420.localdomain sudo[298935]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr handle_mgr_map Activating!
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr handle_mgr_map I am now activating
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: balancer
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [balancer INFO root] Starting
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [balancer INFO root] Optimize plan auto_2025-12-05_09:58:41
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 05 09:58:41 np0005546420.localdomain sshd[293837]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 09:58:41 np0005546420.localdomain systemd[1]: session-66.scope: Deactivated successfully.
Dec 05 09:58:41 np0005546420.localdomain systemd[1]: session-66.scope: Consumed 30.425s CPU time.
Dec 05 09:58:41 np0005546420.localdomain systemd-logind[762]: Session 66 logged out. Waiting for processes to exit.
Dec 05 09:58:41 np0005546420.localdomain systemd-logind[762]: Removed session 66.
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [cephadm WARNING root] removing stray HostCache host record np0005546416.localdomain.devices.0
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [WRN] : removing stray HostCache host record np0005546416.localdomain.devices.0
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: cephadm
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: crash
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: devicehealth
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [devicehealth INFO root] Starting
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: iostat
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: nfs
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: orchestrator
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: pg_autoscaler
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: progress
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Loading...
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f38d31d1160>, <progress.module.GhostEvent object at 0x7f38d31d13a0>, <progress.module.GhostEvent object at 0x7f38d31d1e80>, <progress.module.GhostEvent object at 0x7f38d31d1e20>, <progress.module.GhostEvent object at 0x7f38d31d1e50>, <progress.module.GhostEvent object at 0x7f38d09a68e0>, <progress.module.GhostEvent object at 0x7f38d09a6910>, <progress.module.GhostEvent object at 0x7f38d09a6940>, <progress.module.GhostEvent object at 0x7f38d09a6970>, <progress.module.GhostEvent object at 0x7f38d09a69a0>, <progress.module.GhostEvent object at 0x7f38d09a69d0>, <progress.module.GhostEvent object at 0x7f38d09a6a00>, <progress.module.GhostEvent object at 0x7f38d09a6a30>, <progress.module.GhostEvent object at 0x7f38d09a6a60>, <progress.module.GhostEvent object at 0x7f38d09a6a90>, <progress.module.GhostEvent object at 0x7f38d09a6ac0>, <progress.module.GhostEvent object at 0x7f38d09a6af0>, <progress.module.GhostEvent object at 0x7f38d09a6b20>, <progress.module.GhostEvent object at 0x7f38d09a6b50>, <progress.module.GhostEvent object at 0x7f38d09a6b80>, <progress.module.GhostEvent object at 0x7f38d09a6bb0>, <progress.module.GhostEvent object at 0x7f38d09a6be0>, <progress.module.GhostEvent object at 0x7f38d09a6c10>, <progress.module.GhostEvent object at 0x7f38d09a6c40>, <progress.module.GhostEvent object at 0x7f38d09a6c70>, <progress.module.GhostEvent object at 0x7f38d09a6ca0>, <progress.module.GhostEvent object at 0x7f38d09a6cd0>, <progress.module.GhostEvent object at 0x7f38d09a6d00>, <progress.module.GhostEvent object at 0x7f38d09a6d30>, <progress.module.GhostEvent object at 0x7f38d09a6d60>, <progress.module.GhostEvent object at 0x7f38d09a6d90>, <progress.module.GhostEvent object at 0x7f38d09a6dc0>, <progress.module.GhostEvent object at 0x7f38d09a6df0>, <progress.module.GhostEvent object at 0x7f38d09a6e20>, <progress.module.GhostEvent object at 0x7f38d09a6e50>, <progress.module.GhostEvent object at 0x7f38d09a6e80>, <progress.module.GhostEvent object at 0x7f38d09a6eb0>, <progress.module.GhostEvent object at 0x7f38d09a6ee0>, <progress.module.GhostEvent object at 0x7f38d09a6f10>, <progress.module.GhostEvent object at 0x7f38d09a6f40>, <progress.module.GhostEvent object at 0x7f38d09a6f70>, <progress.module.GhostEvent object at 0x7f38d09a6fa0>, <progress.module.GhostEvent object at 0x7f38d09a6fd0>, <progress.module.GhostEvent object at 0x7f38d09af040>, <progress.module.GhostEvent object at 0x7f38d09af070>, <progress.module.GhostEvent object at 0x7f38d09af0a0>, <progress.module.GhostEvent object at 0x7f38d09af0d0>, <progress.module.GhostEvent object at 0x7f38d09af100>, <progress.module.GhostEvent object at 0x7f38d09af130>, <progress.module.GhostEvent object at 0x7f38d09af160>] historic events
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Loaded OSDMap, ready.
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] recovery thread starting
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] starting setup
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: rbd_support
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: restful
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [restful INFO root] server_addr: :: server_port: 8003
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: status
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [restful WARNING root] server not running: no certificate configured
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: telemetry
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: mgr load Constructed class from module: volumes
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] PerfHandler: starting
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:58:41.576+0000 7f38bf8d8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:58:41.576+0000 7f38bf8d8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:58:41.576+0000 7f38bf8d8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:58:41.576+0000 7f38bf8d8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:58:41.576+0000 7f38bf8d8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:58:41.577+0000 7f38bc8d2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:58:41.577+0000 7f38bc8d2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:58:41.577+0000 7f38bc8d2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:58:41.577+0000 7f38bc8d2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:58:41.577+0000 7f38bc8d2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] TaskHandler: starting
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 05 09:58:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] setup complete
Dec 05 09:58:41 np0005546420.localdomain sshd[299306]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 09:58:41 np0005546420.localdomain sshd[299306]: Accepted publickey for ceph-admin from 192.168.122.107 port 33464 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 09:58:41 np0005546420.localdomain systemd-logind[762]: New session 69 of user ceph-admin.
Dec 05 09:58:41 np0005546420.localdomain systemd[1]: Started Session 69 of User ceph-admin.
Dec 05 09:58:41 np0005546420.localdomain sshd[299306]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 09:58:41 np0005546420.localdomain sudo[299310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:41 np0005546420.localdomain sudo[299310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:41 np0005546420.localdomain sudo[299310]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:41 np0005546420.localdomain sudo[299328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 09:58:41 np0005546420.localdomain sudo[299328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:58:42] ENGINE Bus STARTING
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:58:42] ENGINE Bus STARTING
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:58:42] ENGINE Serving on http://172.18.0.107:8765
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:58:42] ENGINE Serving on http://172.18.0.107:8765
Dec 05 09:58:42 np0005546420.localdomain podman[299425]: 2025-12-05 09:58:42.788781482 +0000 UTC m=+0.096620781 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:58:42] ENGINE Serving on https://172.18.0.107:7150
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:58:42] ENGINE Serving on https://172.18.0.107:7150
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:58:42] ENGINE Bus STARTED
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:58:42] ENGINE Bus STARTED
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:58:42] ENGINE Client ('172.18.0.107', 59856) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:58:42 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:58:42] ENGINE Client ('172.18.0.107', 59856) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 09:58:42 np0005546420.localdomain podman[299425]: 2025-12-05 09:58:42.888628143 +0000 UTC m=+0.196467432 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, name=rhceph, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:58:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:43 np0005546420.localdomain sudo[299328]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:43 np0005546420.localdomain ceph-mgr[286940]: [devicehealth INFO root] Check health
Dec 05 09:58:43 np0005546420.localdomain sudo[299557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:43 np0005546420.localdomain sudo[299557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:58:43 np0005546420.localdomain sudo[299557]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:43 np0005546420.localdomain sudo[299583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 09:58:43 np0005546420.localdomain sudo[299583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:43 np0005546420.localdomain podman[299582]: 2025-12-05 09:58:43.727083156 +0000 UTC m=+0.094243259 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, tcib_managed=true)
Dec 05 09:58:43 np0005546420.localdomain podman[299582]: 2025-12-05 09:58:43.739763097 +0000 UTC m=+0.106923230 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible)
Dec 05 09:58:43 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:58:44 np0005546420.localdomain sudo[299583]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:44 np0005546420.localdomain sudo[299651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:44 np0005546420.localdomain sudo[299651]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:44 np0005546420.localdomain sudo[299651]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:44 np0005546420.localdomain sudo[299669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 09:58:44 np0005546420.localdomain sudo[299669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.668931) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928724668986, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 542, "num_deletes": 261, "total_data_size": 2407447, "memory_usage": 2435976, "flush_reason": "Manual Compaction"}
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing).osd e88 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing).osd e88 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(synchronizing).osd e89 e89: 6 total, 6 up, 6 in
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928724686763, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 2337613, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11262, "largest_seqno": 11803, "table_properties": {"data_size": 2333860, "index_size": 1540, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9919, "raw_average_key_size": 21, "raw_value_size": 2325808, "raw_average_value_size": 5056, "num_data_blocks": 62, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928718, "oldest_key_time": 1764928718, "file_creation_time": 1764928724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 17877 microseconds, and 3171 cpu microseconds.
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
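[editor's note] The rocksdb EVENT_LOG_v1 lines are structured: everything after the marker is one JSON object, so flush and compaction activity can be mined straight from the journal. A minimal parser sketch (not part of Ceph):

    import json

    def event_payload(line: str):
        """Return the JSON dict from a rocksdb EVENT_LOG_v1 line, else None."""
        marker = "EVENT_LOG_v1 "
        idx = line.find(marker)
        if idx == -1:
            return None
        return json.loads(line[idx + len(marker):])

    ev = event_payload('rocksdb: EVENT_LOG_v1 {"job": 5, "event": "flush_started", '
                       '"total_data_size": 2407447}')
    assert ev["event"] == "flush_started"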
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: mgrmap e30: np0005546419.zhsnqq(active, since 90s), standbys: np0005546420.aoeylc, np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/2749176520' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: Activating manager daemon np0005546420.aoeylc
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: osdmap e89: 6 total, 6 up, 6 in
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/2749176520' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: mgrmap e31: np0005546420.aoeylc(active, starting, since 0.0338754s), standbys: np0005546421.sukfea, np0005546418.garyvl
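[editor's note] The sequence above is a client-driven mgr failover: client.admin dispatches "mgr fail", and mgrmap e31 promotes the standby np0005546420.aoeylc. A hedged sketch of reproducing it from any host that holds the admin keyring, using only the documented `ceph mgr dump` and `ceph mgr fail` commands:

    import json, subprocess

    def active_mgr() -> str:
        out = subprocess.check_output(["ceph", "mgr", "dump", "--format", "json"])
        return json.loads(out)["active_name"]

    before = active_mgr()                            # e.g. np0005546419.zhsnqq
    subprocess.check_call(["ceph", "mgr", "fail"])   # demote the active mgr
    # one of the standbys takes over; poll active_mgr() until it changes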
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' 
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr metadata", "who": "np0005546418.garyvl", "id": "np0005546418.garyvl"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mds metadata"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: Manager daemon np0005546420.aoeylc is now available
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: removing stray HostCache host record np0005546416.localdomain.devices.0
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain.devices.0"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain.devices.0"}]': finished
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain.devices.0"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain.devices.0"}]': finished
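[editor's note] Housekeeping by the newly active mgr: cephadm deletes a cached device record for the departed host np0005546416 via config-key del (the log shows the same key deleted twice, once per serve-loop pass; the second delete is a no-op). A sketch of the equivalent manual cleanup, with the departed host name as the only assumption:

    import json, subprocess

    # `ceph config-key ls` emits a JSON array of key names
    keys = json.loads(subprocess.check_output(["ceph", "config-key", "ls"]))
    for key in keys:
        if key.startswith("mgr/cephadm/host.np0005546416"):
            subprocess.check_call(["ceph", "config-key", "del", key])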
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546420.aoeylc/mirror_snapshot_schedule"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546420.aoeylc/trash_purge_schedule"} : dispatch
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: mgrmap e32: np0005546420.aoeylc(active, since 1.08376s), standbys: np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:09:58:42] ENGINE Bus STARTING
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:09:58:42] ENGINE Serving on http://172.18.0.107:8765
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:09:58:42] ENGINE Serving on https://172.18.0.107:7150
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:09:58:42] ENGINE Bus STARTED
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:09:58:42] ENGINE Client ('172.18.0.107', 59856) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
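[editor's note] On activation the mgr's embedded CherryPy bus brings up one plaintext and one TLS listener; the handshake EOF right after is what a health probe or scanner closing mid-handshake looks like, not necessarily a fault. A hedged liveness probe against both endpoints (addresses taken from the log; the self-signed-certificate assumption is mine):

    import ssl, urllib.error, urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # assumes a cluster-internal self-signed cert

    for url in ("http://172.18.0.107:8765", "https://172.18.0.107:7150"):
        try:
            urllib.request.urlopen(url, timeout=5,
                                   context=ctx if url.startswith("https") else None)
            print(url, "up")
        except urllib.error.HTTPError as e:
            print(url, "listening, HTTP", e.code)  # any HTTP status proves the listener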
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: mgrmap e33: np0005546420.aoeylc(active, since 2s), standbys: np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.686805) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 2337613 bytes OK
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.686824) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.688732) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.688748) EVENT_LOG_v1 {"time_micros": 1764928724688743, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.688764) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2403488, prev total WAL file size 2403611, number of live WAL files 2.
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.689385) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(2282KB)], [15(17MB)]
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928724689485, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 20496459, "oldest_snapshot_seqno": -1}
Dec 05 09:58:44 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:44 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
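[editor's note] This pair of mgr lines recurs through the rest of the window: the freshly activated mgr asks for mon.np0005546420's metadata, apparently while that mon is still synchronizing (see its "@-1(synchronizing)" lines above), so the query returns ENOENT, errno 2. Once the mon finishes syncing, the same query should succeed; a sketch of checking by hand:

    import json, subprocess

    # `ceph mon metadata <id>` emits a JSON object for that monitor
    meta = json.loads(subprocess.check_output(
        ["ceph", "mon", "metadata", "np0005546420"]))
    print(meta.get("ceph_version"))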
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 10568 keys, 17253494 bytes, temperature: kUnknown
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928724809792, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 17253494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17192878, "index_size": 33123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 283861, "raw_average_key_size": 26, "raw_value_size": 17012157, "raw_average_value_size": 1609, "num_data_blocks": 1261, "num_entries": 10568, "num_filter_entries": 10568, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764928724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.810201) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 17253494 bytes
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.812246) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.2 rd, 143.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 17.3 +0.0 blob) out(16.5 +0.0 blob), read-write-amplify(16.1) write-amplify(7.4) OK, records in: 11105, records dropped: 537 output_compression: NoCompression
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.812288) EVENT_LOG_v1 {"time_micros": 1764928724812269, "job": 6, "event": "compaction_finished", "compaction_time_micros": 120404, "compaction_time_cpu_micros": 46214, "output_level": 6, "num_output_files": 1, "total_output_size": 17253494, "num_input_records": 11105, "num_output_records": 10568, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928724812987, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928724817112, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.689272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.817219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.817225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.817228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.817231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:58:44 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:58:44.817234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:58:44 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 05 09:58:44 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 05 09:58:44 np0005546420.localdomain ceph-mgr[286940]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:58:44 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:58:45 np0005546420.localdomain sudo[299669]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
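[editor's note] The adjust/fail pairs above are the cephadm osd_memory_target autotuner at work: the per-OSD share it computes is below the option's hard minimum, so the config set is rejected, and (per the "config rm" lines further down) any previous override is removed instead. The arithmetic, for the record:

    target  = 877246668               # what the autotuner wanted to set
    minimum = 939524096               # osd_memory_target hard floor
    print(round(target / 2**20, 1))   # 836.6 (MiB), matching "836.6M" above
    print(minimum / 2**20)            # 896.0 MiB
    assert target < minimum           # hence "below minimum"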
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:45 np0005546420.localdomain sudo[299706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:58:45 np0005546420.localdomain sudo[299706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:45 np0005546420.localdomain sudo[299706]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:45 np0005546420.localdomain sudo[299724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:58:45 np0005546420.localdomain sudo[299724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:45 np0005546420.localdomain sudo[299724]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:45 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:45 np0005546420.localdomain sudo[299742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:58:45 np0005546420.localdomain sudo[299742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:45 np0005546420.localdomain sudo[299742]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:45 np0005546420.localdomain sudo[299760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:45 np0005546420.localdomain sudo[299760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:45 np0005546420.localdomain sudo[299760]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:45 np0005546420.localdomain sudo[299778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:58:45 np0005546420.localdomain sudo[299778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:45 np0005546420.localdomain sudo[299778]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain sudo[299812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:58:46 np0005546420.localdomain sudo[299812]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[299812]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain sudo[299830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:58:46 np0005546420.localdomain sudo[299830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[299830]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain sudo[299848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain sudo[299848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[299848]: pam_unix(sudo:session): session closed for user root
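[editor's note] The sudo burst above (repeated below for the admin keyring) is cephadm's safe file-distribution pattern: stage a copy with a .new suffix, fix owner and mode while nothing references it, then mv it over the destination so readers never observe a partially written ceph.conf. A same-filesystem Python sketch of the idea (the log's mv from /tmp may cross filesystems, which weakens the atomicity this version gets from rename):

    import os, tempfile

    def install_file(content: bytes, dest: str, mode: int = 0o644) -> None:
        """Write dest atomically: stage as a sibling *.new, then rename."""
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest), suffix=".new")
        with os.fdopen(fd, "wb") as f:
            f.write(content)
        os.chmod(tmp, mode)
        os.replace(tmp, dest)  # atomic within one filesystem, like the /bin/mv above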
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain sudo[299866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:58:46 np0005546420.localdomain sudo[299866]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[299866]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mgr.np0005546419.zhsnqq 172.18.0.106:0/3556973025; not ready for session (expect reconnect)
Dec 05 09:58:46 np0005546420.localdomain sudo[299884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:58:46 np0005546420.localdomain sudo[299884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[299884]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain sudo[299902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:58:46 np0005546420.localdomain sudo[299902]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[299902]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain sudo[299920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:46 np0005546420.localdomain sudo[299920]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[299920]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain sudo[299938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:58:46 np0005546420.localdomain sudo[299938]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[299938]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain sudo[299972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:58:46 np0005546420.localdomain sudo[299972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[299972]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: mgrmap e34: np0005546420.aoeylc(active, since 4s), standbys: np0005546421.sukfea, np0005546418.garyvl
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain ceph-mon[298353]: Standby manager daemon np0005546419.zhsnqq started
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:46 np0005546420.localdomain sudo[299990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:58:46 np0005546420.localdomain sudo[299990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[299990]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain sudo[300008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:58:46 np0005546420.localdomain sudo[300008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[300008]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:46 np0005546420.localdomain sudo[300026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:58:46 np0005546420.localdomain sudo[300026]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:46 np0005546420.localdomain sudo[300026]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:46 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:47 np0005546420.localdomain sudo[300044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:58:47 np0005546420.localdomain sudo[300044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300044]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 np0005546420.localdomain sudo[300062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:58:47 np0005546420.localdomain sudo[300062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300062]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:58:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:58:47 np0005546420.localdomain sudo[300080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:47 np0005546420.localdomain sudo[300080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300080]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:58:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 09:58:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:58:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18212 "" "Go-http-client/1.1"
Dec 05 09:58:47 np0005546420.localdomain sudo[300098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:58:47 np0005546420.localdomain sudo[300098]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300098]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:47 np0005546420.localdomain sudo[300132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:58:47 np0005546420.localdomain sudo[300132]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300132]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 np0005546420.localdomain sudo[300150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 09:58:47 np0005546420.localdomain sudo[300150]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300150]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:47 np0005546420.localdomain sudo[300168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:47 np0005546420.localdomain sudo[300168]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300168]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:47 np0005546420.localdomain sudo[300186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:58:47 np0005546420.localdomain sudo[300186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300186]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:47 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:47 np0005546420.localdomain sudo[300204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:58:47 np0005546420.localdomain sudo[300204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300204]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 np0005546420.localdomain sudo[300222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:58:47 np0005546420.localdomain sudo[300222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300222]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:47 np0005546420.localdomain sudo[300240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:47 np0005546420.localdomain sudo[300240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:47 np0005546420.localdomain sudo[300240]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:48 np0005546420.localdomain sudo[300258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:58:48 np0005546420.localdomain sudo[300258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:48 np0005546420.localdomain sudo[300258]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:48 np0005546420.localdomain sudo[300292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:58:48 np0005546420.localdomain sudo[300292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:48 np0005546420.localdomain sudo[300292]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:48 np0005546420.localdomain sudo[300310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 09:58:48 np0005546420.localdomain sudo[300310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:48 np0005546420.localdomain sudo[300310]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:48 np0005546420.localdomain sudo[300328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:48 np0005546420.localdomain sudo[300328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:48 np0005546420.localdomain sudo[300328]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:48 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] update: starting ev 792c119f-56cd-4189-990d-9b1875dd9a20 (Updating node-proxy deployment (+4 -> 4))
Dec 05 09:58:48 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] complete: finished ev 792c119f-56cd-4189-990d-9b1875dd9a20 (Updating node-proxy deployment (+4 -> 4))
Dec 05 09:58:48 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Completed event 792c119f-56cd-4189-990d-9b1875dd9a20 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 05 09:58:48 np0005546420.localdomain sudo[300346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:58:48 np0005546420.localdomain sudo[300346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:48 np0005546420.localdomain sudo[300346]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:48 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:48 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: mgrmap e35: np0005546420.aoeylc(active, since 5s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} : dispatch
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:58:48 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:58:48 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:58:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:58:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:58:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:58:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:58:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:58:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:58:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:58:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:58:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:58:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:58:49 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Dec 05 09:58:49 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:49 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:49 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:58:49 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:58:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:58:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:58:50 np0005546420.localdomain systemd[1]: tmp-crun.L30y4g.mount: Deactivated successfully.
Dec 05 09:58:50 np0005546420.localdomain podman[300364]: 2025-12-05 09:58:50.531340024 +0000 UTC m=+0.100225923 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 09:58:50 np0005546420.localdomain podman[300364]: 2025-12-05 09:58:50.549296446 +0000 UTC m=+0.118182365 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Dec 05 09:58:50 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:58:50 np0005546420.localdomain podman[300365]: 2025-12-05 09:58:50.632923587 +0000 UTC m=+0.197102121 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:58:50 np0005546420.localdomain podman[300365]: 2025-12-05 09:58:50.673401793 +0000 UTC m=+0.237580327 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:58:50 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:58:50 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:50 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:50 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:58:50 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:58:51 np0005546420.localdomain sudo[300405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:51 np0005546420.localdomain sudo[300405]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:51 np0005546420.localdomain sudo[300405]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:51 np0005546420.localdomain sudo[300423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:51 np0005546420.localdomain sudo[300423]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:51 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 05 09:58:51 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Writing back 50 completed events
Dec 05 09:58:51 np0005546420.localdomain systemd[1]: tmp-crun.iwNora.mount: Deactivated successfully.
Dec 05 09:58:51 np0005546420.localdomain podman[300458]: 2025-12-05 09:58:51.586512742 +0000 UTC m=+0.089122032 container create dbcc346bfe94a9d600650f4a8d6d7e0c9de9c4da05f62530e5e9284167a93fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_knuth, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 09:58:51 np0005546420.localdomain podman[300458]: 2025-12-05 09:58:51.550643689 +0000 UTC m=+0.053253019 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:51 np0005546420.localdomain systemd[1]: Started libpod-conmon-dbcc346bfe94a9d600650f4a8d6d7e0c9de9c4da05f62530e5e9284167a93fff.scope.
Dec 05 09:58:51 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:58:51 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:51 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:51 np0005546420.localdomain podman[300458]: 2025-12-05 09:58:51.70971411 +0000 UTC m=+0.212323390 container init dbcc346bfe94a9d600650f4a8d6d7e0c9de9c4da05f62530e5e9284167a93fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_knuth, release=1763362218, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, architecture=x86_64)
Dec 05 09:58:51 np0005546420.localdomain podman[300458]: 2025-12-05 09:58:51.725162975 +0000 UTC m=+0.227772255 container start dbcc346bfe94a9d600650f4a8d6d7e0c9de9c4da05f62530e5e9284167a93fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_knuth, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:58:51 np0005546420.localdomain podman[300458]: 2025-12-05 09:58:51.725467824 +0000 UTC m=+0.228077104 container attach dbcc346bfe94a9d600650f4a8d6d7e0c9de9c4da05f62530e5e9284167a93fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_knuth, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True)
Dec 05 09:58:51 np0005546420.localdomain silly_knuth[300473]: 167 167
Dec 05 09:58:51 np0005546420.localdomain systemd[1]: libpod-dbcc346bfe94a9d600650f4a8d6d7e0c9de9c4da05f62530e5e9284167a93fff.scope: Deactivated successfully.
Dec 05 09:58:51 np0005546420.localdomain podman[300458]: 2025-12-05 09:58:51.729443887 +0000 UTC m=+0.232053197 container died dbcc346bfe94a9d600650f4a8d6d7e0c9de9c4da05f62530e5e9284167a93fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_knuth, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=1763362218, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 09:58:51 np0005546420.localdomain podman[300478]: 2025-12-05 09:58:51.835030123 +0000 UTC m=+0.090529344 container remove dbcc346bfe94a9d600650f4a8d6d7e0c9de9c4da05f62530e5e9284167a93fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_knuth, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.expose-services=, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, com.redhat.component=rhceph-container)
Dec 05 09:58:51 np0005546420.localdomain systemd[1]: libpod-conmon-dbcc346bfe94a9d600650f4a8d6d7e0c9de9c4da05f62530e5e9284167a93fff.scope: Deactivated successfully.
Dec 05 09:58:52 np0005546420.localdomain sudo[300423]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:52 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:58:52 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:58:52 np0005546420.localdomain sudo[300501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:58:52 np0005546420.localdomain sudo[300501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:52 np0005546420.localdomain sudo[300501]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:52 np0005546420.localdomain sudo[300519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:58:52 np0005546420.localdomain sudo[300519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-723b0fc618bfe97cf2288e0496a25e6af667e62bc9413e4d7bf4e0c3209384dd-merged.mount: Deactivated successfully.
Dec 05 09:58:52 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:52 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:52 np0005546420.localdomain podman[300554]: 2025-12-05 09:58:52.771106929 +0000 UTC m=+0.062473502 container create b3d2b9427627593f83af9218f7400ab0602beacb5a97cef7f3d20248c46f4ab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_babbage, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, distribution-scope=public)
Dec 05 09:58:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:58:52 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:58:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:52 np0005546420.localdomain systemd[1]: Started libpod-conmon-b3d2b9427627593f83af9218f7400ab0602beacb5a97cef7f3d20248c46f4ab5.scope.
Dec 05 09:58:52 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:58:52 np0005546420.localdomain podman[300554]: 2025-12-05 09:58:52.745044308 +0000 UTC m=+0.036410891 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:58:52 np0005546420.localdomain podman[300554]: 2025-12-05 09:58:52.853782621 +0000 UTC m=+0.145149204 container init b3d2b9427627593f83af9218f7400ab0602beacb5a97cef7f3d20248c46f4ab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_babbage, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, vcs-type=git, release=1763362218, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, build-date=2025-11-26T19:44:28Z)
Dec 05 09:58:52 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.44402 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:58:52 np0005546420.localdomain podman[300554]: 2025-12-05 09:58:52.864755719 +0000 UTC m=+0.156122292 container start b3d2b9427627593f83af9218f7400ab0602beacb5a97cef7f3d20248c46f4ab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_babbage, name=rhceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, version=7, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, release=1763362218, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:58:52 np0005546420.localdomain podman[300554]: 2025-12-05 09:58:52.865066478 +0000 UTC m=+0.156433051 container attach b3d2b9427627593f83af9218f7400ab0602beacb5a97cef7f3d20248c46f4ab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_babbage, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, architecture=x86_64, ceph=True, GIT_CLEAN=True, vcs-type=git, version=7, CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container)
Dec 05 09:58:52 np0005546420.localdomain gallant_babbage[300568]: 167 167
Dec 05 09:58:52 np0005546420.localdomain systemd[1]: libpod-b3d2b9427627593f83af9218f7400ab0602beacb5a97cef7f3d20248c46f4ab5.scope: Deactivated successfully.
Dec 05 09:58:52 np0005546420.localdomain podman[300554]: 2025-12-05 09:58:52.868868335 +0000 UTC m=+0.160234958 container died b3d2b9427627593f83af9218f7400ab0602beacb5a97cef7f3d20248c46f4ab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_babbage, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_BRANCH=main, release=1763362218, name=rhceph, io.openshift.expose-services=, version=7, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git)
Dec 05 09:58:52 np0005546420.localdomain podman[300573]: 2025-12-05 09:58:52.977025631 +0000 UTC m=+0.097781138 container remove b3d2b9427627593f83af9218f7400ab0602beacb5a97cef7f3d20248c46f4ab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_babbage, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=1763362218, build-date=2025-11-26T19:44:28Z, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:58:52 np0005546420.localdomain systemd[1]: libpod-conmon-b3d2b9427627593f83af9218f7400ab0602beacb5a97cef7f3d20248c46f4ab5.scope: Deactivated successfully.
Dec 05 09:58:53 np0005546420.localdomain sudo[300519]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:53 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:58:53 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:58:53 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 09:58:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b6241e21656484a4d20adbb1503851f16b39b449a6ccd96e6cb67f07015833fc-merged.mount: Deactivated successfully.
Dec 05 09:58:53 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:53 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:54 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 09:58:54 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 09:58:54 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:54 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='client.44402 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 09:58:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(probing) e11 handle_auth_request failed to assign global_id
Dec 05 09:58:55 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:58:55 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.27057 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:58:55 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Saving service mon spec with placement label:mon
Dec 05 09:58:55 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Dec 05 09:58:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:58:55 np0005546420.localdomain podman[300597]: 2025-12-05 09:58:55.517181184 +0000 UTC m=+0.091159885 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 09:58:55 np0005546420.localdomain podman[300597]: 2025-12-05 09:58:55.561341802 +0000 UTC m=+0.135320503 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 09:58:55 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:58:55 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:55 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:55 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546421 (monmap changed)...
Dec 05 09:58:55 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546421 (monmap changed)...
Dec 05 09:58:55 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:58:55 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:58:56 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:56 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:56 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] update: starting ev 5003effe-f990-455c-91e4-280add23abdd (Updating node-proxy deployment (+4 -> 4))
Dec 05 09:58:56 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] complete: finished ev 5003effe-f990-455c-91e4-280add23abdd (Updating node-proxy deployment (+4 -> 4))
Dec 05 09:58:56 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Completed event 5003effe-f990-455c-91e4-280add23abdd (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 05 09:58:56 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.27063 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546420", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: from='client.27057 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: Saving service mon spec with placement label:mon
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:58:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:58:57 np0005546420.localdomain sudo[300621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:58:57 np0005546420.localdomain sudo[300621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:58:57 np0005546420.localdomain sudo[300621]: pam_unix(sudo:session): session closed for user root
Dec 05 09:58:57 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546418 (monmap changed)...
Dec 05 09:58:57 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546418 (monmap changed)...
Dec 05 09:58:57 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 05 09:58:57 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 05 09:58:57 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:58:57 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:57 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:58 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546419 (monmap changed)...
Dec 05 09:58:58 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546419 (monmap changed)...
Dec 05 09:58:58 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:58:58 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:58:58 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:58 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:58:59 np0005546420.localdomain sshd[296068]: Received disconnect from 192.168.122.11 port 44154:11: disconnected by user
Dec 05 09:58:59 np0005546420.localdomain sshd[296068]: Disconnected from user tripleo-admin 192.168.122.11 port 44154
Dec 05 09:58:59 np0005546420.localdomain sshd[296013]: pam_unix(sshd:session): session closed for user tripleo-admin
Dec 05 09:58:59 np0005546420.localdomain systemd[1]: session-67.scope: Deactivated successfully.
Dec 05 09:58:59 np0005546420.localdomain systemd[1]: session-67.scope: Consumed 1.876s CPU time.
Dec 05 09:58:59 np0005546420.localdomain systemd-logind[762]: Session 67 logged out. Waiting for processes to exit.
Dec 05 09:58:59 np0005546420.localdomain systemd-logind[762]: Removed session 67.
Dec 05 09:58:59 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:58:59 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:58:59 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 05 09:59:00 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (22) Invalid argument
Dec 05 09:59:00 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:59:00 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (22) Invalid argument
Dec 05 09:59:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@-1(probing) e12  my rank is now 3 (was -1)
Dec 05 09:59:00 np0005546420.localdomain ceph-mon[298353]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:59:00 np0005546420.localdomain ceph-mon[298353]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 05 09:59:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:01 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:59:01 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Writing back 50 completed events
Dec 05 09:59:01 np0005546420.localdomain systemd[292579]: Starting Mark boot as successful...
Dec 05 09:59:01 np0005546420.localdomain podman[300639]: 2025-12-05 09:59:01.511118593 +0000 UTC m=+0.090394491 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:59:01 np0005546420.localdomain systemd[292579]: Finished Mark boot as successful.
Dec 05 09:59:01 np0005546420.localdomain podman[300639]: 2025-12-05 09:59:01.523442562 +0000 UTC m=+0.102718460 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:59:01 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 09:59:01 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:59:01 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (22) Invalid argument
Dec 05 09:59:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:01.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:01.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 09:59:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:01.887 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 09:59:02 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:59:02 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (22) Invalid argument
Dec 05 09:59:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(electing) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:03 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(electing) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:03 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:59:03 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (22) Invalid argument
Dec 05 09:59:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(electing) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:03.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:03.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 09:59:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:59:04.119 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:59:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:59:04.120 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:59:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 09:59:04.121 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:59:04 np0005546420.localdomain ceph-mds[283770]: mds.beacon.mds.np0005546420.eqhasr missed beacon ack from the monitors
Dec 05 09:59:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(electing) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:04 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:59:04 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (22) Invalid argument
Dec 05 09:59:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:04.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:04.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 09:59:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:04.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:05 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:05 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:59:05 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546420: (22) Invalid argument
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(electing) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(electing) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mon.np0005546421 (monmap changed)...
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='client.27063 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546420", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mon.np0005546418 (monmap changed)...
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mon.np0005546419 (monmap changed)...
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/3129033862' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546419 calling monitor election
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546418 calling monitor election
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546421 calling monitor election
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546418 is new leader, mons np0005546418,np0005546421,np0005546419 in quorum (ranks 0,1,2)
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: monmap epoch 12
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: last_changed 2025-12-05T09:59:00.442079+0000
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: min_mon_release 18 (reef)
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: election_strategy: 1
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: 2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: 3: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546420
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: osdmap e89: 6 total, 6 up, 6 in
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mgrmap e35: np0005546420.aoeylc(active, since 24s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: Health check failed: 1/4 mons down, quorum np0005546418,np0005546421,np0005546419 (MON_DOWN)
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: Health detail: HEALTH_WARN 1/4 mons down, quorum np0005546418,np0005546421,np0005546419
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: [WRN] MON_DOWN: 1/4 mons down, quorum np0005546418,np0005546421,np0005546419
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]:     mon.np0005546420 (rank 3) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum)
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(peon) e12 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(peon) e12 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(peon) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:05 np0005546420.localdomain ceph-mon[298353]: mgrc update_daemon_metadata mon.np0005546420 metadata {addrs=[v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005546420.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005546420.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(peon) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(peon) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(peon) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(peon) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420 calling monitor election
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420 calling monitor election
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546419 calling monitor election
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546418 calling monitor election
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546421 calling monitor election
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546418 is new leader, mons np0005546418,np0005546421,np0005546419,np0005546420 in quorum (ranks 0,1,2,3)
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: monmap epoch 12
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: last_changed 2025-12-05T09:59:00.442079+0000
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: min_mon_release 18 (reef)
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: election_strategy: 1
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546418
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: 2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: 3: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546420
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: osdmap e89: 6 total, 6 up, 6 in
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: mgrmap e35: np0005546420.aoeylc(active, since 24s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: Health check cleared: MON_DOWN (was: 1/4 mons down, quorum np0005546418,np0005546421,np0005546419)
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: Cluster is now healthy
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: overall HEALTH_OK
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/1399919335' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2791024277' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 09:59:06 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2791024277' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 09:59:06 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 05 09:59:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:06.884 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:06.885 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 09:59:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:06.885 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 09:59:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:06.898 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 09:59:07 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:07 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(peon) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:07 np0005546420.localdomain ceph-mon[298353]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:07 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T09:59:07.710+0000 7f38eb2af640 -1 mgr.server handle_report got status from non-daemon mon.np0005546420
Dec 05 09:59:07 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_report got status from non-daemon mon.np0005546420
Dec 05 09:59:07 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.44434 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546418", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:59:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:07.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:07.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:08 np0005546420.localdomain ceph-mon[298353]: from='client.44434 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546418", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:59:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:08.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(peon) e12 handle_auth_request failed to assign global_id
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: Stopping User Manager for UID 1003...
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Activating special unit Exit the Session...
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Stopped target Main User Target.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Stopped target Basic System.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Stopped target Paths.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Stopped target Sockets.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Stopped target Timers.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Closed D-Bus User Message Bus Socket.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Stopped Create User's Volatile Files and Directories.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Removed slice User Application Slice.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Reached target Shutdown.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Finished Exit the Session.
Dec 05 09:59:09 np0005546420.localdomain systemd[296017]: Reached target Exit the Session.
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: Stopped User Manager for UID 1003.
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: user-1003.slice: Consumed 2.509s CPU time.
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.34441 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005546418"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Remove daemons mon.np0005546418
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005546418
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005546418: new quorum should be ['np0005546421', 'np0005546419', 'np0005546420'] (from ['np0005546421', 'np0005546419', 'np0005546420'])
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005546418: new quorum should be ['np0005546421', 'np0005546419', 'np0005546420'] (from ['np0005546421', 'np0005546419', 'np0005546420'])
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005546418 from monmap...
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Removing monitor np0005546418 from monmap...
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005546418 from np0005546418.localdomain -- ports []
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005546418 from np0005546418.localdomain -- ports []
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: client.44390 ms_handle_reset on v2:172.18.0.103:3300/0
Dec 05 09:59:09 np0005546420.localdomain podman[300659]: 2025-12-05 09:59:09.530265379 +0000 UTC m=+0.105620219 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@3(peon) e13  my rank is now 2 (was 3)
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: client.44390 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546419"} v 0)
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: paxos.2).electionLogic(56) init, last seen epoch 56
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:09 np0005546420.localdomain ceph-mgr[286940]: client.34396 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 05 09:59:09 np0005546420.localdomain podman[300659]: 2025-12-05 09:59:09.564799491 +0000 UTC m=+0.140154361 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 09:59:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:09 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:59:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:09.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:09.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:09.892 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:59:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:09.893 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:59:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:09.893 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:59:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:09.894 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 09:59:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:09.894 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:59:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:11 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:59:11 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:11 np0005546420.localdomain podman[300689]: 2025-12-05 09:59:11.499358591 +0000 UTC m=+0.080571169 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:59:11 np0005546420.localdomain podman[300689]: 2025-12-05 09:59:11.509057469 +0000 UTC m=+0.090270087 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:59:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:11 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 09:59:11 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:13 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:14 np0005546420.localdomain podman[300713]: 2025-12-05 09:59:14.515102568 +0000 UTC m=+0.094154906 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Dec 05 09:59:14 np0005546420.localdomain podman[300713]: 2025-12-05 09:59:14.532431441 +0000 UTC m=+0.111483779 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:14 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:14 np0005546420.localdomain sudo[300732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:59:14 np0005546420.localdomain sudo[300732]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:14 np0005546420.localdomain sudo[300732]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_auth_request failed to assign global_id
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: from='client.34441 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005546418"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: Remove daemons mon.np0005546418
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: Safe to remove mon.np0005546418: new quorum should be ['np0005546421', 'np0005546419', 'np0005546420'] (from ['np0005546421', 'np0005546419', 'np0005546420'])
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: Removing monitor np0005546418 from monmap...
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: Removing daemon mon.np0005546418 from np0005546418.localdomain -- ports []
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420 calling monitor election
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546421 calling monitor election
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546421 is new leader, mons np0005546421,np0005546420 in quorum (ranks 0,2)
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: monmap epoch 13
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: last_changed 2025-12-05T09:59:09.512410+0000
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: min_mon_release 18 (reef)
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: election_strategy: 1
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546420
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: osdmap e89: 6 total, 6 up, 6 in
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mgrmap e35: np0005546420.aoeylc(active, since 33s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: Health check failed: 1/3 mons down, quorum np0005546421,np0005546420 (MON_DOWN)
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005546421,np0005546420
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005546421,np0005546420
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]:     mon.np0005546419 (rank 1) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
Dec 05 09:59:14 np0005546420.localdomain sudo[300750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:59:14 np0005546420.localdomain sudo[300750]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:14 np0005546420.localdomain sudo[300750]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:14 np0005546420.localdomain sudo[300768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:59:14 np0005546420.localdomain sudo[300768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:14 np0005546420.localdomain sudo[300768]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.34446 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546418.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Removed label mon from host np0005546418.localdomain
Dec 05 09:59:14 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Removed label mon from host np0005546418.localdomain
Dec 05 09:59:14 np0005546420.localdomain sudo[300786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:14 np0005546420.localdomain sudo[300786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:14 np0005546420.localdomain sudo[300786]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon).osd e89 _set_new_cache_sizes cache_size:1019728276 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:59:14 np0005546420.localdomain sudo[300804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:59:14 np0005546420.localdomain sudo[300804]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:14 np0005546420.localdomain sudo[300804]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain sudo[300838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:59:15 np0005546420.localdomain sudo[300838]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:15 np0005546420.localdomain sudo[300838]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain sudo[300856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:59:15 np0005546420.localdomain sudo[300856]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:15 np0005546420.localdomain sudo[300856]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain sudo[300874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain sudo[300874]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:15 np0005546420.localdomain sudo[300874]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:15 np0005546420.localdomain sudo[300892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:59:15 np0005546420.localdomain sudo[300892]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:15 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain sudo[300892]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain sudo[300910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:59:15 np0005546420.localdomain sudo[300910]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:15 np0005546420.localdomain sudo[300910]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain sudo[300928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:59:15 np0005546420.localdomain sudo[300928]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:15 np0005546420.localdomain sudo[300928]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain sudo[300946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:15 np0005546420.localdomain sudo[300946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:15 np0005546420.localdomain sudo[300946]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain sudo[300964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:59:15 np0005546420.localdomain sudo[300964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:15 np0005546420.localdomain sudo[300964]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain sudo[300998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:59:15 np0005546420.localdomain sudo[300998]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:15 np0005546420.localdomain sudo[300998]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: from='client.34446 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546418.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: Removed label mon from host np0005546418.localdomain
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/641198174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:59:15 np0005546420.localdomain sudo[301016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:59:15 np0005546420.localdomain sudo[301016]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:15 np0005546420.localdomain sudo[301016]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:59:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:59:15 np0005546420.localdomain sudo[301034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:16 np0005546420.localdomain sudo[301034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:16 np0005546420.localdomain sudo[301034]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] update: starting ev 4381e0bd-3357-4290-92e3-317d58669126 (Updating node-proxy deployment (+4 -> 4))
Dec 05 09:59:16 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] complete: finished ev 4381e0bd-3357-4290-92e3-317d58669126 (Updating node-proxy deployment (+4 -> 4))
Dec 05 09:59:16 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Completed event 4381e0bd-3357-4290-92e3-317d58669126 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:59:16 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.34458 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546418.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Removed label mgr from host np0005546418.localdomain
Dec 05 09:59:16 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005546418.localdomain
Dec 05 09:59:16 np0005546420.localdomain sudo[301061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:59:16 np0005546420.localdomain sudo[301061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:16 np0005546420.localdomain sudo[301061]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/202535236' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.437 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:59:16 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 05 09:59:16 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:16 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 05 09:59:16 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.637 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.638 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12311MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.639 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.639 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 09:59:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2420115631' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.818 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.819 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.900 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.957 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.957 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 09:59:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:16.975 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 09:59:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:17.002 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 09:59:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:17.016 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 09:59:17 np0005546420.localdomain podman[240363]: time="2025-12-05T09:59:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:59:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:59:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1348487981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546419 calling monitor election
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: from='client.34458 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546418.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: Removed label mgr from host np0005546418.localdomain
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546421 calling monitor election
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546421 is new leader, mons np0005546421,np0005546419,np0005546420 in quorum (ranks 0,1,2)
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: monmap epoch 13
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: last_changed 2025-12-05T09:59:09.512410+0000
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: min_mon_release 18 (reef)
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: election_strategy: 1
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005546421
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546420
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: osdmap e89: 6 total, 6 up, 6 in
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: mgrmap e35: np0005546420.aoeylc(active, since 35s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005546421,np0005546420)
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: Cluster is now healthy
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2749529254' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: overall HEALTH_OK
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2420115631' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:59:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:59:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18218 "" "Go-http-client/1.1"
Dec 05 09:59:17 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4102967610' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:59:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:17.505 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 09:59:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:17.512 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 09:59:17 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546418 (monmap changed)...
Dec 05 09:59:17 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546418 (monmap changed)...
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:17 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 05 09:59:17 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 05 09:59:17 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.54137 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546418.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:59:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:17.564 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 09:59:17 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Removed label _admin from host np0005546418.localdomain
Dec 05 09:59:17 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005546418.localdomain
Dec 05 09:59:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:17.567 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 09:59:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:17.567 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 05 09:59:18 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:59:18 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:18 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:59:18 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/4102967610' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='client.54137 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546418.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: Removed label _admin from host np0005546418.localdomain
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:18.564 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:18.588 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:18.589 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:59:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:59:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:59:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:59:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:59:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:59:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:59:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:59:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:59:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:19 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 05 09:59:19 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:19 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:59:19 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:59:19 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon).osd e89 _set_new_cache_sizes cache_size:1020049020 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:20 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Dec 05 09:59:20 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:20 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:59:20 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.0 (monmap changed)...
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:20 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Writing back 50 completed events
Dec 05 09:59:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:21 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:59:21 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:21 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:59:21 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:59:21 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:59:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:59:21 np0005546420.localdomain podman[301103]: 2025-12-05 09:59:21.512035221 +0000 UTC m=+0.085053786 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.)
Dec 05 09:59:21 np0005546420.localdomain podman[301103]: 2025-12-05 09:59:21.554447915 +0000 UTC m=+0.127466490 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.3 (monmap changed)...
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:21 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:59:21 np0005546420.localdomain podman[301104]: 2025-12-05 09:59:21.562461732 +0000 UTC m=+0.135313862 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 09:59:21 np0005546420.localdomain podman[301104]: 2025-12-05 09:59:21.642231004 +0000 UTC m=+0.215083114 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 09:59:21 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:22 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:59:22 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:22 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:59:22 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
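Each mgr-forwarded command above shows up more than once: the mon logs its own handle_command debug line plus a log_channel(audit) entry at dispatch time, and the bare from=... cmd=... : dispatch lines appear to be the same audit records re-emitted as the cluster log is committed (the from='mgr.26606 ' lines with no payload are truncated as captured and are left that way). A sketch for counting which command prefixes were dispatched, assuming the JSON cmd={...} payload shape shown above; duplicated records are counted as many times as they appear.

# Sketch: count mon_command prefixes dispatched by the mgr, from a journal dump.
# The cmd={"prefix": ...} shape is taken from the audit lines above; the input
# file name is a placeholder.
import json
import re
from collections import Counter

CMD = re.compile(r"cmd=(\{.*?\}) : dispatch")

prefixes = Counter()
with open("journal.txt", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        m = CMD.search(line)
        if m:
            try:
                prefixes[json.loads(m.group(1))["prefix"]] += 1
            except (ValueError, KeyError):
                pass  # skip non-JSON payloads, e.g. the config-key set variants

print(prefixes.most_common())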
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:23 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546419 (monmap changed)...
Dec 05 09:59:23 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546419 (monmap changed)...
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:23 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:59:23 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:59:23 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:24 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:59:24 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:24 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:59:24 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:59:24 np0005546420.localdomain sudo[301146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:24 np0005546420.localdomain sudo[301146]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:24 np0005546420.localdomain sudo[301146]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:24 np0005546420.localdomain sudo[301164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:24 np0005546420.localdomain sudo[301164]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mon.np0005546419 (monmap changed)...
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:24 np0005546420.localdomain podman[301199]: 2025-12-05 09:59:24.652282267 +0000 UTC m=+0.085570223 container create 9a7efb3841f5665b0612e33bee62f4475b4c4acf800004342a386f56ee5baaa4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mcclintock, version=7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:59:24 np0005546420.localdomain systemd[1]: Started libpod-conmon-9a7efb3841f5665b0612e33bee62f4475b4c4acf800004342a386f56ee5baaa4.scope.
Dec 05 09:59:24 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:24 np0005546420.localdomain podman[301199]: 2025-12-05 09:59:24.620359475 +0000 UTC m=+0.053647451 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:24 np0005546420.localdomain podman[301199]: 2025-12-05 09:59:24.722153176 +0000 UTC m=+0.155441122 container init 9a7efb3841f5665b0612e33bee62f4475b4c4acf800004342a386f56ee5baaa4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mcclintock, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:59:24 np0005546420.localdomain podman[301199]: 2025-12-05 09:59:24.731330508 +0000 UTC m=+0.164618454 container start 9a7efb3841f5665b0612e33bee62f4475b4c4acf800004342a386f56ee5baaa4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mcclintock, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z)
Dec 05 09:59:24 np0005546420.localdomain podman[301199]: 2025-12-05 09:59:24.731571305 +0000 UTC m=+0.164859251 container attach 9a7efb3841f5665b0612e33bee62f4475b4c4acf800004342a386f56ee5baaa4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mcclintock, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.41.4, RELEASE=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=1763362218, CEPH_POINT_RELEASE=, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 09:59:24 np0005546420.localdomain gallant_mcclintock[301214]: 167 167
Dec 05 09:59:24 np0005546420.localdomain systemd[1]: libpod-9a7efb3841f5665b0612e33bee62f4475b4c4acf800004342a386f56ee5baaa4.scope: Deactivated successfully.
Dec 05 09:59:24 np0005546420.localdomain podman[301199]: 2025-12-05 09:59:24.736220028 +0000 UTC m=+0.169508054 container died 9a7efb3841f5665b0612e33bee62f4475b4c4acf800004342a386f56ee5baaa4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mcclintock, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1763362218, vcs-type=git, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:59:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-6f492afd885d21fb74142a39e52d1ecfd87322e02005b7c1bece97f0b04cb94e-merged.mount: Deactivated successfully.
Dec 05 09:59:24 np0005546420.localdomain podman[301219]: 2025-12-05 09:59:24.838129072 +0000 UTC m=+0.092945259 container remove 9a7efb3841f5665b0612e33bee62f4475b4c4acf800004342a386f56ee5baaa4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_mcclintock, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True)
Dec 05 09:59:24 np0005546420.localdomain systemd[1]: libpod-conmon-9a7efb3841f5665b0612e33bee62f4475b4c4acf800004342a386f56ee5baaa4.scope: Deactivated successfully.
Dec 05 09:59:24 np0005546420.localdomain sudo[301164]: pam_unix(sudo:session): session closed for user root
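The sudo pair above is the cephadm serve loop at work: the active mgr pushes a copy of the cephadm script to /var/lib/ceph/<fsid>/ under a digest-suffixed name and runs it as root through the ceph-admin account (_orch deploy). Assuming the suffix is the file's SHA-256 hex digest, which the naming suggests but should be verified for your release, a copy can be checked against its own filename:

# Sketch: verify a digest-named cephadm copy against its filename suffix,
# assuming (as the name suggests) the suffix is the file's SHA-256 hex digest.
import hashlib
import pathlib
import sys

def verify(path: str) -> bool:
    p = pathlib.Path(path)
    expected = p.name.split(".", 1)[1]  # text after the leading "cephadm."
    actual = hashlib.sha256(p.read_bytes()).hexdigest()
    return actual == expected

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else (
        "/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/"
        "cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")
    print("digest matches" if verify(path) else "digest MISMATCH")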
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:24 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 05 09:59:24 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:24 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:59:24 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:59:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054631 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:59:25 np0005546420.localdomain sudo[301236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:25 np0005546420.localdomain sudo[301236]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:25 np0005546420.localdomain sudo[301236]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:25 np0005546420.localdomain sudo[301254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:25 np0005546420.localdomain sudo[301254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:25 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:25 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:59:25 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:59:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:25 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.1 (monmap changed)...
Dec 05 09:59:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 09:59:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:25 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:59:25 np0005546420.localdomain ceph-mon[298353]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
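The recurring pgmap lines are the cluster-log PG summary: map version, PG count, state breakdown, and data/used/available capacity. A sketch that pulls those fields out of one such line; the regex mirrors the exact format above, and the field order is assumed stable.

# Sketch: parse a pgmap summary line like the one above into named fields.
import re

PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

line = ("pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, "
        "540 MiB used, 41 GiB / 42 GiB avail")
print(PGMAP.search(line).groupdict())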
Dec 05 09:59:25 np0005546420.localdomain podman[301288]: 2025-12-05 09:59:25.634207902 +0000 UTC m=+0.078454664 container create 1f5a8a2ff8fd1349d563653ea97c0aa9875442257ac444a0db6a2aa87ddf5d10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_faraday, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph)
Dec 05 09:59:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:59:25 np0005546420.localdomain systemd[1]: Started libpod-conmon-1f5a8a2ff8fd1349d563653ea97c0aa9875442257ac444a0db6a2aa87ddf5d10.scope.
Dec 05 09:59:25 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:25 np0005546420.localdomain podman[301288]: 2025-12-05 09:59:25.606110538 +0000 UTC m=+0.050357370 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:25 np0005546420.localdomain podman[301288]: 2025-12-05 09:59:25.718625288 +0000 UTC m=+0.162872080 container init 1f5a8a2ff8fd1349d563653ea97c0aa9875442257ac444a0db6a2aa87ddf5d10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_faraday, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, release=1763362218, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.41.4, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main)
Dec 05 09:59:25 np0005546420.localdomain nostalgic_faraday[301309]: 167 167
Dec 05 09:59:25 np0005546420.localdomain systemd[1]: libpod-1f5a8a2ff8fd1349d563653ea97c0aa9875442257ac444a0db6a2aa87ddf5d10.scope: Deactivated successfully.
Dec 05 09:59:25 np0005546420.localdomain podman[301302]: 2025-12-05 09:59:25.772097023 +0000 UTC m=+0.105570558 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 09:59:25 np0005546420.localdomain podman[301288]: 2025-12-05 09:59:25.788049163 +0000 UTC m=+0.232295925 container start 1f5a8a2ff8fd1349d563653ea97c0aa9875442257ac444a0db6a2aa87ddf5d10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_faraday, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, io.openshift.expose-services=)
Dec 05 09:59:25 np0005546420.localdomain podman[301288]: 2025-12-05 09:59:25.788326631 +0000 UTC m=+0.232573393 container attach 1f5a8a2ff8fd1349d563653ea97c0aa9875442257ac444a0db6a2aa87ddf5d10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_faraday, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main)
Dec 05 09:59:25 np0005546420.localdomain podman[301288]: 2025-12-05 09:59:25.790645102 +0000 UTC m=+0.234891864 container died 1f5a8a2ff8fd1349d563653ea97c0aa9875442257ac444a0db6a2aa87ddf5d10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_faraday, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, version=7, vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, CEPH_POINT_RELEASE=)
Dec 05 09:59:25 np0005546420.localdomain podman[301320]: 2025-12-05 09:59:25.841947061 +0000 UTC m=+0.085412898 container remove 1f5a8a2ff8fd1349d563653ea97c0aa9875442257ac444a0db6a2aa87ddf5d10 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_faraday, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=1763362218, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:59:25 np0005546420.localdomain podman[301302]: 2025-12-05 09:59:25.845299964 +0000 UTC m=+0.178773499 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 09:59:25 np0005546420.localdomain systemd[1]: libpod-conmon-1f5a8a2ff8fd1349d563653ea97c0aa9875442257ac444a0db6a2aa87ddf5d10.scope: Deactivated successfully.
Dec 05 09:59:25 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 09:59:26 np0005546420.localdomain sudo[301254]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:26 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Dec 05 09:59:26 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Dec 05 09:59:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Dec 05 09:59:26 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:59:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:26 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:26 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:59:26 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:59:26 np0005546420.localdomain sudo[301357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:26 np0005546420.localdomain sudo[301357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:26 np0005546420.localdomain sudo[301357]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:26 np0005546420.localdomain sudo[301375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:26 np0005546420.localdomain sudo[301375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:26 np0005546420.localdomain systemd[1]: tmp-crun.l3Xj6Q.mount: Deactivated successfully.
Dec 05 09:59:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b35d4997153cfb859521120d8dd05bfbace2798c09293db2361d5f25b1470e93-merged.mount: Deactivated successfully.
Dec 05 09:59:26 np0005546420.localdomain podman[301409]: 2025-12-05 09:59:26.744308689 +0000 UTC m=+0.065298530 container create dc2379a62b28fa87f8b83531de90f22eee91da70acb1d78bb27d1a60d2399f74 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cannon, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 09:59:26 np0005546420.localdomain systemd[1]: Started libpod-conmon-dc2379a62b28fa87f8b83531de90f22eee91da70acb1d78bb27d1a60d2399f74.scope.
Dec 05 09:59:26 np0005546420.localdomain podman[301409]: 2025-12-05 09:59:26.7140982 +0000 UTC m=+0.035088071 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:26 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:26 np0005546420.localdomain podman[301409]: 2025-12-05 09:59:26.830621772 +0000 UTC m=+0.151611613 container init dc2379a62b28fa87f8b83531de90f22eee91da70acb1d78bb27d1a60d2399f74 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cannon, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218)
Dec 05 09:59:26 np0005546420.localdomain podman[301409]: 2025-12-05 09:59:26.840575359 +0000 UTC m=+0.161565200 container start dc2379a62b28fa87f8b83531de90f22eee91da70acb1d78bb27d1a60d2399f74 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cannon, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, architecture=x86_64, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, release=1763362218, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=)
Dec 05 09:59:26 np0005546420.localdomain podman[301409]: 2025-12-05 09:59:26.840795946 +0000 UTC m=+0.161785787 container attach dc2379a62b28fa87f8b83531de90f22eee91da70acb1d78bb27d1a60d2399f74 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cannon, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, release=1763362218, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_BRANCH=main)
Dec 05 09:59:26 np0005546420.localdomain infallible_cannon[301424]: 167 167
Dec 05 09:59:26 np0005546420.localdomain systemd[1]: libpod-dc2379a62b28fa87f8b83531de90f22eee91da70acb1d78bb27d1a60d2399f74.scope: Deactivated successfully.
Dec 05 09:59:26 np0005546420.localdomain podman[301409]: 2025-12-05 09:59:26.844886282 +0000 UTC m=+0.165876133 container died dc2379a62b28fa87f8b83531de90f22eee91da70acb1d78bb27d1a60d2399f74 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cannon, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7)
Dec 05 09:59:26 np0005546420.localdomain podman[301429]: 2025-12-05 09:59:26.946205298 +0000 UTC m=+0.087924126 container remove dc2379a62b28fa87f8b83531de90f22eee91da70acb1d78bb27d1a60d2399f74 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cannon, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, architecture=x86_64, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git)
Dec 05 09:59:26 np0005546420.localdomain systemd[1]: libpod-conmon-dc2379a62b28fa87f8b83531de90f22eee91da70acb1d78bb27d1a60d2399f74.scope: Deactivated successfully.
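Each reconfigure also spawns a throwaway rhceph container (create, init, start, attach, died, remove within roughly 100 ms) whose only output is "167 167": the uid and gid of the ceph account baked into the Red Hat Ceph image, which cephadm uses when setting ownership on host-side daemon files. The exact probe command is not visible in the log; a hypothetical stand-in that yields the same answer:

# Sketch: reproduce the uid/gid probe by hand. The command cephadm actually
# runs inside the throwaway container is not shown in the log; stat'ing
# /var/lib/ceph in the image is an illustrative stand-in.
import subprocess

out = subprocess.run(
    ["podman", "run", "--rm",
     "registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
     "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True).stdout.strip()
uid, gid = out.split()
print(f"ceph runs as uid={uid} gid={gid} inside the image")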
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.4 (monmap changed)...
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:59:27 np0005546420.localdomain sudo[301375]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:27 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:59:27 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:27 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:27 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:59:27 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:59:27 np0005546420.localdomain sudo[301453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:27 np0005546420.localdomain sudo[301453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:27 np0005546420.localdomain sudo[301453]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:27 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:27 np0005546420.localdomain sudo[301471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:27 np0005546420.localdomain sudo[301471]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-966e7068011a9c725d66e42f6232f00f01432320aaa307d00bf593557a4326ee-merged.mount: Deactivated successfully.
Dec 05 09:59:27 np0005546420.localdomain podman[301506]: 2025-12-05 09:59:27.855298973 +0000 UTC m=+0.076215275 container create ee3eedf95dfb5f84da1a7a83b1eae463cb22c33429a26dfbc9ea4626ddc1dcb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_chebyshev, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, RELEASE=main, release=1763362218, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=)
Dec 05 09:59:27 np0005546420.localdomain systemd[1]: Started libpod-conmon-ee3eedf95dfb5f84da1a7a83b1eae463cb22c33429a26dfbc9ea4626ddc1dcb7.scope.
Dec 05 09:59:27 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:27 np0005546420.localdomain podman[301506]: 2025-12-05 09:59:27.919885999 +0000 UTC m=+0.140802341 container init ee3eedf95dfb5f84da1a7a83b1eae463cb22c33429a26dfbc9ea4626ddc1dcb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_chebyshev, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.component=rhceph-container, name=rhceph)
Dec 05 09:59:27 np0005546420.localdomain podman[301506]: 2025-12-05 09:59:27.824065942 +0000 UTC m=+0.044982314 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:27 np0005546420.localdomain podman[301506]: 2025-12-05 09:59:27.928914987 +0000 UTC m=+0.149831329 container start ee3eedf95dfb5f84da1a7a83b1eae463cb22c33429a26dfbc9ea4626ddc1dcb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_chebyshev, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=1763362218, distribution-scope=public, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=)
Dec 05 09:59:27 np0005546420.localdomain podman[301506]: 2025-12-05 09:59:27.929224877 +0000 UTC m=+0.150141249 container attach ee3eedf95dfb5f84da1a7a83b1eae463cb22c33429a26dfbc9ea4626ddc1dcb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_chebyshev, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, ceph=True, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=1763362218, io.openshift.expose-services=, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git)
Dec 05 09:59:27 np0005546420.localdomain amazing_chebyshev[301521]: 167 167
Dec 05 09:59:27 np0005546420.localdomain systemd[1]: libpod-ee3eedf95dfb5f84da1a7a83b1eae463cb22c33429a26dfbc9ea4626ddc1dcb7.scope: Deactivated successfully.
Dec 05 09:59:27 np0005546420.localdomain podman[301506]: 2025-12-05 09:59:27.932733714 +0000 UTC m=+0.153650096 container died ee3eedf95dfb5f84da1a7a83b1eae463cb22c33429a26dfbc9ea4626ddc1dcb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_chebyshev, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, ceph=True, version=7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.expose-services=, vcs-type=git, RELEASE=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7)
Dec 05 09:59:28 np0005546420.localdomain podman[301526]: 2025-12-05 09:59:28.001700594 +0000 UTC m=+0.063189383 container remove ee3eedf95dfb5f84da1a7a83b1eae463cb22c33429a26dfbc9ea4626ddc1dcb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_chebyshev, name=rhceph, version=7, vendor=Red Hat, Inc., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:59:28 np0005546420.localdomain systemd[1]: libpod-conmon-ee3eedf95dfb5f84da1a7a83b1eae463cb22c33429a26dfbc9ea4626ddc1dcb7.scope: Deactivated successfully.
Dec 05 09:59:28 np0005546420.localdomain sudo[301471]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:28 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:59:28 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:28 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:59:28 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:28 np0005546420.localdomain sudo[301543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:28 np0005546420.localdomain sudo[301543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:28 np0005546420.localdomain sudo[301543]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:28 np0005546420.localdomain sudo[301561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:28 np0005546420.localdomain sudo[301561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f2766996ee5c4c9ad4a62577a0a22504fc9b3cff7da4f3b45f7a24afe7f0bb3c-merged.mount: Deactivated successfully.
Dec 05 09:59:28 np0005546420.localdomain podman[301596]: 
Dec 05 09:59:28 np0005546420.localdomain podman[301596]: 2025-12-05 09:59:28.775405467 +0000 UTC m=+0.072959104 container create 17a09fde1c8a33d0f2c0cf585112232ceb073903f367e0cdd160d258b968ae39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_fermi, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, version=7, architecture=x86_64, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git)
Dec 05 09:59:28 np0005546420.localdomain systemd[1]: Started libpod-conmon-17a09fde1c8a33d0f2c0cf585112232ceb073903f367e0cdd160d258b968ae39.scope.
Dec 05 09:59:28 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:28 np0005546420.localdomain podman[301596]: 2025-12-05 09:59:28.746882249 +0000 UTC m=+0.044435926 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:28 np0005546420.localdomain podman[301596]: 2025-12-05 09:59:28.846829893 +0000 UTC m=+0.144383520 container init 17a09fde1c8a33d0f2c0cf585112232ceb073903f367e0cdd160d258b968ae39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_fermi, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_BRANCH=main, release=1763362218, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7)
Dec 05 09:59:28 np0005546420.localdomain podman[301596]: 2025-12-05 09:59:28.855322514 +0000 UTC m=+0.152876141 container start 17a09fde1c8a33d0f2c0cf585112232ceb073903f367e0cdd160d258b968ae39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_fermi, name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., RELEASE=main, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, release=1763362218, io.openshift.expose-services=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:59:28 np0005546420.localdomain podman[301596]: 2025-12-05 09:59:28.85551785 +0000 UTC m=+0.153071477 container attach 17a09fde1c8a33d0f2c0cf585112232ceb073903f367e0cdd160d258b968ae39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_fermi, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.41.4, version=7, architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 05 09:59:28 np0005546420.localdomain elated_fermi[301612]: 167 167
Dec 05 09:59:28 np0005546420.localdomain systemd[1]: libpod-17a09fde1c8a33d0f2c0cf585112232ceb073903f367e0cdd160d258b968ae39.scope: Deactivated successfully.
Dec 05 09:59:28 np0005546420.localdomain podman[301596]: 2025-12-05 09:59:28.859080631 +0000 UTC m=+0.156634278 container died 17a09fde1c8a33d0f2c0cf585112232ceb073903f367e0cdd160d258b968ae39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_fermi, ceph=True, io.buildah.version=1.41.4, release=1763362218, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:59:28 np0005546420.localdomain podman[301617]: 2025-12-05 09:59:28.948986965 +0000 UTC m=+0.080585609 container remove 17a09fde1c8a33d0f2c0cf585112232ceb073903f367e0cdd160d258b968ae39 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_fermi, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:59:28 np0005546420.localdomain systemd[1]: libpod-conmon-17a09fde1c8a33d0f2c0cf585112232ceb073903f367e0cdd160d258b968ae39.scope: Deactivated successfully.
Dec 05 09:59:29 np0005546420.localdomain sudo[301561]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546420 (monmap changed)...
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546420 (monmap changed)...
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 05 09:59:29 np0005546420.localdomain sudo[301633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:29 np0005546420.localdomain sudo[301633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:29 np0005546420.localdomain sudo[301633]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:29 np0005546420.localdomain sudo[301651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:29 np0005546420.localdomain sudo[301651]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.44455 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005546418.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Added label _no_schedule to host np0005546418.localdomain
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005546418.localdomain
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546418.localdomain
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546418.localdomain
Dec 05 09:59:29 np0005546420.localdomain podman[301685]: 
Dec 05 09:59:29 np0005546420.localdomain podman[301685]: 2025-12-05 09:59:29.648999511 +0000 UTC m=+0.078627180 container create 7f77019f50571c93549f7ba4f9ca9fd436013967d38add87577c3a822639b5c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cartwright, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public)
Dec 05 09:59:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-519eacd8b25559d6aafe1acefbda734bd8eb25d6c6cdb5fd776a1165095898fa-merged.mount: Deactivated successfully.
Dec 05 09:59:29 np0005546420.localdomain systemd[1]: Started libpod-conmon-7f77019f50571c93549f7ba4f9ca9fd436013967d38add87577c3a822639b5c0.scope.
Dec 05 09:59:29 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:29 np0005546420.localdomain podman[301685]: 2025-12-05 09:59:29.709839661 +0000 UTC m=+0.139467330 container init 7f77019f50571c93549f7ba4f9ca9fd436013967d38add87577c3a822639b5c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cartwright, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhceph ceph)
Dec 05 09:59:29 np0005546420.localdomain podman[301685]: 2025-12-05 09:59:29.617177402 +0000 UTC m=+0.046805141 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:29 np0005546420.localdomain podman[301685]: 2025-12-05 09:59:29.720454098 +0000 UTC m=+0.150081787 container start 7f77019f50571c93549f7ba4f9ca9fd436013967d38add87577c3a822639b5c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cartwright, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True)
Dec 05 09:59:29 np0005546420.localdomain podman[301685]: 2025-12-05 09:59:29.720805099 +0000 UTC m=+0.150432768 container attach 7f77019f50571c93549f7ba4f9ca9fd436013967d38add87577c3a822639b5c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cartwright, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, RELEASE=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z)
Dec 05 09:59:29 np0005546420.localdomain infallible_cartwright[301700]: 167 167
Dec 05 09:59:29 np0005546420.localdomain systemd[1]: libpod-7f77019f50571c93549f7ba4f9ca9fd436013967d38add87577c3a822639b5c0.scope: Deactivated successfully.
Dec 05 09:59:29 np0005546420.localdomain podman[301685]: 2025-12-05 09:59:29.723279715 +0000 UTC m=+0.152907444 container died 7f77019f50571c93549f7ba4f9ca9fd436013967d38add87577c3a822639b5c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cartwright, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, version=7, architecture=x86_64, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:59:29 np0005546420.localdomain podman[301705]: 2025-12-05 09:59:29.821153675 +0000 UTC m=+0.087839122 container remove 7f77019f50571c93549f7ba4f9ca9fd436013967d38add87577c3a822639b5c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_cartwright, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, release=1763362218, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 09:59:29 np0005546420.localdomain systemd[1]: libpod-conmon-7f77019f50571c93549f7ba4f9ca9fd436013967d38add87577c3a822639b5c0.scope: Deactivated successfully.
Dec 05 09:59:29 np0005546420.localdomain sudo[301651]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:59:29 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:59:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mon.np0005546420 (monmap changed)...
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: from='client.44455 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005546418.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: Added label _no_schedule to host np0005546418.localdomain
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546418.localdomain
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a94cb6f69bcb1d697e210067ae230466219af1ab414620e98b290354eb6d2e91-merged.mount: Deactivated successfully.
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.828442) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770828503, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1682, "num_deletes": 279, "total_data_size": 3529168, "memory_usage": 3572248, "flush_reason": "Manual Compaction"}
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770850747, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 2437478, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11807, "largest_seqno": 13485, "table_properties": {"data_size": 2429570, "index_size": 4413, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 21340, "raw_average_key_size": 22, "raw_value_size": 2411651, "raw_average_value_size": 2493, "num_data_blocks": 187, "num_entries": 967, "num_filter_entries": 967, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928724, "oldest_key_time": 1764928724, "file_creation_time": 1764928770, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 22359 microseconds, and 9120 cpu microseconds.
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.850803) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 2437478 bytes OK
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.850829) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.852329) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.852352) EVENT_LOG_v1 {"time_micros": 1764928770852346, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.852371) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 3519563, prev total WAL file size 3519563, number of live WAL files 2.
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.853434) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323733' seq:72057594037927935, type:22 .. '6B760031353331' seq:0, type:0; will stop at (end)
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(2380KB)], [18(16MB)]
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770853500, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 19690972, "oldest_snapshot_seqno": -1}
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:30 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.44461 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005546418.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 11021 keys, 18796373 bytes, temperature: kUnknown
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770952365, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 18796373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18732303, "index_size": 35425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 296643, "raw_average_key_size": 26, "raw_value_size": 18542935, "raw_average_value_size": 1682, "num_data_blocks": 1341, "num_entries": 11021, "num_filter_entries": 11021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764928770, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.952695) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 18796373 bytes
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.954563) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.0 rd, 189.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 16.5 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(15.8) write-amplify(7.7) OK, records in: 11535, records dropped: 514 output_compression: NoCompression
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.954595) EVENT_LOG_v1 {"time_micros": 1764928770954581, "job": 8, "event": "compaction_finished", "compaction_time_micros": 98965, "compaction_time_cpu_micros": 51474, "output_level": 6, "num_output_files": 1, "total_output_size": 18796373, "num_input_records": 11535, "num_output_records": 11021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770955074, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770957317, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.853345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.957349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.957354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.957357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.957359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:30.957362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:59:31 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 05 09:59:31 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 05 09:59:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 05 09:59:31 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:59:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:31 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:31 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:59:31 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:59:31 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:31 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:59:31 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:59:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:59:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:59:32 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 05 09:59:32 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:32 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 09:59:32 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 09:59:32 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.44467 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005546418.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain"} v 0)
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain"} : dispatch
Dec 05 09:59:32 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Removed host np0005546418.localdomain
Dec 05 09:59:32 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Removed host np0005546418.localdomain
Dec 05 09:59:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: from='client.44461 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005546418.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.2 (monmap changed)...
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain"} : dispatch
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain"} : dispatch
Dec 05 09:59:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain"}]': finished
Dec 05 09:59:32 np0005546420.localdomain podman[301722]: 2025-12-05 09:59:32.528670223 +0000 UTC m=+0.097367555 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 05 09:59:32 np0005546420.localdomain podman[301722]: 2025-12-05 09:59:32.567006462 +0000 UTC m=+0.135703804 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:59:32 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
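
[annotation] The burst above is the tail of a `ceph orch host rm np0005546418.localdomain --force` call (see the client.admin audit line): cephadm deletes the host's `mgr/cephadm/host.*` config-key, rewrites `mgr/cephadm/inventory`, and logs "Removed host". A minimal sketch of the same list-then-remove flow driven from Python — the hostname comes from the log; the helper and flow are an illustration, not cephadm's internal code:

    import json
    import subprocess

    def ceph_json(*args):
        # Run a ceph CLI subcommand and parse its JSON output.
        out = subprocess.check_output(("ceph",) + args + ("--format", "json"))
        return json.loads(out)

    host = "np0005546418.localdomain"  # the host removed in the log above
    hosts = ceph_json("orch", "host", "ls")  # audited above as "orch host ls"
    if any(h.get("hostname") == host for h in hosts):
        # Force removal even while daemons on other hosts are being reconfigured.
        subprocess.check_call(["ceph", "orch", "host", "rm", host, "--force"])
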
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:59:33 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 05 09:59:33 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:33 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 05 09:59:33 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 05 09:59:33 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.5 (monmap changed)...
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: from='client.44467 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005546418.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: Removed host np0005546418.localdomain
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:59:34 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 05 09:59:34 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:34 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 09:59:34 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:59:35 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546421 (monmap changed)...
Dec 05 09:59:35 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546421 (monmap changed)...
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:35 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:59:35 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:59:35 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:59:36 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] update: starting ev eb70882d-f5da-438e-8bdf-ff2e8621b0a6 (Updating node-proxy deployment (+3 -> 3))
Dec 05 09:59:36 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] complete: finished ev eb70882d-f5da-438e-8bdf-ff2e8621b0a6 (Updating node-proxy deployment (+3 -> 3))
Dec 05 09:59:36 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Completed event eb70882d-f5da-438e-8bdf-ff2e8621b0a6 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:59:36 np0005546420.localdomain sudo[301741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:59:36 np0005546420.localdomain sudo[301741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:36 np0005546420.localdomain sudo[301741]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mon.np0005546421 (monmap changed)...
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:59:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:37 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:38 np0005546420.localdomain ceph-mon[298353]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:39 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 09:59:39 np0005546420.localdomain podman[301759]: 2025-12-05 09:59:39.829206341 +0000 UTC m=+0.070502009 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 09:59:39 np0005546420.localdomain podman[301759]: 2025-12-05 09:59:39.864468586 +0000 UTC m=+0.105764244 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 09:59:39 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 09:59:39 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:59:40 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.44473 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:40 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Saving service mon spec with placement label:mon
Dec 05 09:59:40 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:59:40 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] update: starting ev bc829827-5b73-4190-b852-c68714e049e4 (Updating node-proxy deployment (+3 -> 3))
Dec 05 09:59:40 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] complete: finished ev bc829827-5b73-4190-b852-c68714e049e4 (Updating node-proxy deployment (+3 -> 3))
Dec 05 09:59:40 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Completed event bc829827-5b73-4190-b852-c68714e049e4 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:59:40 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Writing back 50 completed events
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:59:40 np0005546420.localdomain sudo[301778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:59:40 np0005546420.localdomain sudo[301778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:40 np0005546420.localdomain sudo[301778]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:59:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [balancer INFO root] Optimize plan auto_2025-12-05_09:59:41
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [balancer INFO root] do_upmap
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [balancer INFO root] pools ['vms', 'volumes', 'manila_data', '.mgr', 'images', 'backups', 'manila_metadata']
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [balancer INFO root] prepared 0/10 changes
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16)
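
[annotation] The pg_autoscaler numbers above are self-consistent: each "pg target" is the pool's space fraction times its bias times a scale of about 200, which fits this cluster's 6 OSDs at the default mon_target_pg_per_osd of 100 with 3-way replication (6 × 100 / 3 = 200). That scale is inferred from the logged numbers, not printed by the module. Checking two of the lines:

    # Inferred: pg_target ≈ space_fraction * bias * (osds * mon_target_pg_per_osd / size)
    osds, pg_per_osd, size = 6, 100, 3          # assumed cluster values
    scale = osds * pg_per_osd / size            # 200.0
    print(3.080724804578448e-05 * 1.0 * scale)  # ≈ 0.0061614496... ('.mgr' line)
    print(0.0033250017448352874 * 1.0 * scale)  # ≈ 0.6650003489... ('vms' line)
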
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:59:41 np0005546420.localdomain ceph-mon[298353]: from='client.44473 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:41 np0005546420.localdomain ceph-mon[298353]: Saving service mon spec with placement label:mon
Dec 05 09:59:41 np0005546420.localdomain ceph-mon[298353]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 09:59:41 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.44476 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546421", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:59:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 09:59:42 np0005546420.localdomain podman[301796]: 2025-12-05 09:59:42.510815674 +0000 UTC m=+0.089338948 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 09:59:42 np0005546420.localdomain podman[301796]: 2025-12-05 09:59:42.525510096 +0000 UTC m=+0.104033330 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 09:59:42 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
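
[annotation] The recurring systemd/podman triplets (Started ... healthcheck run → health_status=healthy → exec_died → Deactivated successfully) are transient units executing `podman healthcheck run <container-id>`; exec_died followed by a clean deactivation means the probe exited 0. The same check can be run by hand; a sketch with the container id abbreviated from the log:

    import subprocess

    cid = "cea1ff364cc5"  # node_exporter's container id, shortened from the log
    # Exit status 0 means the container's configured healthcheck command passed.
    subprocess.check_call(["podman", "healthcheck", "run", cid])
    status = subprocess.check_output(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", cid], text=True
    ).strip()
    print(status)  # expect "healthy", matching the health_status event above
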
Dec 05 09:59:42 np0005546420.localdomain ceph-mon[298353]: from='client.44476 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546421", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.44482 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005546421"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Remove daemons mon.np0005546421
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005546421
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "quorum_status"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005546421: new quorum should be ['np0005546419', 'np0005546420'] (from ['np0005546419', 'np0005546420'])
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005546421: new quorum should be ['np0005546419', 'np0005546420'] (from ['np0005546419', 'np0005546420'])
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005546421 from monmap...
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Removing monitor np0005546421 from monmap...
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e13 handle_command mon_command({"prefix": "mon rm", "name": "np0005546421"} v 0)
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon rm", "name": "np0005546421"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005546421 from np0005546421.localdomain -- ports []
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005546421 from np0005546421.localdomain -- ports []
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@2(peon) e14  my rank is now 1 (was 2)
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: client.44390 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: client.34396 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546419"} v 0)
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: paxos.1).electionLogic(62) init, last seen epoch 62
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: Remove daemons mon.np0005546421
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "quorum_status"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: Safe to remove mon.np0005546421: new quorum should be ['np0005546419', 'np0005546420'] (from ['np0005546419', 'np0005546420'])
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: Removing monitor np0005546421 from monmap...
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon rm", "name": "np0005546421"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: Removing daemon mon.np0005546421 from np0005546421.localdomain -- ports []
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420 calling monitor election
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546419 calling monitor election
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546419 is new leader, mons np0005546419,np0005546420 in quorum (ranks 0,1)
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: monmap epoch 14
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: last_changed 2025-12-05T09:59:43.133976+0000
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: created 2025-12-05T07:49:07.934655+0000
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: min_mon_release 18 (reef)
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: election_strategy: 1
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546420
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: osdmap e89: 6 total, 6 up, 6 in
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: mgrmap e35: np0005546420.aoeylc(active, since 61s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 09:59:43 np0005546420.localdomain ceph-mon[298353]: overall HEALTH_OK
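
[annotation] Note the ordering in this sequence: before issuing `mon rm`, the mgr checked `quorum_status` and logged "Safe to remove mon.np0005546421" because the surviving monitors (np0005546419, np0005546420) already formed the quorum; only then did the monmap advance to epoch 14 and this peon's rank drop from 2 to 1, triggering the election logged above. A sketch of that guard using the same CLI surface — names are from the log, and the majority rule is the standard monmap one, not cephadm's exact code:

    import json
    import subprocess

    victim = "np0005546421"  # the monitor removed in the log above
    qs = json.loads(
        subprocess.check_output(["ceph", "quorum_status", "--format", "json"])
    )
    mons = [m["name"] for m in qs["monmap"]["mons"]]
    new_quorum = [m for m in qs["quorum_names"] if m != victim]
    needed = (len(mons) - 1) // 2 + 1  # majority of the post-removal monmap
    if len(new_quorum) >= needed:
        subprocess.check_call(["ceph", "mon", "rm", victim])
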
Dec 05 09:59:43 np0005546420.localdomain sudo[301819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 09:59:43 np0005546420.localdomain sudo[301819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:43 np0005546420.localdomain sudo[301819]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:43 np0005546420.localdomain sudo[301837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 09:59:43 np0005546420.localdomain sudo[301837]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:43 np0005546420.localdomain sudo[301837]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:43 np0005546420.localdomain sudo[301855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:59:43 np0005546420.localdomain sudo[301855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:43 np0005546420.localdomain sudo[301855]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:43 np0005546420.localdomain sudo[301873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:43 np0005546420.localdomain sudo[301873]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:43 np0005546420.localdomain sudo[301873]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:43 np0005546420.localdomain sudo[301891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:59:43 np0005546420.localdomain sudo[301891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:43 np0005546420.localdomain sudo[301891]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:43 np0005546420.localdomain sudo[301925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:59:43 np0005546420.localdomain sudo[301925]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:43 np0005546420.localdomain sudo[301925]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:43 np0005546420.localdomain sudo[301943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 09:59:43 np0005546420.localdomain sudo[301943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:43 np0005546420.localdomain sudo[301943]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain sudo[301961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain sudo[301961]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:43 np0005546420.localdomain sudo[301961]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:43 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:44 np0005546420.localdomain sudo[301979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:59:44 np0005546420.localdomain sudo[301979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:44 np0005546420.localdomain sudo[301979]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:44 np0005546420.localdomain sudo[301997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 09:59:44 np0005546420.localdomain sudo[301997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:44 np0005546420.localdomain sudo[301997]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:44 np0005546420.localdomain sudo[302015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:59:44 np0005546420.localdomain sudo[302015]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:44 np0005546420.localdomain sudo[302015]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:44 np0005546420.localdomain sudo[302033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:44 np0005546420.localdomain sudo[302033]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:44 np0005546420.localdomain sudo[302033]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:44 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:44 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:44 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 09:59:44 np0005546420.localdomain ceph-mon[298353]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:44 np0005546420.localdomain sudo[302051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:59:44 np0005546420.localdomain sudo[302051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:44 np0005546420.localdomain sudo[302051]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:44 np0005546420.localdomain sudo[302085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:59:44 np0005546420.localdomain sudo[302085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:44 np0005546420.localdomain sudo[302085]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:44 np0005546420.localdomain sudo[302103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 09:59:44 np0005546420.localdomain sudo[302103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:44 np0005546420.localdomain sudo[302103]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:44 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:44 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:44 np0005546420.localdomain sudo[302121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:44 np0005546420.localdomain sudo[302121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 09:59:44 np0005546420.localdomain sudo[302121]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:44 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:44 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:44 np0005546420.localdomain podman[302139]: 2025-12-05 09:59:44.736094143 +0000 UTC m=+0.088011377 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 09:59:44 np0005546420.localdomain podman[302139]: 2025-12-05 09:59:44.749079113 +0000 UTC m=+0.100996347 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 09:59:44 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 09:59:44 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 05 09:59:45 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] update: starting ev 9c361157-ddc0-4f4c-b7c5-231a6af5bfcc (Updating node-proxy deployment (+3 -> 3))
Dec 05 09:59:45 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] complete: finished ev 9c361157-ddc0-4f4c-b7c5-231a6af5bfcc (Updating node-proxy deployment (+3 -> 3))
Dec 05 09:59:45 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Completed event 9c361157-ddc0-4f4c-b7c5-231a6af5bfcc (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:59:45 np0005546420.localdomain sudo[302158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 09:59:45 np0005546420.localdomain sudo[302158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:45 np0005546420.localdomain sudo[302158]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:45 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:59:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:45 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:59:45 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:45 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Writing back 50 completed events
Dec 05 09:59:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:46 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 05 09:59:46 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:46 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:59:46 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.0 (monmap changed)...
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:46 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 09:59:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 09:59:46.728 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 09:59:47 np0005546420.localdomain podman[240363]: time="2025-12-05T09:59:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 09:59:47 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:47 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:59:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 09:59:47 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Dec 05 09:59:47 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Dec 05 09:59:47 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Dec 05 09:59:47 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 09:59:47 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:47 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:47 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:59:47 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:59:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:09:59:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18221 "" "Go-http-client/1.1"
Dec 05 09:59:47 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.3 (monmap changed)...
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:48 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:59:48 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:48 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:48 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:59:48 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:59:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:59:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 09:59:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:59:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:59:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:59:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 09:59:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:59:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 09:59:48 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 09:59:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   09:59:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 09:59:48 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:49 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:59:49 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:49 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:59:49 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:59:49 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:49 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.182626) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790182826, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 921, "num_deletes": 252, "total_data_size": 1337426, "memory_usage": 1361120, "flush_reason": "Manual Compaction"}
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790193133, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 780080, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13491, "largest_seqno": 14406, "table_properties": {"data_size": 775758, "index_size": 1857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11895, "raw_average_key_size": 21, "raw_value_size": 766300, "raw_average_value_size": 1413, "num_data_blocks": 79, "num_entries": 542, "num_filter_entries": 542, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928770, "oldest_key_time": 1764928770, "file_creation_time": 1764928790, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 10581 microseconds, and 4783 cpu microseconds.
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.193202) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 780080 bytes OK
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.193234) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.195295) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.195321) EVENT_LOG_v1 {"time_micros": 1764928790195315, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.195357) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 1332407, prev total WAL file size 1332407, number of live WAL files 2.
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.196239) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(761KB)], [21(17MB)]
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790196311, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 19576453, "oldest_snapshot_seqno": -1}
Dec 05 09:59:50 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:59:50 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:50 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:59:50 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 11030 keys, 15486713 bytes, temperature: kUnknown
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790293454, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 15486713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15424438, "index_size": 33630, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 297780, "raw_average_key_size": 26, "raw_value_size": 15236758, "raw_average_value_size": 1381, "num_data_blocks": 1266, "num_entries": 11030, "num_filter_entries": 11030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764928790, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 09:59:50 np0005546420.localdomain sudo[302176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.293874) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 15486713 bytes
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.298269) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.2 rd, 159.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 17.9 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(44.9) write-amplify(19.9) OK, records in: 11563, records dropped: 533 output_compression: NoCompression
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.298320) EVENT_LOG_v1 {"time_micros": 1764928790298301, "job": 10, "event": "compaction_finished", "compaction_time_micros": 97280, "compaction_time_cpu_micros": 47808, "output_level": 6, "num_output_files": 1, "total_output_size": 15486713, "num_input_records": 11563, "num_output_records": 11030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790298607, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec 05 09:59:50 np0005546420.localdomain sudo[302176]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790303202, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.196168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.303310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.303318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.303322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.303325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:50 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-09:59:50.303328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 09:59:50 np0005546420.localdomain sudo[302176]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:50 np0005546420.localdomain sudo[302194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:50 np0005546420.localdomain sudo[302194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:50 np0005546420.localdomain podman[302228]: 
Dec 05 09:59:50 np0005546420.localdomain podman[302228]: 2025-12-05 09:59:50.85808388 +0000 UTC m=+0.060266104 container create dc1bbdebfe0c6fd86d0621db9f18d207d21fbf82b65faf2a2e27abc694522f0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_mccarthy, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.openshift.tags=rhceph ceph)
Dec 05 09:59:50 np0005546420.localdomain systemd[1]: Started libpod-conmon-dc1bbdebfe0c6fd86d0621db9f18d207d21fbf82b65faf2a2e27abc694522f0f.scope.
Dec 05 09:59:50 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:50 np0005546420.localdomain podman[302228]: 2025-12-05 09:59:50.83039742 +0000 UTC m=+0.032579634 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:50 np0005546420.localdomain podman[302228]: 2025-12-05 09:59:50.938786792 +0000 UTC m=+0.140969026 container init dc1bbdebfe0c6fd86d0621db9f18d207d21fbf82b65faf2a2e27abc694522f0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_mccarthy, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, ceph=True, GIT_BRANCH=main)
Dec 05 09:59:50 np0005546420.localdomain podman[302228]: 2025-12-05 09:59:50.950235324 +0000 UTC m=+0.152417548 container start dc1bbdebfe0c6fd86d0621db9f18d207d21fbf82b65faf2a2e27abc694522f0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_mccarthy, RELEASE=main, io.openshift.expose-services=, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, name=rhceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True)
Dec 05 09:59:50 np0005546420.localdomain podman[302228]: 2025-12-05 09:59:50.950458801 +0000 UTC m=+0.152641065 container attach dc1bbdebfe0c6fd86d0621db9f18d207d21fbf82b65faf2a2e27abc694522f0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_mccarthy, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, version=7, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True)
Dec 05 09:59:50 np0005546420.localdomain strange_mccarthy[302243]: 167 167
Dec 05 09:59:50 np0005546420.localdomain podman[302228]: 2025-12-05 09:59:50.956323981 +0000 UTC m=+0.158506255 container died dc1bbdebfe0c6fd86d0621db9f18d207d21fbf82b65faf2a2e27abc694522f0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_mccarthy, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, name=rhceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_BRANCH=main)
Dec 05 09:59:50 np0005546420.localdomain systemd[1]: libpod-dc1bbdebfe0c6fd86d0621db9f18d207d21fbf82b65faf2a2e27abc694522f0f.scope: Deactivated successfully.
Dec 05 09:59:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-9adeb36a55c32170bdf201d81f3f3788b83b389609e505d9001c42d292dd6ed2-merged.mount: Deactivated successfully.
Dec 05 09:59:51 np0005546420.localdomain podman[302248]: 2025-12-05 09:59:51.065199449 +0000 UTC m=+0.096440886 container remove dc1bbdebfe0c6fd86d0621db9f18d207d21fbf82b65faf2a2e27abc694522f0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_mccarthy, build-date=2025-11-26T19:44:28Z, name=rhceph, version=7, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=1763362218, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:59:51 np0005546420.localdomain systemd[1]: libpod-conmon-dc1bbdebfe0c6fd86d0621db9f18d207d21fbf82b65faf2a2e27abc694522f0f.scope: Deactivated successfully.
Dec 05 09:59:51 np0005546420.localdomain sudo[302194]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:51 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 05 09:59:51 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:51 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:59:51 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:59:51 np0005546420.localdomain sudo[302266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:51 np0005546420.localdomain sudo[302266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:51 np0005546420.localdomain sudo[302266]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 09:59:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:51 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:51 np0005546420.localdomain sudo[302284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:51 np0005546420.localdomain sudo[302284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 09:59:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 09:59:51 np0005546420.localdomain podman[302317]: 2025-12-05 09:59:51.83300276 +0000 UTC m=+0.093993041 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 09:59:51 np0005546420.localdomain podman[302317]: 2025-12-05 09:59:51.843913235 +0000 UTC m=+0.104903516 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 09:59:51 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 09:59:51 np0005546420.localdomain podman[302330]: 2025-12-05 09:59:51.89218028 +0000 UTC m=+0.137412327 container create 8f63f3f2a381b7e47e8212f32addf01bf56798f49a878c62afe719374f408861 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_mclaren, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, release=1763362218, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, version=7)
Dec 05 09:59:51 np0005546420.localdomain systemd[1]: Started libpod-conmon-8f63f3f2a381b7e47e8212f32addf01bf56798f49a878c62afe719374f408861.scope.
Dec 05 09:59:51 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:51 np0005546420.localdomain podman[302330]: 2025-12-05 09:59:51.859507825 +0000 UTC m=+0.104739892 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:51 np0005546420.localdomain podman[302330]: 2025-12-05 09:59:51.963152862 +0000 UTC m=+0.208384899 container init 8f63f3f2a381b7e47e8212f32addf01bf56798f49a878c62afe719374f408861 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_mclaren, build-date=2025-11-26T19:44:28Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, release=1763362218)
Dec 05 09:59:51 np0005546420.localdomain podman[302330]: 2025-12-05 09:59:51.976044088 +0000 UTC m=+0.221276125 container start 8f63f3f2a381b7e47e8212f32addf01bf56798f49a878c62afe719374f408861 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_mclaren, vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, io.buildah.version=1.41.4, release=1763362218, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, distribution-scope=public, version=7)
Dec 05 09:59:51 np0005546420.localdomain podman[302330]: 2025-12-05 09:59:51.976919626 +0000 UTC m=+0.222151703 container attach 8f63f3f2a381b7e47e8212f32addf01bf56798f49a878c62afe719374f408861 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_mclaren, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, release=1763362218, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph)
Dec 05 09:59:51 np0005546420.localdomain zen_mclaren[302365]: 167 167
Dec 05 09:59:51 np0005546420.localdomain systemd[1]: libpod-8f63f3f2a381b7e47e8212f32addf01bf56798f49a878c62afe719374f408861.scope: Deactivated successfully.
Dec 05 09:59:51 np0005546420.localdomain podman[302330]: 2025-12-05 09:59:51.982608681 +0000 UTC m=+0.227840808 container died 8f63f3f2a381b7e47e8212f32addf01bf56798f49a878c62afe719374f408861 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_mclaren, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 09:59:52 np0005546420.localdomain podman[302316]: 2025-12-05 09:59:51.986615484 +0000 UTC m=+0.247869523 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 09:59:52 np0005546420.localdomain podman[302316]: 2025-12-05 09:59:52.130138256 +0000 UTC m=+0.391392305 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec 05 09:59:52 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 09:59:52 np0005546420.localdomain podman[302374]: 2025-12-05 09:59:52.184823069 +0000 UTC m=+0.191303613 container remove 8f63f3f2a381b7e47e8212f32addf01bf56798f49a878c62afe719374f408861 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_mclaren, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:59:52 np0005546420.localdomain systemd[1]: libpod-conmon-8f63f3f2a381b7e47e8212f32addf01bf56798f49a878c62afe719374f408861.scope: Deactivated successfully.
Dec 05 09:59:52 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.1 (monmap changed)...
Dec 05 09:59:52 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 09:59:52 np0005546420.localdomain ceph-mon[298353]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:52 np0005546420.localdomain sudo[302284]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:52 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:52 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:52 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Dec 05 09:59:52 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Dec 05 09:59:52 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Dec 05 09:59:52 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:59:52 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:52 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:52 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:59:52 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:59:52 np0005546420.localdomain sudo[302401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:52 np0005546420.localdomain sudo[302401]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:52 np0005546420.localdomain sudo[302401]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:52 np0005546420.localdomain sudo[302419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:52 np0005546420.localdomain sudo[302419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e60468ed9efcfd01c2d19ff321ab4f8346679d58d2ac8eb3a21f02e2de5247af-merged.mount: Deactivated successfully.
Dec 05 09:59:53 np0005546420.localdomain podman[302453]: 2025-12-05 09:59:53.010409837 +0000 UTC m=+0.068085926 container create 4e7e9f71c379dc4763bd834c16766e33906fc522cbbb58d35de97166c7fc0037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_goldwasser, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:59:53 np0005546420.localdomain systemd[1]: Started libpod-conmon-4e7e9f71c379dc4763bd834c16766e33906fc522cbbb58d35de97166c7fc0037.scope.
Dec 05 09:59:53 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:53 np0005546420.localdomain podman[302453]: 2025-12-05 09:59:52.977163974 +0000 UTC m=+0.034840103 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:53 np0005546420.localdomain podman[302453]: 2025-12-05 09:59:53.080329967 +0000 UTC m=+0.138006076 container init 4e7e9f71c379dc4763bd834c16766e33906fc522cbbb58d35de97166c7fc0037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_goldwasser, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, vcs-type=git, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 09:59:53 np0005546420.localdomain systemd[1]: tmp-crun.psGjye.mount: Deactivated successfully.
Dec 05 09:59:53 np0005546420.localdomain podman[302453]: 2025-12-05 09:59:53.093169171 +0000 UTC m=+0.150845280 container start 4e7e9f71c379dc4763bd834c16766e33906fc522cbbb58d35de97166c7fc0037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_goldwasser, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, ceph=True, release=1763362218, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.expose-services=)
Dec 05 09:59:53 np0005546420.localdomain podman[302453]: 2025-12-05 09:59:53.093425129 +0000 UTC m=+0.151101218 container attach 4e7e9f71c379dc4763bd834c16766e33906fc522cbbb58d35de97166c7fc0037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_goldwasser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_BRANCH=main, release=1763362218, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7)
Dec 05 09:59:53 np0005546420.localdomain clever_goldwasser[302468]: 167 167
Dec 05 09:59:53 np0005546420.localdomain systemd[1]: libpod-4e7e9f71c379dc4763bd834c16766e33906fc522cbbb58d35de97166c7fc0037.scope: Deactivated successfully.
Dec 05 09:59:53 np0005546420.localdomain podman[302453]: 2025-12-05 09:59:53.098717572 +0000 UTC m=+0.156393701 container died 4e7e9f71c379dc4763bd834c16766e33906fc522cbbb58d35de97166c7fc0037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_goldwasser, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, version=7)
Dec 05 09:59:53 np0005546420.localdomain podman[302473]: 2025-12-05 09:59:53.192091674 +0000 UTC m=+0.079498046 container remove 4e7e9f71c379dc4763bd834c16766e33906fc522cbbb58d35de97166c7fc0037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_goldwasser, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 09:59:53 np0005546420.localdomain systemd[1]: libpod-conmon-4e7e9f71c379dc4763bd834c16766e33906fc522cbbb58d35de97166c7fc0037.scope: Deactivated successfully.
Dec 05 09:59:53 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:53 np0005546420.localdomain sudo[302419]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.4 (monmap changed)...
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:53 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:59:53 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:53 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:53 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:59:53 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:59:53 np0005546420.localdomain sudo[302497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:53 np0005546420.localdomain sudo[302497]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:53 np0005546420.localdomain sudo[302497]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:53 np0005546420.localdomain sudo[302515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:53 np0005546420.localdomain sudo[302515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:53 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-95d7e87a4c54b297f9355f6a7bc81867abe48c9131039be0368531937885b45c-merged.mount: Deactivated successfully.
Dec 05 09:59:54 np0005546420.localdomain podman[302549]: 2025-12-05 09:59:54.049196221 +0000 UTC m=+0.081928261 container create 8c561797a8f46ae0364aff543543c4c4b7405637f122c7868dafb3aae3cd3e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_khorana, io.buildah.version=1.41.4, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_CLEAN=True, ceph=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:59:54 np0005546420.localdomain systemd[1]: Started libpod-conmon-8c561797a8f46ae0364aff543543c4c4b7405637f122c7868dafb3aae3cd3e96.scope.
Dec 05 09:59:54 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:54 np0005546420.localdomain podman[302549]: 2025-12-05 09:59:54.016381461 +0000 UTC m=+0.049113551 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:54 np0005546420.localdomain podman[302549]: 2025-12-05 09:59:54.116333464 +0000 UTC m=+0.149065474 container init 8c561797a8f46ae0364aff543543c4c4b7405637f122c7868dafb3aae3cd3e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_khorana, build-date=2025-11-26T19:44:28Z, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, ceph=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Dec 05 09:59:54 np0005546420.localdomain podman[302549]: 2025-12-05 09:59:54.127144177 +0000 UTC m=+0.159876217 container start 8c561797a8f46ae0364aff543543c4c4b7405637f122c7868dafb3aae3cd3e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_khorana, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, distribution-scope=public, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, release=1763362218, RELEASE=main, ceph=True, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 09:59:54 np0005546420.localdomain podman[302549]: 2025-12-05 09:59:54.127466677 +0000 UTC m=+0.160198797 container attach 8c561797a8f46ae0364aff543543c4c4b7405637f122c7868dafb3aae3cd3e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_khorana, ceph=True, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=7, release=1763362218, vcs-type=git)
Dec 05 09:59:54 np0005546420.localdomain zealous_khorana[302564]: 167 167
Dec 05 09:59:54 np0005546420.localdomain systemd[1]: libpod-8c561797a8f46ae0364aff543543c4c4b7405637f122c7868dafb3aae3cd3e96.scope: Deactivated successfully.
Dec 05 09:59:54 np0005546420.localdomain podman[302549]: 2025-12-05 09:59:54.131014467 +0000 UTC m=+0.163746557 container died 8c561797a8f46ae0364aff543543c4c4b7405637f122c7868dafb3aae3cd3e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_khorana, io.buildah.version=1.41.4, architecture=x86_64, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=)
Dec 05 09:59:54 np0005546420.localdomain podman[302569]: 2025-12-05 09:59:54.221569851 +0000 UTC m=+0.077029379 container remove 8c561797a8f46ae0364aff543543c4c4b7405637f122c7868dafb3aae3cd3e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_khorana, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, io.openshift.expose-services=, vcs-type=git, release=1763362218, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public)
Dec 05 09:59:54 np0005546420.localdomain systemd[1]: libpod-conmon-8c561797a8f46ae0364aff543543c4c4b7405637f122c7868dafb3aae3cd3e96.scope: Deactivated successfully.
Dec 05 09:59:54 np0005546420.localdomain sudo[302515]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:54 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:59:54 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:54 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:59:54 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:59:54 np0005546420.localdomain sudo[302585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 09:59:54 np0005546420.localdomain sudo[302585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:54 np0005546420.localdomain sudo[302585]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:54 np0005546420.localdomain sudo[302603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 09:59:54 np0005546420.localdomain sudo[302603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 09:59:54 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-6f847fa6785bb814efdfb06ee821d83c189509d70bbb543c65fa3e404d09da8c-merged.mount: Deactivated successfully.
Dec 05 09:59:54 np0005546420.localdomain podman[302638]: 2025-12-05 09:59:54.938984332 +0000 UTC m=+0.085278433 container create 03fdbf54ef292ddd0698542ab17c37f40f5b1c40193e8b405afd60a754abebfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_hawking, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, ceph=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1763362218, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 09:59:54 np0005546420.localdomain systemd[1]: Started libpod-conmon-03fdbf54ef292ddd0698542ab17c37f40f5b1c40193e8b405afd60a754abebfa.scope.
Dec 05 09:59:54 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 09:59:54 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 09:59:55 np0005546420.localdomain podman[302638]: 2025-12-05 09:59:54.905218274 +0000 UTC m=+0.051512405 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 09:59:55 np0005546420.localdomain podman[302638]: 2025-12-05 09:59:55.005992973 +0000 UTC m=+0.152287074 container init 03fdbf54ef292ddd0698542ab17c37f40f5b1c40193e8b405afd60a754abebfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_hawking, architecture=x86_64, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, ceph=True, version=7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 05 09:59:55 np0005546420.localdomain podman[302638]: 2025-12-05 09:59:55.01500847 +0000 UTC m=+0.161302571 container start 03fdbf54ef292ddd0698542ab17c37f40f5b1c40193e8b405afd60a754abebfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_hawking, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, version=7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z)
Dec 05 09:59:55 np0005546420.localdomain podman[302638]: 2025-12-05 09:59:55.015299779 +0000 UTC m=+0.161593880 container attach 03fdbf54ef292ddd0698542ab17c37f40f5b1c40193e8b405afd60a754abebfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_hawking, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64)
Dec 05 09:59:55 np0005546420.localdomain adoring_hawking[302653]: 167 167
Dec 05 09:59:55 np0005546420.localdomain systemd[1]: libpod-03fdbf54ef292ddd0698542ab17c37f40f5b1c40193e8b405afd60a754abebfa.scope: Deactivated successfully.
Dec 05 09:59:55 np0005546420.localdomain podman[302638]: 2025-12-05 09:59:55.019343173 +0000 UTC m=+0.165637284 container died 03fdbf54ef292ddd0698542ab17c37f40f5b1c40193e8b405afd60a754abebfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_hawking, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container)
Dec 05 09:59:55 np0005546420.localdomain podman[302658]: 2025-12-05 09:59:55.114055286 +0000 UTC m=+0.086641835 container remove 03fdbf54ef292ddd0698542ab17c37f40f5b1c40193e8b405afd60a754abebfa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_hawking, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git)
Dec 05 09:59:55 np0005546420.localdomain systemd[1]: libpod-conmon-03fdbf54ef292ddd0698542ab17c37f40f5b1c40193e8b405afd60a754abebfa.scope: Deactivated successfully.
Dec 05 09:59:55 np0005546420.localdomain sudo[302603]: pam_unix(sudo:session): session closed for user root
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 09:59:55 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:59:55 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:55 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:59:55 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:59:55 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 09:59:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
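[Annotation] The reconfigure cycle above is driven entirely by mon_commands: cephadm asks the mon for the daemon's keyring ("auth get-or-create") and for a minimal client configuration ("config generate-minimal-conf"), then pushes both to the target host. The same two queries can be replayed by hand with the ceph CLI; a minimal Python sketch, assuming an admin keyring on this host (the entity name and caps are copied from the audit lines above, everything else is illustrative):

    import subprocess

    # Keyring for the crash daemon, exactly as dispatched by mgr.26606 above:
    subprocess.run(
        ["ceph", "auth", "get-or-create", "client.crash.np0005546421.localdomain",
         "mon", "profile crash", "mgr", "profile crash"],
        check=True,
    )

    # Minimal ceph.conf that cephadm ships alongside the keyring:
    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True)
    print(minimal_conf)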
Dec 05 09:59:55 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-552d7d0195489522bbc3a0357398a040628b959db949b3f9b64e9df26482477d-merged.mount: Deactivated successfully.
Dec 05 09:59:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 09:59:55 np0005546420.localdomain podman[302675]: 2025-12-05 09:59:55.986052911 +0000 UTC m=+0.088197054 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:59:56 np0005546420.localdomain podman[302675]: 2025-12-05 09:59:56.05466779 +0000 UTC m=+0.156811943 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 09:59:56 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
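[Annotation] The healthcheck above is a one-shot systemd unit wrapping "podman healthcheck run"; the container is reported health_status=healthy when its configured test (here /openstack/healthcheck) exits 0, after which the transient unit deactivates. A sketch of the same probe by hand, using the container ID from the log:

    import subprocess

    OVN_CONTROLLER = "d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0"

    # Exit status 0 corresponds to the health_status=healthy event above.
    result = subprocess.run(["podman", "healthcheck", "run", OVN_CONTROLLER])
    print("healthy" if result.returncode == 0 else "unhealthy")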
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:59:56 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 05 09:59:56 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:59:56 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.44484 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005546421.localdomain:172.18.0.105", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:56 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:59:56 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:56 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:59:56 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 09:59:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 09:59:57 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:57 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.2 (monmap changed)...
Dec 05 09:59:57 np0005546420.localdomain ceph-mon[298353]: from='client.44484 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005546421.localdomain:172.18.0.105", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 09:59:57 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 09:59:57 np0005546420.localdomain ceph-mon[298353]: Deploying daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 09:59:58 np0005546420.localdomain ceph-mon[298353]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
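[Annotation] The audit line at 09:59:56 shows where the new monitor comes from: client.admin dispatched an "orch daemon add" with a host:ip placement, and cephadm translated it into the "Deploying daemon mon.np0005546421" messages above. The dispatched JSON mon_command corresponds to this CLI invocation; a sketch assuming client.admin credentials:

    import subprocess

    # Equivalent of the dispatched {"prefix": "orch daemon add",
    # "daemon_type": "mon", "placement": "np0005546421.localdomain:172.18.0.105"}:
    subprocess.run(
        ["ceph", "orch", "daemon", "add", "mon",
         "np0005546421.localdomain:172.18.0.105"],
        check=True,
    )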
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 09:59:59 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 05 09:59:59 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546421 172.18.0.108:0/2329815954; not ready for session (expect reconnect)
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:59:59 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546421: (2) No such file or directory
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546419"} v 0)
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: log_channel(cluster) log [INF] : mon.np0005546420 calling monitor election
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: paxos.1).electionLogic(64) init, last seen epoch 64
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 09:59:59 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 09:59:59 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546421: (22) Invalid argument
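[Annotation] The two failures above accompany the new monitor's arrival: "mon metadata" for np0005546421 first returns (2) No such file or directory, then (22) Invalid argument while the election for monmap e15 is still running, presumably because the freshly deployed mon has not reported its metadata yet (an inference from the surrounding election messages, not something the log states). The probe the mgr keeps retrying can be issued manually; a sketch assuming an admin keyring:

    import subprocess

    # The query mgr.26606 retries about once per second in the lines below:
    subprocess.run(["ceph", "mon", "metadata", "np0005546421"])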
Dec 05 10:00:00 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546421 172.18.0.108:0/2329815954; not ready for session (expect reconnect)
Dec 05 10:00:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 10:00:00 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:00 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546421: (22) Invalid argument
Dec 05 10:00:01 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:01 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546421 172.18.0.108:0/2329815954; not ready for session (expect reconnect)
Dec 05 10:00:01 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 10:00:01 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:01 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546421: (22) Invalid argument
Dec 05 10:00:02 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546421 172.18.0.108:0/2329815954; not ready for session (expect reconnect)
Dec 05 10:00:02 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 10:00:02 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:02 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546421: (22) Invalid argument
Dec 05 10:00:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_auth_request failed to assign global_id
Dec 05 10:00:03 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:00:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_auth_request failed to assign global_id
Dec 05 10:00:03 np0005546420.localdomain podman[302700]: 2025-12-05 10:00:03.504724527 +0000 UTC m=+0.078989920 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:00:03 np0005546420.localdomain podman[302700]: 2025-12-05 10:00:03.542010524 +0000 UTC m=+0.116275907 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 05 10:00:03 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:00:03 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546421 172.18.0.108:0/2329815954; not ready for session (expect reconnect)
Dec 05 10:00:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 10:00:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:03 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546421: (22) Invalid argument
Dec 05 10:00:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_auth_request failed to assign global_id
Dec 05 10:00:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:00:04.121 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:00:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:00:04.123 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:00:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:00:04.124 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
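[Annotation] The three DEBUG lines above are the standard acquire/acquired/released trace that oslo.concurrency emits around a named lock: ProcessMonitor serializes its child-process liveness check behind "_check_child_processes". A minimal sketch of the same guard (illustrative only; the real method lives in neutron.agent.linux.external_process.ProcessMonitor):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Serialized liveness check; oslo logs the acquire/held/released
        # timings seen in the journal lines above.
        pass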
Dec 05 10:00:04 np0005546420.localdomain ceph-mds[283770]: mds.beacon.mds.np0005546420.eqhasr missed beacon ack from the monitors
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_auth_request failed to assign global_id
Dec 05 10:00:04 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546421 172.18.0.108:0/2329815954; not ready for session (expect reconnect)
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mgr[286940]: mgr finish mon failed to return metadata for mon.np0005546421: (22) Invalid argument
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: paxos.1).electionLogic(65) init, last seen epoch 65, mid-election, bumping
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 10:00:04 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 05 10:00:04 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546419 calling monitor election
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420 calling monitor election
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546421 calling monitor election
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546419 is new leader, mons np0005546419,np0005546420,np0005546421 in quorum (ranks 0,1,2)
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: monmap epoch 15
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: last_changed 2025-12-05T09:59:59.724612+0000
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: created 2025-12-05T07:49:07.934655+0000
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: min_mon_release 18 (reef)
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: election_strategy: 1
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546420
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546421
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: osdmap e89: 6 total, 6 up, 6 in
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mgrmap e35: np0005546420.aoeylc(active, since 83s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: overall HEALTH_OK
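[Annotation] The block above is the leader's post-election summary relayed through the local mon: monmap e15 now lists all three monitors, mon.np0005546419 leads, and the cluster reports HEALTH_OK. The same quorum view can be queried directly; a sketch assuming an admin keyring, with the expected values read off the log lines above:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "quorum_status", "--format", "json"], text=True))
    print(status["quorum_leader_name"])  # np0005546419, per the election result
    print(status["quorum_names"])        # np0005546419, np0005546420, np0005546421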
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:04 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 10:00:04 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 10:00:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:04.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:00:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:04.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:00:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:00:05 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:05 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_open ignoring open from mon.np0005546421 172.18.0.108:0/2329815954; not ready for session (expect reconnect)
Dec 05 10:00:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 05 10:00:05 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:05 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.5 (monmap changed)...
Dec 05 10:00:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 10:00:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:05 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 10:00:05 np0005546420.localdomain ceph-mon[298353]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 10:00:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 10:00:06 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 05 10:00:06 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 05 10:00:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 05 10:00:06 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 10:00:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:00:06 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:06 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 05 10:00:06 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 05 10:00:06 np0005546420.localdomain ceph-mgr[286940]: mgr.server handle_report got status from non-daemon mon.np0005546421
Dec 05 10:00:06 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:06.727+0000 7f38eb2af640 -1 mgr.server handle_report got status from non-daemon mon.np0005546421
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 10:00:07 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 05 10:00:07 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3768139666' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3768139666' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:00:07 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:07 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 10:00:07 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 10:00:07 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:07.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:00:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:07.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:00:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:07.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:08 np0005546420.localdomain sudo[302720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:08 np0005546420.localdomain sudo[302720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:08 np0005546420.localdomain sudo[302720]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:08.240 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:00:08 np0005546420.localdomain sudo[302738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:00:08 np0005546420.localdomain sudo[302738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:08.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:00:08 np0005546420.localdomain sudo[302738]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:09 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:09.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:00:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:09.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:00:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:00:10 np0005546420.localdomain podman[302789]: 2025-12-05 10:00:10.513018309 +0000 UTC m=+0.088925815 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:00:10 np0005546420.localdomain podman[302789]: 2025-12-05 10:00:10.545541759 +0000 UTC m=+0.121449245 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:00:10 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/61651472' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 05 10:00:10 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:00:10 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:10 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:10 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:10 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:10 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:10 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:11 np0005546420.localdomain sudo[302809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 10:00:11 np0005546420.localdomain sudo[302809]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[302809]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:11 np0005546420.localdomain sudo[302827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 10:00:11 np0005546420.localdomain sudo[302827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[302827]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:11 np0005546420.localdomain sudo[302845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:00:11 np0005546420.localdomain sudo[302845]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[302845]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:11 np0005546420.localdomain sudo[302863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:11 np0005546420.localdomain sudo[302863]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[302863]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:11 np0005546420.localdomain sudo[302881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:00:11 np0005546420.localdomain sudo[302881]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[302881]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:11 np0005546420.localdomain sudo[302915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:00:11 np0005546420.localdomain sudo[302915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[302915]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f38c38e7190>)]
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f38c38e73d0>)]
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 05 10:00:11 np0005546420.localdomain sudo[302933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:00:11 np0005546420.localdomain sudo[302933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[302933]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:11 np0005546420.localdomain sudo[302951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 10:00:11 np0005546420.localdomain sudo[302951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[302951]: pam_unix(sudo:session): session closed for user root
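[Editor's note] The sudo trail above is cephadm distributing a refreshed ceph.conf: it stages the file under a per-fsid scratch directory (mkdir/touch), fixes ownership and mode (chown, chmod 644), then moves it over the live /etc/ceph/ceph.conf. A minimal sketch of that stage-then-rename pattern follows; paths are illustrative, and note the logged sequence stages under /tmp (so its final mv may cross filesystems), whereas staging next to the destination, as below, keeps the final step an atomic rename.

```python
# Sketch of the stage-then-rename pattern in the sudo trail above.
# Paths and helper name are illustrative, not cephadm internals.
import os
import tempfile

def install_config(content: str, dest: str, mode: int = 0o644) -> None:
    # Stage in the destination directory so os.replace() is an atomic
    # rename on the same filesystem (the logged /bin/mv plays this role).
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(dest), suffix=".new")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.chmod(tmp_path, mode)    # mirrors the logged `chmod 644`
        os.replace(tmp_path, dest)  # mirrors the logged `mv ... ceph.conf`
    except BaseException:
        # Remove the staged file if anything fails before the rename.
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
        raise
```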
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:11 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/61651472' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 05 10:00:11 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:11 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:00:11 np0005546420.localdomain sudo[302969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 10:00:11 np0005546420.localdomain sudo[302969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[302969]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:11 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:11 np0005546420.localdomain sudo[302987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 10:00:11 np0005546420.localdomain sudo[302987]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[302987]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:11 np0005546420.localdomain sudo[303005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:00:11 np0005546420.localdomain sudo[303005]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[303005]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:11.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:00:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:11.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:00:11 np0005546420.localdomain sudo[303023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:11 np0005546420.localdomain sudo[303023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:11 np0005546420.localdomain sudo[303023]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:12 np0005546420.localdomain sudo[303041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:00:12 np0005546420.localdomain sudo[303041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:12 np0005546420.localdomain sudo[303041]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:12 np0005546420.localdomain ceph-mgr[286940]: log_channel(audit) log [DBG] : from='client.64106 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:00:12 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO root] Reconfig service osd.default_drive_group
Dec 05 10:00:12 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 10:00:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:12.047 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:00:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:12.048 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:00:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:12.048 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:00:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:12.049 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:00:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:12.049 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 10:00:12 np0005546420.localdomain sudo[303076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:00:12 np0005546420.localdomain sudo[303076]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:12 np0005546420.localdomain sudo[303076]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 10:00:12 np0005546420.localdomain sudo[303104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:00:12 np0005546420.localdomain sudo[303104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:12 np0005546420.localdomain sudo[303104]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 10:00:12 np0005546420.localdomain sudo[303131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:12 np0005546420.localdomain sudo[303131]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:12 np0005546420.localdomain sudo[303131]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
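[Editor's note] The burst of `config-key set` commands above is the cephadm mgr module persisting its per-host cache (keys of the form mgr/cephadm/host.<hostname> and ...devices.0) in the monitors' key-value store; each write round-trips through the mons, which is why the peon traces every one in handle_command. A hedged sketch of the same store via the ceph CLI, with an illustrative key and payload:

```python
# Hedged sketch: reading/writing the mon config-key store the way the
# cephadm HostCache does. The key and payload below are illustrative.
import json
import subprocess

def config_key_set(key: str, value: dict) -> None:
    subprocess.run(["ceph", "config-key", "set", key, json.dumps(value)],
                   check=True)

def config_key_get(key: str) -> dict:
    out = subprocess.run(["ceph", "config-key", "get", key],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

# Example (hypothetical host record):
# config_key_set("mgr/cephadm/host.np0005546420.localdomain",
#                {"last_refresh": "2025-12-05T10:00:12"})
```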
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3529354564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:00:12 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] update: starting ev b31af23f-8641-4803-b62d-e6b70bbbb80f (Updating node-proxy deployment (+3 -> 3))
Dec 05 10:00:12 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] complete: finished ev b31af23f-8641-4803-b62d-e6b70bbbb80f (Updating node-proxy deployment (+3 -> 3))
Dec 05 10:00:12 np0005546420.localdomain ceph-mgr[286940]: [progress INFO root] Completed event b31af23f-8641-4803-b62d-e6b70bbbb80f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 05 10:00:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:12.563 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
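[Editor's note] The nova periodic task above shells out to `ceph df --format=json` with the openstack client identity to size the RBD-backed disk pool. A minimal sketch of deriving free capacity from that output; the exact JSON field names can vary slightly across Ceph releases, so treat them as assumptions.

```python
# Sketch of what nova's resource audit does above: run `ceph df` as the
# openstack client and read cluster-wide free bytes from the JSON stats.
import json
import subprocess

def ceph_cluster_free_gib(conf: str = "/etc/ceph/ceph.conf",
                          user: str = "openstack") -> float:
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]          # field name is an assumption
    return stats["total_avail_bytes"] / 2**30
```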
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: from='client.64106 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: Reconfig service osd.default_drive_group
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: message repeated 17 times: [ from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' ]
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3529354564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:00:12 np0005546420.localdomain sudo[303151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:00:12 np0005546420.localdomain sudo[303151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:00:12 np0005546420.localdomain sudo[303151]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:12.773 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:00:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:12.775 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12284MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:00:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:12.775 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:00:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:12.775 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:00:12 np0005546420.localdomain podman[303169]: 2025-12-05 10:00:12.810776598 +0000 UTC m=+0.070385386 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:00:12 np0005546420.localdomain podman[303169]: 2025-12-05 10:00:12.818564507 +0000 UTC m=+0.078173305 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:00:12 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
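[Editor's note] The podman lines above are one healthcheck cycle for the node_exporter container: systemd starts a transient `podman healthcheck run` unit, podman executes the configured test (`/openstack/healthcheck node_exporter`) inside the container, and records health_status=healthy because it exited 0. An illustrative probe with the same contract, assuming the exporter answers on its published port 9100 (per the container config above):

```python
# Illustrative healthcheck: exit 0 if node_exporter answers on :9100,
# non-zero otherwise. podman records the exit code as the health status.
import sys
import urllib.request

def main() -> int:
    try:
        with urllib.request.urlopen("http://127.0.0.1:9100/metrics",
                                    timeout=5) as resp:
            return 0 if resp.status == 200 else 1
    except OSError:
        return 1

if __name__ == "__main__":
    sys.exit(main())
```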
Dec 05 10:00:12 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 10:00:12 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:00:12 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:12 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 10:00:12 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:00:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:00:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
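[Editor's note] The ceilometer block above shows the polling manager skipping every libvirt-derived meter for the cycle: its discovery step returned no resources because no instances run on this compute node. A toy sketch of that skip decision; names are illustrative, not ceilometer's actual classes.

```python
# Toy sketch of the per-cycle skip logic logged above: ask discovery for
# resources, and skip the pollster when none come back.
def poll_and_notify(pollsters, discover):
    for pollster in pollsters:
        resources = discover(pollster)
        if not resources:
            print(f"Skip pollster {pollster}, no resources found this cycle")
            continue
        # ... sample each resource and publish measurements ...

# With no instances on the node, every pollster is skipped:
poll_and_notify(["cpu", "memory.usage"], lambda p: [])
```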
Dec 05 10:00:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:13.081 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:00:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:13.082 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:00:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:13.102 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2454246170' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:00:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:13.555 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:00:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:13.562 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [INF] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 10:00:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:13.671 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:00:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:13.673 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:00:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:13.674 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
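[Editor's note] The lockutils lines bracketing the resource audit show nova serializing every resource-tracker mutation under one named "compute_resources" lock, logging wait and hold times (0.898s held here). A stdlib stand-in for that pattern; nova actually uses oslo.concurrency's fair locks, so threading.Lock is an assumption-level substitute.

```python
# Minimal stdlib sketch of the acquire/hold logging pattern above.
import threading
import time
from contextlib import contextmanager

_LOCKS: dict[str, threading.Lock] = {}

@contextmanager
def synchronized(name: str, caller: str):
    lock = _LOCKS.setdefault(name, threading.Lock())
    t0 = time.monotonic()
    with lock:
        waited = time.monotonic() - t0
        print(f'Lock "{name}" acquired by "{caller}" :: waited {waited:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            held = time.monotonic() - t1
            print(f'Lock "{name}" "released" by "{caller}" :: held {held:.3f}s')

with synchronized("compute_resources", "update_available_resource"):
    pass  # audit resources, update placement inventory, etc.
```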
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: pgmap v49: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2454246170' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' 
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 e90: 6 total, 6 up, 6 in
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr handle_mgr_map I was active but no longer am
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  1: '-n'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  2: 'mgr.np0005546420.aoeylc'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  3: '-f'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  4: '--setuser'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  5: 'ceph'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  6: '--setgroup'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  7: 'ceph'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  8: '--default-log-to-file=false'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  9: '--default-log-to-journald=true'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 05 10:00:13 np0005546420.localdomain ceph-mgr[286940]: mgr respawn  exe_path /proc/self/exe
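[Editor's note] The respawn dump above is ceph-mgr re-executing itself after losing the active role: it replays its saved argv through /proc/self/exe, which restarts the same running binary even if the file at the original path has since been replaced. A minimal, Linux-only sketch of that trick:

```python
# Sketch of the /proc/self/exe respawn pattern logged above (Linux-only).
import os
import sys

def respawn() -> None:
    argv = [sys.executable] + sys.argv  # ceph-mgr replays its saved argv
    os.execv("/proc/self/exe", argv)    # never returns on success
```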
Dec 05 10:00:13 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:13.884+0000 7f3947801640 -1 mgr handle_mgr_map I was active but no longer am
Dec 05 10:00:13 np0005546420.localdomain sshd[299306]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 10:00:13 np0005546420.localdomain systemd[1]: session-69.scope: Deactivated successfully.
Dec 05 10:00:13 np0005546420.localdomain systemd[1]: session-69.scope: Consumed 22.259s CPU time.
Dec 05 10:00:13 np0005546420.localdomain systemd-logind[762]: Session 69 logged out. Waiting for processes to exit.
Dec 05 10:00:13 np0005546420.localdomain systemd-logind[762]: Removed session 69.
Dec 05 10:00:13 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: ignoring --setuser ceph since I am not root
Dec 05 10:00:13 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: ignoring --setgroup ceph since I am not root
Dec 05 10:00:14 np0005546420.localdomain ceph-mgr[286940]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Dec 05 10:00:14 np0005546420.localdomain ceph-mgr[286940]: pidfile_write: ignore empty --pid-file
Dec 05 10:00:14 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'alerts'
Dec 05 10:00:14 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 10:00:14 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'balancer'
Dec 05 10:00:14 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:14.144+0000 7fe910222140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 10:00:14 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 10:00:14 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'cephadm'
Dec 05 10:00:14 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:14.215+0000 7fe910222140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
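[Editor's note] The repeated "Module X has missing NOTIFY_TYPES member" warnings mean those bundled mgr modules do not declare which cluster notifications they consume, so the loader falls back to delivering everything. A hedged skeleton of a module that declares them; this only runs embedded inside ceph-mgr (mgr_module is not importable standalone), and the chosen notify types are illustrative.

```python
# Skeleton of a mgr module declaring NOTIFY_TYPES, per the warning above.
# Runs only inside ceph-mgr; shown for illustration.
from mgr_module import MgrModule, NotifyType

class Module(MgrModule):
    # Declare interest explicitly to avoid the loader warning.
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

    def notify(self, notify_type: NotifyType, notify_id: str) -> None:
        self.log.debug("got %s notification", notify_type)
```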
Dec 05 10:00:14 np0005546420.localdomain sshd[303239]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:00:14 np0005546420.localdomain sshd[303239]: Accepted publickey for ceph-admin from 192.168.122.108 port 54608 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 10:00:14 np0005546420.localdomain systemd-logind[762]: New session 70 of user ceph-admin.
Dec 05 10:00:14 np0005546420.localdomain systemd[1]: Started Session 70 of User ceph-admin.
Dec 05 10:00:14 np0005546420.localdomain sshd[303239]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 10:00:14 np0005546420.localdomain sudo[303243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:14 np0005546420.localdomain sudo[303243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:14 np0005546420.localdomain sudo[303243]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:14 np0005546420.localdomain sudo[303261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 10:00:14 np0005546420.localdomain sudo[303261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
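[Editor's note] The sudo line above shows how the orchestrator inventories a host: it ships a self-contained, digest-named cephadm script into /var/lib/ceph/<fsid>/ and runs `cephadm ... ls` as root over SSH. `cephadm ls` emits a JSON array of the daemons deployed on the host; a hedged sketch of consuming it (field names treated as assumptions):

```python
# Sketch of running `cephadm ls` and parsing its JSON daemon inventory.
import json
import subprocess

def list_ceph_daemons(cephadm: str = "cephadm") -> list[dict]:
    out = subprocess.run(["sudo", cephadm, "ls"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

# for d in list_ceph_daemons():
#     print(d.get("name"), d.get("state"))  # field names are assumptions
```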
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: mgrmap e36: np0005546420.aoeylc(active, since 92s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/464711851' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: Activating manager daemon np0005546421.sukfea
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: osdmap e90: 6 total, 6 up, 6 in
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/464711851' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: mgrmap e37: np0005546421.sukfea(active, starting, since 0.0513066s), standbys: np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mgr metadata", "who": "np0005546418.garyvl", "id": "np0005546418.garyvl"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mds metadata"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mon metadata"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: Manager daemon np0005546421.sukfea is now available
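[Editor's note] Lines 72274-72297 (by the markers above) capture a full mgr failover: an admin client dispatches `mgr fail`, the mons demote np0005546420.aoeylc and activate the standby np0005546421.sukfea (mgrmap e36 to e37), and the new active mgr re-reads mon/mds/osd metadata before being reported available. A sketch of driving and observing that sequence from the CLI; the `mgr stat` JSON field names are assumptions.

```python
# Sketch: trigger a mgr failover and poll until a new active mgr reports in.
import json
import subprocess
import time

def fail_active_mgr_and_wait(timeout: float = 60.0) -> str:
    subprocess.run(["ceph", "mgr", "fail"], check=True)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        stat = json.loads(subprocess.run(
            ["ceph", "mgr", "stat", "--format=json"],
            check=True, capture_output=True, text=True).stdout)
        if stat.get("available"):          # field names are assumptions
            return stat["active_name"]     # e.g. np0005546421.sukfea
        time.sleep(1)
    raise TimeoutError("no mgr became active")
```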
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: removing stray HostCache host record np0005546418.localdomain.devices.0
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"}]': finished
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"}]': finished
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/mirror_snapshot_schedule"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/mirror_snapshot_schedule"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/trash_purge_schedule"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/trash_purge_schedule"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2624193339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:00:14 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'crash'
Dec 05 10:00:14 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 10:00:14 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'dashboard'
Dec 05 10:00:14 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:14.903+0000 7fe910222140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 10:00:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:00:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:00:15 np0005546420.localdomain podman[303327]: 2025-12-05 10:00:15.314669195 +0000 UTC m=+0.144362701 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd)
Dec 05 10:00:15 np0005546420.localdomain podman[303327]: 2025-12-05 10:00:15.352650082 +0000 UTC m=+0.182343568 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 10:00:15 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
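Each "Started /usr/bin/podman healthcheck run <id>" followed by "Deactivated successfully" is a systemd transient unit firing the container's configured healthcheck; the podman events in between record the verdict (health_status=healthy, then exec_died for the check process). The same check can be run by hand; a sketch, assuming the container name multipathd from the event above:

    import subprocess

    # 'podman healthcheck run' executes the healthcheck defined in the
    # container config; exit status 0 means healthy, 1 means unhealthy.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0 else "unhealthy")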
Dec 05 10:00:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'devicehealth'
Dec 05 10:00:15 np0005546420.localdomain podman[303373]: 2025-12-05 10:00:15.424072969 +0000 UTC m=+0.085643215 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, distribution-scope=public, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4)
Dec 05 10:00:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 10:00:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'diskprediction_local'
Dec 05 10:00:15 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:15.486+0000 7fe910222140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 10:00:15 np0005546420.localdomain podman[303373]: 2025-12-05 10:00:15.530568524 +0000 UTC m=+0.192138990 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, version=7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git)
Dec 05 10:00:15 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 05 10:00:15 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 05 10:00:15 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]:   from numpy import show_config as show_numpy_config
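The three-line UserWarning above is scipy's import machinery reacting to ceph-mgr's design: every mgr module (here diskprediction_local) runs in its own Python sub-interpreter, which NumPy only partially supports. The module still loads; if the noise matters, the warning can be filtered before the import. A sketch:

    import warnings

    # Suppress only the sub-interpreter warning, not all UserWarnings;
    # the pattern matches the start of the warning text logged above.
    warnings.filterwarnings(
        "ignore",
        message="NumPy was imported from a Python sub-interpreter",
    )
    import scipy  # import proceeds without the warning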
Dec 05 10:00:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 10:00:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'influx'
Dec 05 10:00:15 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:15.629+0000 7fe910222140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 10:00:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:15.674 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:00:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:00:15.675 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:00:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 10:00:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'insights'
Dec 05 10:00:15 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:15.698+0000 7fe910222140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 10:00:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'iostat'
Dec 05 10:00:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 10:00:15 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:15.817+0000 7fe910222140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 10:00:15 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'k8sevents'
Dec 05 10:00:15 np0005546420.localdomain ceph-mon[298353]: mgrmap e38: np0005546421.sukfea(active, since 1.05362s), standbys: np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 10:00:15 np0005546420.localdomain ceph-mon[298353]: pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/4060403813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:00:15 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:10:00:15] ENGINE Bus STARTING
Dec 05 10:00:15 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:10:00:15] ENGINE Serving on https://172.18.0.108:7150
Dec 05 10:00:15 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:10:00:15] ENGINE Client ('172.18.0.108', 32990) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 10:00:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1917504463' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:00:15 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:10:00:15] ENGINE Serving on http://172.18.0.108:8765
Dec 05 10:00:15 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:10:00:15] ENGINE Bus STARTED
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'localpool'
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'mds_autoscaler'
Dec 05 10:00:16 np0005546420.localdomain sudo[303261]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:16 np0005546420.localdomain sudo[303493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:16 np0005546420.localdomain sudo[303493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:16 np0005546420.localdomain sudo[303493]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'mirroring'
Dec 05 10:00:16 np0005546420.localdomain sudo[303511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:00:16 np0005546420.localdomain sudo[303511]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
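These sudo entries show cephadm's remote-execution pattern: the active mgr stages a content-addressed copy of itself (cephadm.<sha256>) under /var/lib/ceph/<fsid>/ on each host and invokes subcommands such as gather-facts and list-networks through sudo as ceph-admin. Roughly the same call by hand, a sketch reusing the path and timeout from the log line above and assuming gather-facts prints a JSON document with a 'hostname' key:

    import json
    import subprocess

    # Same staged cephadm copy the orchestrator invokes above.
    cephadm = ("/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm."
               "a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")
    out = subprocess.run(
        ["sudo", "/bin/python3", cephadm, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True,
    ).stdout
    facts = json.loads(out)
    print(facts.get("hostname"))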
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'nfs'
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'orchestrator'
Dec 05 10:00:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:16.617+0000 7fe910222140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 10:00:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:16.783+0000 7fe910222140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'osd_perf_query'
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 10:00:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:16.853+0000 7fe910222140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'osd_support'
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 10:00:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:16.911+0000 7fe910222140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'pg_autoscaler'
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 10:00:16 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:16.979+0000 7fe910222140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 10:00:16 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'progress'
Dec 05 10:00:17 np0005546420.localdomain ceph-mon[298353]: pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2827031822' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:00:17 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 10:00:17 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:17.043+0000 7fe910222140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 10:00:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'prometheus'
Dec 05 10:00:17 np0005546420.localdomain sudo[303511]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:00:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:00:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:00:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:00:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:00:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18222 "" "Go-http-client/1.1"
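The two GET lines are the podman_exporter polling the libpod REST API served by the podman service (PID 240363) over the local socket. A sketch of the containers/json query using the podman-py bindings, assuming the default root socket path:

    # pip install podman  (the podman-py client bindings)
    from podman import PodmanClient

    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        # Equivalent of GET /libpod/containers/json?all=true in the log.
        for ctr in client.containers.list(all=True):
            print(ctr.name, ctr.status)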
Dec 05 10:00:17 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:17.350+0000 7fe910222140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 10:00:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 10:00:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'rbd_support'
Dec 05 10:00:17 np0005546420.localdomain sudo[303562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:17 np0005546420.localdomain sudo[303562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:17 np0005546420.localdomain sudo[303562]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 10:00:17 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:17.437+0000 7fe910222140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 10:00:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'restful'
Dec 05 10:00:17 np0005546420.localdomain sudo[303580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 10:00:17 np0005546420.localdomain sudo[303580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'rgw'
Dec 05 10:00:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 10:00:17 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:17.773+0000 7fe910222140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 10:00:17 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'rook'
Dec 05 10:00:17 np0005546420.localdomain sudo[303580]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: mgrmap e39: np0005546421.sukfea(active, since 3s), standbys: np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:00:18 np0005546420.localdomain sudo[303617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 10:00:18 np0005546420.localdomain sudo[303617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303617]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain sudo[303635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 10:00:18 np0005546420.localdomain sudo[303635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303635]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain sudo[303653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:00:18 np0005546420.localdomain sudo[303653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303653]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'selftest'
Dec 05 10:00:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:18.219+0000 7fe910222140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain sudo[303671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:18 np0005546420.localdomain sudo[303671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303671]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:18.284+0000 7fe910222140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'snap_schedule'
Dec 05 10:00:18 np0005546420.localdomain sudo[303689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:00:18 np0005546420.localdomain sudo[303689]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303689]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'stats'
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'status'
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'telegraf'
Dec 05 10:00:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:18.478+0000 7fe910222140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain sudo[303723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:00:18 np0005546420.localdomain sudo[303723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303723]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'telemetry'
Dec 05 10:00:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:18.536+0000 7fe910222140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain sudo[303741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:00:18 np0005546420.localdomain sudo[303741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303741]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain sudo[303759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 10:00:18 np0005546420.localdomain sudo[303759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303759]: pam_unix(sudo:session): session closed for user root
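The run of sudo entries from 303617 through 303759 is cephadm distributing a refreshed ceph.conf with a write-then-rename dance: stage the file under /tmp/cephadm-<fsid>/, fix ownership and mode while staged, then mv it into /etc/ceph so readers never observe a half-written config. The same pattern in Python, with hypothetical paths and placeholder contents:

    import os
    import shutil

    staged = "/tmp/cephadm-example/etc/ceph/ceph.conf.new"  # hypothetical path
    final = "/etc/ceph/ceph.conf"

    os.makedirs(os.path.dirname(staged), exist_ok=True)
    with open(staged, "w") as f:
        f.write("[global]\n")   # placeholder contents
    os.chown(staged, 0, 0)      # root:root, matching the chown -R 0:0 step
    os.chmod(staged, 0o644)     # keyrings get 0o600 instead, per later entries
    shutil.move(staged, final)  # the rename is atomic on the same filesystem

The keyring sequence further down repeats the same steps but ends with chmod 600, since ceph.client.admin.keyring must not be world-readable.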
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:18.669+0000 7fe910222140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'test_orchestrator'
Dec 05 10:00:18 np0005546420.localdomain sudo[303777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 10:00:18 np0005546420.localdomain sudo[303777]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303777]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain sudo[303795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:18.818+0000 7fe910222140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 10:00:18 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'volumes'
Dec 05 10:00:18 np0005546420.localdomain sudo[303795]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303795]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:00:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:00:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:00:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:00:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:00:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:00:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:00:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:00:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:00:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:00:18 np0005546420.localdomain sudo[303813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:00:18 np0005546420.localdomain sudo[303813]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303813]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:18 np0005546420.localdomain sudo[303831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:18 np0005546420.localdomain sudo[303831]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:18 np0005546420.localdomain sudo[303831]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
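The "Adjusting ... to 836.6M" and "Unable to set ... below minimum" pairs are mutually consistent: the autotuner computed 877246668 bytes per OSD, about 836.6 MiB, which is under the hard floor of 939524096 bytes (exactly 896 MiB), so each config set was rejected, matching the earlier "config rm ... osd_memory_target" dispatches that cleared the per-OSD overrides. Checking the arithmetic:

    # Verifies the figures in the surrounding log lines.
    target = 877246668               # bytes the autotuner wanted per OSD
    minimum = 939524096              # osd_memory_target hard floor
    print(round(target / 2**20, 1))  # 836.6  (matches "836.6M")
    print(minimum == 896 * 2**20)    # True: the floor is exactly 896 MiB
    print(target < minimum)          # True: hence "below minimum"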
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: mgrmap e40: np0005546421.sukfea(active, since 5s), standbys: np0005546418.garyvl, np0005546419.zhsnqq
Dec 05 10:00:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 10:00:19 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:19.021+0000 7fe910222140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 10:00:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Loading python module 'zabbix'
Dec 05 10:00:19 np0005546420.localdomain sudo[303849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:00:19 np0005546420.localdomain sudo[303849]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[303849]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain ceph-mgr[286940]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 10:00:19 np0005546420.localdomain ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546420-aoeylc[286936]: 2025-12-05T10:00:19.081+0000 7fe910222140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 10:00:19 np0005546420.localdomain ceph-mgr[286940]: ms_deliver_dispatch: unhandled message 0x562c1b911600 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Dec 05 10:00:19 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.108:6810/3248290592
Dec 05 10:00:19 np0005546420.localdomain sudo[303883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:00:19 np0005546420.localdomain sudo[303883]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[303883]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain sudo[303901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:00:19 np0005546420.localdomain sudo[303901]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[303901]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain sudo[303919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:19 np0005546420.localdomain sudo[303919]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[303919]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain sudo[303937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 10:00:19 np0005546420.localdomain sudo[303937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[303937]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain sudo[303955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 10:00:19 np0005546420.localdomain sudo[303955]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[303955]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain sudo[303973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 10:00:19 np0005546420.localdomain sudo[303973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[303973]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain sudo[303991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:19 np0005546420.localdomain sudo[303991]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[303991]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain sudo[304009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 10:00:19 np0005546420.localdomain sudo[304009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[304009]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain sudo[304043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 10:00:19 np0005546420.localdomain sudo[304043]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[304043]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain sudo[304061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 10:00:19 np0005546420.localdomain sudo[304061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:19 np0005546420.localdomain sudo[304061]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
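The recurring _set_new_cache_sizes line is the monitor's cache autotuning dividing its memory budget; the three allocations nearly exhaust cache_size, with a small remainder that is presumably allocator rounding slack. Checked numerically:

    # Sums the allocations printed by _set_new_cache_sizes above.
    cache_size = 1020054731
    allocs = {"inc": 348127232, "full": 348127232, "kv": 322961408}
    total = sum(allocs.values())
    print(total)               # 1019215872
    print(cache_size - total)  # 838859 bytes of slack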
Dec 05 10:00:20 np0005546420.localdomain sudo[304079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 10:00:20 np0005546420.localdomain sudo[304079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:20 np0005546420.localdomain sudo[304079]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:00:20 np0005546420.localdomain ceph-mon[298353]: Standby manager daemon np0005546420.aoeylc started
Dec 05 10:00:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 10:00:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 10:00:20 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 10:00:20 np0005546420.localdomain sudo[304097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 10:00:20 np0005546420.localdomain sudo[304097]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:20 np0005546420.localdomain sudo[304097]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:20 np0005546420.localdomain sudo[304115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 10:00:20 np0005546420.localdomain sudo[304115]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:20 np0005546420.localdomain sudo[304115]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:20 np0005546420.localdomain sudo[304133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 10:00:20 np0005546420.localdomain sudo[304133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:20 np0005546420.localdomain sudo[304133]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:20 np0005546420.localdomain sudo[304151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:20 np0005546420.localdomain sudo[304151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:20 np0005546420.localdomain sudo[304151]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:20 np0005546420.localdomain sudo[304169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 10:00:20 np0005546420.localdomain sudo[304169]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:20 np0005546420.localdomain sudo[304169]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:20 np0005546420.localdomain sudo[304203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 10:00:20 np0005546420.localdomain sudo[304203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:20 np0005546420.localdomain sudo[304203]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:20 np0005546420.localdomain sudo[304221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 10:00:20 np0005546420.localdomain sudo[304221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:20 np0005546420.localdomain sudo[304221]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:20 np0005546420.localdomain sudo[304239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 10:00:20 np0005546420.localdomain sudo[304239]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:20 np0005546420.localdomain sudo[304239]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:21 np0005546420.localdomain ceph-mon[298353]: pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:21 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 10:00:21 np0005546420.localdomain ceph-mon[298353]: mgrmap e41: np0005546421.sukfea(active, since 6s), standbys: np0005546418.garyvl, np0005546419.zhsnqq, np0005546420.aoeylc
Dec 05 10:00:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} : dispatch
Dec 05 10:00:21 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 10:00:21 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 10:00:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:00:21 np0005546420.localdomain sudo[304257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:00:21 np0005546420.localdomain sudo[304257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:21 np0005546420.localdomain sudo[304257]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:22 np0005546420.localdomain ceph-mon[298353]: pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 0 B/s wr, 19 op/s
Dec 05 10:00:22 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.0 (monmap changed)...
Dec 05 10:00:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 05 10:00:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:22 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 05 10:00:22 np0005546420.localdomain ceph-mon[298353]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 05 10:00:22 np0005546420.localdomain ceph-mon[298353]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
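CEPHADM_STRAY_DAEMON and CEPHADM_STRAY_HOST mean cephadm's inventory pass found a running daemon (and the host carrying it) that it did not deploy; during a rolling reconfiguration like the one in this log the condition is often transient. A sketch for inspecting it from an admin node:

    import subprocess

    # 'ceph health detail' names the stray daemon; 'ceph orch ps' lists
    # what cephadm believes it manages, for comparison.
    for cmd in (["ceph", "health", "detail"], ["ceph", "orch", "ps"]):
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)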
Dec 05 10:00:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:00:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:00:22 np0005546420.localdomain podman[304275]: 2025-12-05 10:00:22.529004612 +0000 UTC m=+0.099577084 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Dec 05 10:00:22 np0005546420.localdomain podman[304276]: 2025-12-05 10:00:22.577237275 +0000 UTC m=+0.147907800 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:00:22 np0005546420.localdomain podman[304275]: 2025-12-05 10:00:22.603775202 +0000 UTC m=+0.174347674 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Dec 05 10:00:22 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:00:22 np0005546420.localdomain podman[304276]: 2025-12-05 10:00:22.617651588 +0000 UTC m=+0.188322163 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:00:22 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:00:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:23 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.3 (monmap changed)...
Dec 05 10:00:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 05 10:00:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:23 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 0 B/s wr, 14 op/s
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
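In the auth get-or-create payloads the "caps" field is a flat array of alternating (subsystem, capability) pairs; for this MDS key the osd capability "allow rw tag cephfs *=*" restricts the key to pools tagged for CephFS. Pairing the entries up makes the structure clearer (a tiny sketch over the exact payload above):

    caps = ["mon", "profile mds",
            "osd", "allow rw tag cephfs *=*",
            "mds", "allow"]
    # The flat list alternates subsystem and capspec:
    paired = dict(zip(caps[::2], caps[1::2]))
    # {'mon': 'profile mds', 'osd': 'allow rw tag cephfs *=*', 'mds': 'allow'}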
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
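The _set_new_cache_sizes line comes from the monitor's memory autotuner, which periodically re-splits its budget between the rocksdb (kv) cache and the incremental/full osdmap caches. The figures are internally consistent: the two map caches plus the kv cache land just under the overall cache_size. A quick arithmetic check:

    cache_size = 1020054731              # ~0.95 GiB total budget
    inc_alloc = full_alloc = 348127232   # 332 MiB each for the osdmap caches
    kv_alloc = 322961408                 # 308 MiB for the rocksdb cache
    print(inc_alloc + full_alloc + kv_alloc)  # 1019215872, just under cache_size
    print(round(cache_size / 2**30, 2))       # 0.95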
Dec 05 10:00:25 np0005546420.localdomain sudo[304318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:25 np0005546420.localdomain sudo[304318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:25 np0005546420.localdomain sudo[304318]: pam_unix(sudo:session): session closed for user root
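The `which python3` probe is how the orchestrator checks, over its ceph-admin session, that a Python interpreter exists on the host before running anything else; the subsequent sudo entries then execute a content-addressed copy of cephadm from /var/lib/ceph/<fsid>/. If the filename suffix is, as it appears, the SHA-256 of the file itself, that can be verified locally (a hypothetical check, run as root, path copied from the log):

    import hashlib
    from pathlib import Path

    path = Path("/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/"
                "cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    print(digest == path.name.split(".", 1)[1])  # expected: True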
Dec 05 10:00:25 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 05 10:00:25 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 05 10:00:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 10:00:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 10:00:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:25 np0005546420.localdomain sudo[304336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:25 np0005546420.localdomain sudo[304336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)

Dec 05 10:00:25 np0005546420.localdomain podman[304371]: 2025-12-05 10:00:25.739275712 +0000 UTC m=+0.080450026 container create 820a815746412eea4d3ccfd2c682e67790448afde33b772d74d3fa7b9d8c76e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_spence, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, RELEASE=main, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, name=rhceph, GIT_CLEAN=True, ceph=True, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 10:00:25 np0005546420.localdomain systemd[1]: Started libpod-conmon-820a815746412eea4d3ccfd2c682e67790448afde33b772d74d3fa7b9d8c76e2.scope.
Dec 05 10:00:25 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:00:25 np0005546420.localdomain podman[304371]: 2025-12-05 10:00:25.707315618 +0000 UTC m=+0.048489982 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 10:00:25 np0005546420.localdomain podman[304371]: 2025-12-05 10:00:25.820091797 +0000 UTC m=+0.161266121 container init 820a815746412eea4d3ccfd2c682e67790448afde33b772d74d3fa7b9d8c76e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_spence, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-11-26T19:44:28Z, name=rhceph, release=1763362218, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc.)
Dec 05 10:00:25 np0005546420.localdomain podman[304371]: 2025-12-05 10:00:25.830748794 +0000 UTC m=+0.171923118 container start 820a815746412eea4d3ccfd2c682e67790448afde33b772d74d3fa7b9d8c76e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_spence, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.41.4, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.openshift.tags=rhceph ceph)
Dec 05 10:00:25 np0005546420.localdomain podman[304371]: 2025-12-05 10:00:25.831200969 +0000 UTC m=+0.172375333 container attach 820a815746412eea4d3ccfd2c682e67790448afde33b772d74d3fa7b9d8c76e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_spence, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 05 10:00:25 np0005546420.localdomain elated_spence[304386]: 167 167
Dec 05 10:00:25 np0005546420.localdomain systemd[1]: libpod-820a815746412eea4d3ccfd2c682e67790448afde33b772d74d3fa7b9d8c76e2.scope: Deactivated successfully.
Dec 05 10:00:25 np0005546420.localdomain podman[304371]: 2025-12-05 10:00:25.835623444 +0000 UTC m=+0.176797828 container died 820a815746412eea4d3ccfd2c682e67790448afde33b772d74d3fa7b9d8c76e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_spence, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, release=1763362218, RELEASE=main, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 10:00:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-2b21e86bc4319b65e30b8db68bc5d0c03d330bcd94d979929d677e7438f34b3b-merged.mount: Deactivated successfully.
Dec 05 10:00:25 np0005546420.localdomain podman[304391]: 2025-12-05 10:00:25.934306198 +0000 UTC m=+0.090708710 container remove 820a815746412eea4d3ccfd2c682e67790448afde33b772d74d3fa7b9d8c76e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_spence, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, RELEASE=main, version=7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 10:00:25 np0005546420.localdomain systemd[1]: libpod-conmon-820a815746412eea4d3ccfd2c682e67790448afde33b772d74d3fa7b9d8c76e2.scope: Deactivated successfully.
Dec 05 10:00:25 np0005546420.localdomain sudo[304336]: pam_unix(sudo:session): session closed for user root
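Each _orch deploy run above launches a short-lived rhceph helper container (podman's random name, here elated_spence) whose full lifecycle is visible in the journal: create, init, start, attach, died, remove, bracketed by the conmon and libcrun scopes. The "167 167" it prints is the uid/gid pair of the ceph user inside the image, which cephadm appears to read so it can chown daemon files on the host to match. A sketch reproducing that probe with the image reference from the log (the exact stat invocation is an assumption):

    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm",
         "registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # expected: "167 167"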
Dec 05 10:00:26 np0005546420.localdomain sudo[304408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:26 np0005546420.localdomain sudo[304408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:00:26 np0005546420.localdomain sudo[304408]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:26 np0005546420.localdomain sudo[304427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:26 np0005546420.localdomain sudo[304427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:26 np0005546420.localdomain podman[304426]: 2025-12-05 10:00:26.231820718 +0000 UTC m=+0.081962802 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:00:26 np0005546420.localdomain ceph-mon[298353]: pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 0 B/s wr, 11 op/s
Dec 05 10:00:26 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 05 10:00:26 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 05 10:00:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 05 10:00:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:26 np0005546420.localdomain podman[304426]: 2025-12-05 10:00:26.283671312 +0000 UTC m=+0.133813356 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 10:00:26 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
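Unlike the earlier exec_died entries, this pair shows the full healthcheck event for ovn_controller, including health_status=healthy. The "Started /usr/bin/podman healthcheck run <id>" line a few entries up is the trigger side: podman registers a transient systemd timer/service pair per container, named after the container ID. A hypothetical way to look the timer up (ID copied from the log):

    import subprocess

    cid = "d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0"
    subprocess.run(["systemctl", "list-timers", f"{cid}.timer", "--no-pager"])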
Dec 05 10:00:26 np0005546420.localdomain podman[304486]: 2025-12-05 10:00:26.679153893 +0000 UTC m=+0.080969421 container create 7efb46b86f9b963d8b1d858f697570f34435a8054776a78f61d8e29375630856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_goldstine, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, RELEASE=main, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 10:00:26 np0005546420.localdomain systemd[1]: Started libpod-conmon-7efb46b86f9b963d8b1d858f697570f34435a8054776a78f61d8e29375630856.scope.
Dec 05 10:00:26 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:00:26 np0005546420.localdomain podman[304486]: 2025-12-05 10:00:26.742721758 +0000 UTC m=+0.144537326 container init 7efb46b86f9b963d8b1d858f697570f34435a8054776a78f61d8e29375630856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_goldstine, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 10:00:26 np0005546420.localdomain podman[304486]: 2025-12-05 10:00:26.646146188 +0000 UTC m=+0.047961736 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 10:00:26 np0005546420.localdomain podman[304486]: 2025-12-05 10:00:26.754000115 +0000 UTC m=+0.155815673 container start 7efb46b86f9b963d8b1d858f697570f34435a8054776a78f61d8e29375630856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_goldstine, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4)
Dec 05 10:00:26 np0005546420.localdomain podman[304486]: 2025-12-05 10:00:26.754233882 +0000 UTC m=+0.156049410 container attach 7efb46b86f9b963d8b1d858f697570f34435a8054776a78f61d8e29375630856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_goldstine, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=)
Dec 05 10:00:26 np0005546420.localdomain quirky_goldstine[304501]: 167 167
Dec 05 10:00:26 np0005546420.localdomain systemd[1]: libpod-7efb46b86f9b963d8b1d858f697570f34435a8054776a78f61d8e29375630856.scope: Deactivated successfully.
Dec 05 10:00:26 np0005546420.localdomain podman[304486]: 2025-12-05 10:00:26.758465633 +0000 UTC m=+0.160281221 container died 7efb46b86f9b963d8b1d858f697570f34435a8054776a78f61d8e29375630856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_goldstine, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4)
Dec 05 10:00:26 np0005546420.localdomain systemd[1]: tmp-crun.3UYT8P.mount: Deactivated successfully.
Dec 05 10:00:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-0253bb1e8e92ee1d7a69764f62b61991e8091a82d67ee9e15f019b047e1c7f37-merged.mount: Deactivated successfully.
Dec 05 10:00:26 np0005546420.localdomain podman[304506]: 2025-12-05 10:00:26.856794216 +0000 UTC m=+0.089546764 container remove 7efb46b86f9b963d8b1d858f697570f34435a8054776a78f61d8e29375630856 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_goldstine, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, release=1763362218, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph)
Dec 05 10:00:26 np0005546420.localdomain systemd[1]: libpod-conmon-7efb46b86f9b963d8b1d858f697570f34435a8054776a78f61d8e29375630856.scope: Deactivated successfully.
Dec 05 10:00:27 np0005546420.localdomain sudo[304427]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:27 np0005546420.localdomain sudo[304530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:27 np0005546420.localdomain sudo[304530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:27 np0005546420.localdomain sudo[304530]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:27 np0005546420.localdomain sudo[304548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:27 np0005546420.localdomain sudo[304548]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:27 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.1 (monmap changed)...
Dec 05 10:00:27 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 05 10:00:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 05 10:00:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:27 np0005546420.localdomain podman[304581]: 2025-12-05 10:00:27.75413351 +0000 UTC m=+0.084910172 container create 41ef3ebc663b3bba26fe5107b33262b11d44ac5288c107fac10a6b31b8491609 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_bassi, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, version=7, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 10:00:27 np0005546420.localdomain systemd[1]: Started libpod-conmon-41ef3ebc663b3bba26fe5107b33262b11d44ac5288c107fac10a6b31b8491609.scope.
Dec 05 10:00:27 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:00:27 np0005546420.localdomain podman[304581]: 2025-12-05 10:00:27.718148404 +0000 UTC m=+0.048925116 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 10:00:27 np0005546420.localdomain podman[304581]: 2025-12-05 10:00:27.825883317 +0000 UTC m=+0.156659979 container init 41ef3ebc663b3bba26fe5107b33262b11d44ac5288c107fac10a6b31b8491609 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_bassi, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, ceph=True, name=rhceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 05 10:00:27 np0005546420.localdomain systemd[1]: tmp-crun.vX6wn0.mount: Deactivated successfully.
Dec 05 10:00:27 np0005546420.localdomain podman[304581]: 2025-12-05 10:00:27.840395413 +0000 UTC m=+0.171172075 container start 41ef3ebc663b3bba26fe5107b33262b11d44ac5288c107fac10a6b31b8491609 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_bassi, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_CLEAN=True, version=7, release=1763362218, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Dec 05 10:00:27 np0005546420.localdomain podman[304581]: 2025-12-05 10:00:27.840679872 +0000 UTC m=+0.171456524 container attach 41ef3ebc663b3bba26fe5107b33262b11d44ac5288c107fac10a6b31b8491609 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_bassi, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vendor=Red Hat, Inc., architecture=x86_64, version=7, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:00:27 np0005546420.localdomain unruffled_bassi[304596]: 167 167
Dec 05 10:00:27 np0005546420.localdomain systemd[1]: libpod-41ef3ebc663b3bba26fe5107b33262b11d44ac5288c107fac10a6b31b8491609.scope: Deactivated successfully.
Dec 05 10:00:27 np0005546420.localdomain podman[304581]: 2025-12-05 10:00:27.843985623 +0000 UTC m=+0.174762275 container died 41ef3ebc663b3bba26fe5107b33262b11d44ac5288c107fac10a6b31b8491609 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_bassi, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, ceph=True, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 05 10:00:27 np0005546420.localdomain podman[304601]: 2025-12-05 10:00:27.945430243 +0000 UTC m=+0.089695600 container remove 41ef3ebc663b3bba26fe5107b33262b11d44ac5288c107fac10a6b31b8491609 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_bassi, io.buildah.version=1.41.4, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, RELEASE=main)
Dec 05 10:00:27 np0005546420.localdomain systemd[1]: libpod-conmon-41ef3ebc663b3bba26fe5107b33262b11d44ac5288c107fac10a6b31b8491609.scope: Deactivated successfully.
Dec 05 10:00:28 np0005546420.localdomain sudo[304548]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:28 np0005546420.localdomain sudo[304624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:28 np0005546420.localdomain sudo[304624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:28 np0005546420.localdomain sudo[304624]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:28 np0005546420.localdomain ceph-mon[298353]: pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 10:00:28 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.4 (monmap changed)...
Dec 05 10:00:28 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 05 10:00:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 10:00:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 05 10:00:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:28 np0005546420.localdomain sudo[304642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:28 np0005546420.localdomain sudo[304642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-5eea866acf442859adfb91ec6d03220bab2676a44ce418a7dbf3664cb05d7c27-merged.mount: Deactivated successfully.
Dec 05 10:00:28 np0005546420.localdomain podman[304676]: 2025-12-05 10:00:28.859248213 +0000 UTC m=+0.111640133 container create 6db4b9c153a6f756f3b78f4c84421783d6693386d332b65c7b6b457c07b3f53c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_banzai, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, ceph=True, distribution-scope=public, CEPH_POINT_RELEASE=, release=1763362218, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4)
Dec 05 10:00:28 np0005546420.localdomain podman[304676]: 2025-12-05 10:00:28.794604735 +0000 UTC m=+0.046996705 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 10:00:28 np0005546420.localdomain systemd[1]: Started libpod-conmon-6db4b9c153a6f756f3b78f4c84421783d6693386d332b65c7b6b457c07b3f53c.scope.
Dec 05 10:00:28 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:00:28 np0005546420.localdomain podman[304676]: 2025-12-05 10:00:28.932317981 +0000 UTC m=+0.184709931 container init 6db4b9c153a6f756f3b78f4c84421783d6693386d332b65c7b6b457c07b3f53c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_banzai, io.buildah.version=1.41.4, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, RELEASE=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, vcs-type=git, release=1763362218)
Dec 05 10:00:28 np0005546420.localdomain podman[304676]: 2025-12-05 10:00:28.943876856 +0000 UTC m=+0.196268796 container start 6db4b9c153a6f756f3b78f4c84421783d6693386d332b65c7b6b457c07b3f53c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_banzai, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7)
Dec 05 10:00:28 np0005546420.localdomain podman[304676]: 2025-12-05 10:00:28.944293789 +0000 UTC m=+0.196685779 container attach 6db4b9c153a6f756f3b78f4c84421783d6693386d332b65c7b6b457c07b3f53c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_banzai, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, distribution-scope=public, release=1763362218, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph)
Dec 05 10:00:28 np0005546420.localdomain agitated_banzai[304690]: 167 167
Dec 05 10:00:28 np0005546420.localdomain systemd[1]: libpod-6db4b9c153a6f756f3b78f4c84421783d6693386d332b65c7b6b457c07b3f53c.scope: Deactivated successfully.
Dec 05 10:00:28 np0005546420.localdomain podman[304676]: 2025-12-05 10:00:28.947778626 +0000 UTC m=+0.200170566 container died 6db4b9c153a6f756f3b78f4c84421783d6693386d332b65c7b6b457c07b3f53c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_banzai, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 10:00:29 np0005546420.localdomain podman[304695]: 2025-12-05 10:00:29.065389853 +0000 UTC m=+0.103476773 container remove 6db4b9c153a6f756f3b78f4c84421783d6693386d332b65c7b6b457c07b3f53c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_banzai, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., release=1763362218, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 05 10:00:29 np0005546420.localdomain systemd[1]: libpod-conmon-6db4b9c153a6f756f3b78f4c84421783d6693386d332b65c7b6b457c07b3f53c.scope: Deactivated successfully.
Dec 05 10:00:29 np0005546420.localdomain sudo[304642]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:29 np0005546420.localdomain sudo[304711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:29 np0005546420.localdomain sudo[304711]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:29 np0005546420.localdomain sudo[304711]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:29 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 05 10:00:29 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 05 10:00:29 np0005546420.localdomain ceph-mon[298353]: from='client.44532 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 10:00:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 10:00:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 05 10:00:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:00:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:29 np0005546420.localdomain sudo[304729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:29 np0005546420.localdomain sudo[304729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:29 np0005546420.localdomain systemd[1]: tmp-crun.zVLys1.mount: Deactivated successfully.
Dec 05 10:00:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-03f59382e1b1663f7b6bb288c5eb680b47517fca5c05ac0070bfad71111bd2da-merged.mount: Deactivated successfully.
Dec 05 10:00:29 np0005546420.localdomain podman[304763]: 2025-12-05 10:00:29.817261133 +0000 UTC m=+0.079669800 container create eebe013f026a0380fcc1016139ac146dc04970114c34066860b19041cebc2b8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_shannon, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True)
Dec 05 10:00:29 np0005546420.localdomain systemd[1]: Started libpod-conmon-eebe013f026a0380fcc1016139ac146dc04970114c34066860b19041cebc2b8f.scope.
Dec 05 10:00:29 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:00:29 np0005546420.localdomain podman[304763]: 2025-12-05 10:00:29.884433849 +0000 UTC m=+0.146842506 container init eebe013f026a0380fcc1016139ac146dc04970114c34066860b19041cebc2b8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_shannon, version=7, release=1763362218, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.tags=rhceph ceph)
Dec 05 10:00:29 np0005546420.localdomain podman[304763]: 2025-12-05 10:00:29.78654963 +0000 UTC m=+0.048958347 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 10:00:29 np0005546420.localdomain podman[304763]: 2025-12-05 10:00:29.893749936 +0000 UTC m=+0.156158593 container start eebe013f026a0380fcc1016139ac146dc04970114c34066860b19041cebc2b8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_shannon, distribution-scope=public, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, name=rhceph)
Dec 05 10:00:29 np0005546420.localdomain podman[304763]: 2025-12-05 10:00:29.894009294 +0000 UTC m=+0.156418011 container attach eebe013f026a0380fcc1016139ac146dc04970114c34066860b19041cebc2b8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_shannon, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, architecture=x86_64, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, ceph=True, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 10:00:29 np0005546420.localdomain infallible_shannon[304778]: 167 167
Dec 05 10:00:29 np0005546420.localdomain systemd[1]: libpod-eebe013f026a0380fcc1016139ac146dc04970114c34066860b19041cebc2b8f.scope: Deactivated successfully.
Dec 05 10:00:29 np0005546420.localdomain podman[304763]: 2025-12-05 10:00:29.896506261 +0000 UTC m=+0.158914978 container died eebe013f026a0380fcc1016139ac146dc04970114c34066860b19041cebc2b8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_shannon, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph)
Dec 05 10:00:29 np0005546420.localdomain podman[304783]: 2025-12-05 10:00:29.989866561 +0000 UTC m=+0.086038337 container remove eebe013f026a0380fcc1016139ac146dc04970114c34066860b19041cebc2b8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_shannon, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, name=rhceph, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, release=1763362218, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z)
Dec 05 10:00:29 np0005546420.localdomain systemd[1]: libpod-conmon-eebe013f026a0380fcc1016139ac146dc04970114c34066860b19041cebc2b8f.scope: Deactivated successfully.
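The create/init/start/attach/died/remove sequence above is one short-lived helper container launched from the rhceph image; its only output, "167 167", is consistent with cephadm probing the image for the ceph UID/GID pair (167 is the ceph user and group in the RHCS image; that reading is an inference, the log itself does not say why the container ran). A sketch that groups the podman events by container ID to reconstruct such lifecycles (event names and ID format taken from the lines above):

    # Sketch: reconstruct per-container podman event lifecycles from journal text.
    import re
    import sys
    from collections import defaultdict

    # Matches e.g. "container start <64-hex-id> (image=..., name=..., ...)"
    EVENT = re.compile(r"container (\w+) ([0-9a-f]{64})")

    timelines = defaultdict(list)
    for line in sys.stdin:
        m = EVENT.search(line)
        if m:
            event, cid = m.groups()
            timelines[cid].append(event)

    for cid, events in timelines.items():
        print(cid[:12], "->", " -> ".join(events))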
Dec 05 10:00:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
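For scale, the _set_new_cache_sizes figures above convert to roughly 0.95 GiB of mon cache, with inc_alloc and full_alloc at exactly 332 MiB and kv_alloc at exactly 308 MiB; a quick check:

    # Sketch: convert the byte counts from the _set_new_cache_sizes line to MiB/GiB.
    for name, n in [("cache_size", 1020054731),
                    ("inc_alloc", 348127232),
                    ("full_alloc", 348127232),
                    ("kv_alloc", 322961408)]:
        print(f"{name}: {n / 2**20:.1f} MiB ({n / 2**30:.2f} GiB)")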
Dec 05 10:00:30 np0005546420.localdomain sudo[304729]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:30 np0005546420.localdomain ceph-mon[298353]: pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 10:00:30 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 05 10:00:30 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 05 10:00:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 10:00:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 05 10:00:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
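The dispatches above show the active mgr provisioning the crash agent on np0005546421 as part of its reconfigure: "auth get-or-create" mints (or fetches) the client.crash key with the crash profile on both mon and mgr, and "config generate-minimal-conf" renders the minimal ceph.conf shipped alongside it. From an admin shell the equivalent would be: ceph auth get-or-create client.crash.np0005546421.localdomain mon 'profile crash' mgr 'profile crash'. A sketch of the same mon command payload as Python data (structure copied from the cmd= JSON above):

    # Sketch: the mon command JSON seen in the audit line, built as Python data.
    import json

    cmd = {
        "prefix": "auth get-or-create",
        "entity": "client.crash.np0005546421.localdomain",
        # caps are alternating key/value pairs, exactly as in the log line
        "caps": ["mon", "profile crash", "mgr", "profile crash"],
    }
    print(json.dumps(cmd))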
Dec 05 10:00:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-aebd0eb061ff57225ce8e2a4e8dfaed52d5c3643b23a8b17b492daf5ccec5d2b-merged.mount: Deactivated successfully.
Dec 05 10:00:31 np0005546420.localdomain ceph-mon[298353]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 05 10:00:31 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 05 10:00:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 05 10:00:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: from='client.44538 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: Saving service mon spec with placement label:mon
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: Reconfiguring osd.2 (monmap changed)...
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 05 10:00:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:33 np0005546420.localdomain ceph-mon[298353]: from='client.44544 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546421", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 10:00:33 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 05 10:00:34 np0005546420.localdomain ceph-mon[298353]: pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
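The recurring pgmap lines are the mon's cluster summary: all 177 PGs active+clean, 105 MiB of logical data occupying 548 MiB raw, 41 GiB of 42 GiB free, with client I/O rates appended when nonzero. A sketch parsing that summary shape (regex written against these lines; parsing is illustrative):

    # Sketch: parse the pgmap summary lines into fields.
    import re
    import sys

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    for line in sys.stdin:
        m = PGMAP.search(line)
        if m:
            print(m.group("ver"), m.group("pgs"), m.group("states").strip(),
                  m.group("used"), "used of", m.group("total"))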
Dec 05 10:00:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:34 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mon.np0005546421 (monmap changed)...
Dec 05 10:00:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 10:00:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 10:00:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:34 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 05 10:00:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:00:34 np0005546420.localdomain systemd[1]: tmp-crun.nSwI61.mount: Deactivated successfully.
Dec 05 10:00:34 np0005546420.localdomain podman[304800]: 2025-12-05 10:00:34.522158104 +0000 UTC m=+0.096289792 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:00:34 np0005546420.localdomain podman[304800]: 2025-12-05 10:00:34.559928265 +0000 UTC m=+0.134059923 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute)
Dec 05 10:00:34 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
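Each healthcheck in this log follows the same systemd pattern: a transient unit named after the container ID runs "podman healthcheck run", podman records a container health_status event (healthy here) and then exec_died for the probe process, and the unit deactivates. A sketch pairing those events into one line per run (regex written against the entries above; parsing is illustrative):

    # Sketch: summarize podman health_status events per container from journal text.
    import re
    import sys

    HEALTH = re.compile(
        r"container health_status ([0-9a-f]{12})[0-9a-f]*"
        r" \(image=([^,]+), name=([^,]+),.*?health_status=(\w+)"
    )

    for line in sys.stdin:
        m = HEALTH.search(line)
        if m:
            cid, image, name, status = m.groups()
            print(f"{cid} {name} ({image}): {status}")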
Dec 05 10:00:34 np0005546420.localdomain sudo[304821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:00:34 np0005546420.localdomain sudo[304821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:34 np0005546420.localdomain sudo[304821]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:35 np0005546420.localdomain ceph-mon[298353]: mgrmap e42: np0005546421.sukfea(active, since 21s), standbys: np0005546419.zhsnqq, np0005546420.aoeylc
Dec 05 10:00:36 np0005546420.localdomain sudo[304839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:00:36 np0005546420.localdomain sudo[304839]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:36 np0005546420.localdomain sudo[304839]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:36 np0005546420.localdomain sudo[304857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:00:36 np0005546420.localdomain sudo[304857]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:00:36 np0005546420.localdomain podman[304893]: 2025-12-05 10:00:36.521119234 +0000 UTC m=+0.073665317 container create 72fc12b104452e3893855a1e44e34f4c79f0891fbf127ae481fd8c37714537f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_jemison, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, release=1763362218, RELEASE=main, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=)
Dec 05 10:00:36 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mon.np0005546419 (monmap changed)...
Dec 05 10:00:36 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 05 10:00:36 np0005546420.localdomain ceph-mon[298353]: pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 05 10:00:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 05 10:00:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:00:36 np0005546420.localdomain systemd[1]: Started libpod-conmon-72fc12b104452e3893855a1e44e34f4c79f0891fbf127ae481fd8c37714537f5.scope.
Dec 05 10:00:36 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:00:36 np0005546420.localdomain podman[304893]: 2025-12-05 10:00:36.493303058 +0000 UTC m=+0.045849232 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 10:00:36 np0005546420.localdomain podman[304893]: 2025-12-05 10:00:36.599305957 +0000 UTC m=+0.151852040 container init 72fc12b104452e3893855a1e44e34f4c79f0891fbf127ae481fd8c37714537f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_jemison, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, release=1763362218)
Dec 05 10:00:36 np0005546420.localdomain podman[304893]: 2025-12-05 10:00:36.608682646 +0000 UTC m=+0.161228719 container start 72fc12b104452e3893855a1e44e34f4c79f0891fbf127ae481fd8c37714537f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_jemison, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, release=1763362218, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z)
Dec 05 10:00:36 np0005546420.localdomain podman[304893]: 2025-12-05 10:00:36.60912011 +0000 UTC m=+0.161666233 container attach 72fc12b104452e3893855a1e44e34f4c79f0891fbf127ae481fd8c37714537f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_jemison, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, vendor=Red Hat, Inc.)
Dec 05 10:00:36 np0005546420.localdomain crazy_jemison[304908]: 167 167
Dec 05 10:00:36 np0005546420.localdomain podman[304893]: 2025-12-05 10:00:36.61400338 +0000 UTC m=+0.166549513 container died 72fc12b104452e3893855a1e44e34f4c79f0891fbf127ae481fd8c37714537f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_jemison, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, release=1763362218, ceph=True, GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 05 10:00:36 np0005546420.localdomain systemd[1]: libpod-72fc12b104452e3893855a1e44e34f4c79f0891fbf127ae481fd8c37714537f5.scope: Deactivated successfully.
Dec 05 10:00:36 np0005546420.localdomain podman[304913]: 2025-12-05 10:00:36.710135286 +0000 UTC m=+0.081656132 container remove 72fc12b104452e3893855a1e44e34f4c79f0891fbf127ae481fd8c37714537f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_jemison, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, build-date=2025-11-26T19:44:28Z, vcs-type=git, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_BRANCH=main)
Dec 05 10:00:36 np0005546420.localdomain systemd[1]: libpod-conmon-72fc12b104452e3893855a1e44e34f4c79f0891fbf127ae481fd8c37714537f5.scope: Deactivated successfully.
Dec 05 10:00:36 np0005546420.localdomain sudo[304857]: pam_unix(sudo:session): session closed for user root
Dec 05 10:00:37 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a530e321181498126ce6092f4519cd11ac9731f6e4bfc55c83e60648d5e792ef-merged.mount: Deactivated successfully.
Dec 05 10:00:37 np0005546420.localdomain ceph-mon[298353]: Reconfiguring mon.np0005546420 (monmap changed)...
Dec 05 10:00:37 np0005546420.localdomain ceph-mon[298353]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 05 10:00:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:38 np0005546420.localdomain ceph-mon[298353]: pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:00:40 np0005546420.localdomain ceph-mon[298353]: pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:00:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:00:41 np0005546420.localdomain podman[304929]: 2025-12-05 10:00:41.519695005 +0000 UTC m=+0.092752214 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 10:00:41 np0005546420.localdomain podman[304929]: 2025-12-05 10:00:41.557435195 +0000 UTC m=+0.130492424 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:00:41 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:00:42 np0005546420.localdomain ceph-mon[298353]: pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:00:43 np0005546420.localdomain podman[304948]: 2025-12-05 10:00:43.496280296 +0000 UTC m=+0.075207423 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:00:43 np0005546420.localdomain podman[304948]: 2025-12-05 10:00:43.510329398 +0000 UTC m=+0.089256595 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:00:43 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:00:44 np0005546420.localdomain ceph-mon[298353]: pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:00:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:00:45 np0005546420.localdomain podman[304973]: 2025-12-05 10:00:45.505381559 +0000 UTC m=+0.079978051 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 05 10:00:45 np0005546420.localdomain podman[304973]: 2025-12-05 10:00:45.516619775 +0000 UTC m=+0.091216307 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:00:45 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:00:45 np0005546420.localdomain ceph-mon[298353]: pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:00:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:00:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:00:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:00:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:00:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18222 "" "Go-http-client/1.1"
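The two GET requests above are the libpod REST API being polled through the podman socket; the podman_exporter configured below runs with CONTAINER_HOST=unix:///run/podman/podman.sock, which matches the Go-http-client user agent seen here. A stdlib-only sketch of the same containers/json query over that UNIX socket (socket path assumed from the exporter config; query string trimmed to the essentials):

    # Sketch: GET /v4.9.3/libpod/containers/json over the podman UNIX socket.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")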
Dec 05 10:00:47 np0005546420.localdomain ceph-mon[298353]: pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:00:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:00:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:00:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:00:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:00:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:00:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:00:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:00:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:00:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
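The exporter errors above mean its appctl calls found no control sockets to talk to: ovsdb-server and ovn-northd expose *.ctl files in their run directories, and none are visible where this process looks (its container config below mounts /var/run/openvswitch and /var/lib/openvswitch/ovn). A quick host-side check for the expected sockets (paths assumed from those volume mounts; adjust per deployment):

    # Sketch: look for the OVS/OVN control sockets the exporter's appctl calls need.
    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "none found")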
Dec 05 10:00:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:00:50 np0005546420.localdomain ceph-mon[298353]: pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:52 np0005546420.localdomain ceph-mon[298353]: pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:00:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:00:53 np0005546420.localdomain systemd[1]: tmp-crun.GRFARF.mount: Deactivated successfully.
Dec 05 10:00:53 np0005546420.localdomain podman[304992]: 2025-12-05 10:00:53.522589905 +0000 UTC m=+0.097434316 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, architecture=x86_64)
Dec 05 10:00:53 np0005546420.localdomain systemd[1]: tmp-crun.IiDxUn.mount: Deactivated successfully.
Dec 05 10:00:53 np0005546420.localdomain podman[304993]: 2025-12-05 10:00:53.568120346 +0000 UTC m=+0.142328308 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:00:53 np0005546420.localdomain podman[304993]: 2025-12-05 10:00:53.580532047 +0000 UTC m=+0.154740049 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:00:53 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:00:53 np0005546420.localdomain podman[304992]: 2025-12-05 10:00:53.637132987 +0000 UTC m=+0.211977398 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350)
Dec 05 10:00:53 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:00:54 np0005546420.localdomain ceph-mon[298353]: pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:00:55 np0005546420.localdomain ceph-mon[298353]: pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:00:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:00:56 np0005546420.localdomain podman[305034]: 2025-12-05 10:00:56.5037715 +0000 UTC m=+0.083089746 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 05 10:00:56 np0005546420.localdomain podman[305034]: 2025-12-05 10:00:56.60456824 +0000 UTC m=+0.183886476 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:00:56 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:00:58 np0005546420.localdomain ceph-mon[298353]: pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:00 np0005546420.localdomain ceph-mon[298353]: pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:01 np0005546420.localdomain CROND[305060]: (root) CMD (run-parts /etc/cron.hourly)
Dec 05 10:01:01 np0005546420.localdomain run-parts[305063]: (/etc/cron.hourly) starting 0anacron
Dec 05 10:01:01 np0005546420.localdomain run-parts[305069]: (/etc/cron.hourly) finished 0anacron
Dec 05 10:01:01 np0005546420.localdomain CROND[305059]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 05 10:01:02 np0005546420.localdomain ceph-mon[298353]: pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:01:04.121 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:01:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:01:04.122 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:01:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:01:04.123 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:01:04 np0005546420.localdomain ceph-mon[298353]: pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1958821480' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:01:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1958821480' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:01:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:01:05 np0005546420.localdomain podman[305070]: 2025-12-05 10:01:05.498581509 +0000 UTC m=+0.078480845 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:01:05 np0005546420.localdomain podman[305070]: 2025-12-05 10:01:05.513441256 +0000 UTC m=+0.093340642 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:01:05 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:01:05 np0005546420.localdomain ceph-mon[298353]: pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:06.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:01:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:06.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:01:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:07.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:01:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:07.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:01:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:07.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:01:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:07.898 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:01:08 np0005546420.localdomain ceph-mon[298353]: pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:08.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:01:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:09.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:01:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:10 np0005546420.localdomain ceph-mon[298353]: pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:10.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:01:11 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/3219937789' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 05 10:01:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:01:12 np0005546420.localdomain podman[305089]: 2025-12-05 10:01:12.501501014 +0000 UTC m=+0.078890467 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:01:12 np0005546420.localdomain podman[305089]: 2025-12-05 10:01:12.507455747 +0000 UTC m=+0.084845220 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:01:12 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:01:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:12.866 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:01:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:12.886 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:01:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:12.901 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:01:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:12.902 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:01:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:12.902 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:01:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:12.902 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:01:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:12.903 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:01:12 np0005546420.localdomain ceph-mon[298353]: pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:12 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3194464759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:01:12 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/57977895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:01:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:01:13 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3015480187' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:01:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:13.371 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:01:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:13.539 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:01:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:13.540 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12332MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:01:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:13.541 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:01:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:13.541 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:01:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:13.602 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:01:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:13.602 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:01:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:13.621 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:01:13 np0005546420.localdomain ceph-mon[298353]: pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:13 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3015480187' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:01:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:01:14 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2258204410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:01:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:14.055 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:01:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:14.089 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:01:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:14.110 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:01:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:14.113 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:01:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:14.113 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:01:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:01:14 np0005546420.localdomain systemd[1]: tmp-crun.ssBDhv.mount: Deactivated successfully.
Dec 05 10:01:14 np0005546420.localdomain podman[305153]: 2025-12-05 10:01:14.514312042 +0000 UTC m=+0.086198282 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:01:14 np0005546420.localdomain podman[305153]: 2025-12-05 10:01:14.547493243 +0000 UTC m=+0.119379433 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:01:14 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:01:14 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2258204410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:01:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:15.098 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:01:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:15.099 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:01:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:01:15.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:01:16 np0005546420.localdomain ceph-mon[298353]: pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:01:16 np0005546420.localdomain podman[305177]: 2025-12-05 10:01:16.474126848 +0000 UTC m=+0.061851193 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:01:16 np0005546420.localdomain podman[305177]: 2025-12-05 10:01:16.487544991 +0000 UTC m=+0.075269376 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:01:16 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:01:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:01:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:01:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:01:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:01:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:01:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18227 "" "Go-http-client/1.1"
Dec 05 10:01:18 np0005546420.localdomain ceph-mon[298353]: pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3497231476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:01:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/820698944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:01:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:01:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:01:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:01:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:01:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:01:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:01:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:01:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:01:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:01:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:01:19 np0005546420.localdomain ceph-mon[298353]: from='client.44574 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 10:01:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:20 np0005546420.localdomain ceph-mon[298353]: pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:22 np0005546420.localdomain ceph-mon[298353]: pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.275422) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883275477, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2727, "num_deletes": 255, "total_data_size": 8546942, "memory_usage": 9088976, "flush_reason": "Manual Compaction"}
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883312235, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 5197727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14411, "largest_seqno": 17133, "table_properties": {"data_size": 5186649, "index_size": 6943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27279, "raw_average_key_size": 22, "raw_value_size": 5162730, "raw_average_value_size": 4207, "num_data_blocks": 302, "num_entries": 1227, "num_filter_entries": 1227, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928790, "oldest_key_time": 1764928790, "file_creation_time": 1764928883, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 36872 microseconds, and 10992 cpu microseconds.
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.312296) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 5197727 bytes OK
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.312324) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.313980) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.314002) EVENT_LOG_v1 {"time_micros": 1764928883313996, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.314024) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 8533838, prev total WAL file size 8533838, number of live WAL files 2.
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.315741) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end)
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(5075KB)], [24(14MB)]
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883315790, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 20684440, "oldest_snapshot_seqno": -1}
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 11710 keys, 18479654 bytes, temperature: kUnknown
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883411270, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 18479654, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18412004, "index_size": 37277, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 313899, "raw_average_key_size": 26, "raw_value_size": 18211610, "raw_average_value_size": 1555, "num_data_blocks": 1421, "num_entries": 11710, "num_filter_entries": 11710, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764928883, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.411571) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 18479654 bytes
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.413382) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.4 rd, 193.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.0, 14.8 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(7.5) write-amplify(3.6) OK, records in: 12257, records dropped: 547 output_compression: NoCompression
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.413412) EVENT_LOG_v1 {"time_micros": 1764928883413397, "job": 12, "event": "compaction_finished", "compaction_time_micros": 95570, "compaction_time_cpu_micros": 47507, "output_level": 6, "num_output_files": 1, "total_output_size": 18479654, "num_input_records": 12257, "num_output_records": 11710, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883414339, "job": 12, "event": "table_file_deletion", "file_number": 26}
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883416505, "job": 12, "event": "table_file_deletion", "file_number": 24}
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.315627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.416598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.416603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.416607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.416610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:01:23 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:01:23.416613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:01:24 np0005546420.localdomain ceph-mon[298353]: pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:01:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:01:24 np0005546420.localdomain podman[305196]: 2025-12-05 10:01:24.51746024 +0000 UTC m=+0.088050350 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:01:24 np0005546420.localdomain podman[305196]: 2025-12-05 10:01:24.531430329 +0000 UTC m=+0.102020439 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:01:24 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:01:24 np0005546420.localdomain podman[305195]: 2025-12-05 10:01:24.627363669 +0000 UTC m=+0.202256371 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 10:01:24 np0005546420.localdomain podman[305195]: 2025-12-05 10:01:24.64235519 +0000 UTC m=+0.217247852 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Dec 05 10:01:24 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
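The three entries above form one complete podman healthcheck cycle: systemd starts a transient unit, podman emits a health_status event carrying the container's full label set, the healthcheck exec dies, and the unit deactivates. A minimal Python sketch for pulling the container name and verdict out of such lines; the field layout is inferred from this journal only, not from a documented podman format:

    import re

    # Assumed layout, taken from the health_status lines in this journal:
    # "container health_status <64-hex id> (image=..., name=..., health_status=...)"
    HEALTH_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
        r"health_status=(?P<status>[a-z]+)"
    )

    def parse_health(line):
        m = HEALTH_RE.search(line)
        return (m.group("name"), m.group("status")) if m else None

    sample = ("container health_status "
              "3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 "
              "(image=quay.io/openstack-k8s-operators/openstack-network-exporter, "
              "name=openstack_network_exporter, health_status=healthy, ...)")
    print(parse_health(sample))  # ('openstack_network_exporter', 'healthy')

The same pattern matches the ovn_controller, ceilometer_agent_compute, node_exporter, and multipathd health_status entries that follow.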
Dec 05 10:01:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:25 np0005546420.localdomain ceph-mon[298353]: pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/1235296873' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 05 10:01:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:01:27 np0005546420.localdomain podman[305238]: 2025-12-05 10:01:27.502230444 +0000 UTC m=+0.081508758 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 10:01:27 np0005546420.localdomain podman[305238]: 2025-12-05 10:01:27.56747225 +0000 UTC m=+0.146750564 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true)
Dec 05 10:01:27 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:01:28 np0005546420.localdomain ceph-mon[298353]: pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:30 np0005546420.localdomain ceph-mon[298353]: pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:32 np0005546420.localdomain ceph-mon[298353]: pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:33 np0005546420.localdomain ceph-mon[298353]: from='client.44583 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 10:01:34 np0005546420.localdomain ceph-mon[298353]: pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:35 np0005546420.localdomain ceph-mon[298353]: pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:01:36 np0005546420.localdomain podman[305264]: 2025-12-05 10:01:36.514811609 +0000 UTC m=+0.088429690 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:01:36 np0005546420.localdomain podman[305264]: 2025-12-05 10:01:36.526913271 +0000 UTC m=+0.100531382 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:01:36 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:01:36 np0005546420.localdomain sudo[305284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:01:36 np0005546420.localdomain sudo[305284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:36 np0005546420.localdomain sudo[305284]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:37 np0005546420.localdomain sudo[305302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:01:37 np0005546420.localdomain sudo[305302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:37 np0005546420.localdomain sudo[305302]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:38 np0005546420.localdomain sudo[305353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:01:38 np0005546420.localdomain sudo[305353]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:38 np0005546420.localdomain sudo[305353]: pam_unix(sudo:session): session closed for user root
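The sudo trail above (/bin/which python3, then the copied cephadm binary with gather-facts, then ls /etc/sysctl.d) is cephadm's remote-execution pattern: the active mgr connects over SSH as ceph-admin and wraps every step in sudo. A hypothetical helper showing the same shape; the host and user here are placeholders, not read from any real inventory:

    import shlex
    import subprocess

    # Hypothetical illustration of the ssh-as-ceph-admin + sudo pattern in
    # the entries above. Each remote argument is quoted because ssh joins
    # its arguments with spaces before the remote shell re-parses them.
    def remote_sudo(host, argv):
        cmd = ["ssh", f"ceph-admin@{host}", "sudo", "--"]
        cmd += [shlex.quote(a) for a in argv]
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    # e.g. remote_sudo("np0005546420.localdomain", ["/bin/which", "python3"])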
Dec 05 10:01:38 np0005546420.localdomain ceph-mon[298353]: pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:01:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:01:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:01:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:01:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:40 np0005546420.localdomain ceph-mon[298353]: pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 05 10:01:40 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/3329117818' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:01:42 np0005546420.localdomain ceph-mon[298353]: pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:01:43 np0005546420.localdomain podman[305371]: 2025-12-05 10:01:43.514011152 +0000 UTC m=+0.088060360 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:01:43 np0005546420.localdomain podman[305371]: 2025-12-05 10:01:43.54909297 +0000 UTC m=+0.123142148 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 05 10:01:43 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:01:44 np0005546420.localdomain ceph-mon[298353]: pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:44 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 e91: 6 total, 6 up, 6 in
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:45 np0005546420.localdomain sshd[303239]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 10:01:45 np0005546420.localdomain systemd[1]: session-70.scope: Deactivated successfully.
Dec 05 10:01:45 np0005546420.localdomain systemd[1]: session-70.scope: Consumed 11.823s CPU time.
Dec 05 10:01:45 np0005546420.localdomain systemd-logind[762]: Session 70 logged out. Waiting for processes to exit.
Dec 05 10:01:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:01:45 np0005546420.localdomain systemd-logind[762]: Removed session 70.
Dec 05 10:01:45 np0005546420.localdomain podman[305389]: 2025-12-05 10:01:45.158616665 +0000 UTC m=+0.084327384 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:01:45 np0005546420.localdomain podman[305389]: 2025-12-05 10:01:45.192394244 +0000 UTC m=+0.118105123 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:01:45 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
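The node_exporter config above publishes the collector on host port 9100 ('ports': ['9100:9100']) and disables a long list of collectors, but leaves loadavg enabled, so load metrics survive. A scrape sketch, assuming the service is up and reachable on localhost:

    import urllib.request

    # Fetch the Prometheus text exposition and keep only the load metrics.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("node_load1"):
                print(line)  # e.g. "node_load1 0.42"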
Dec 05 10:01:45 np0005546420.localdomain sshd[305412]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:01:45 np0005546420.localdomain sshd[305412]: Accepted publickey for ceph-admin from 192.168.122.106 port 57948 ssh2: RSA SHA256:q6VxC6DPUNFS0sVwKTHgxs4jXzUeEUj9Lclf/gEqlLc
Dec 05 10:01:45 np0005546420.localdomain systemd-logind[762]: New session 71 of user ceph-admin.
Dec 05 10:01:45 np0005546420.localdomain systemd[1]: Started Session 71 of User ceph-admin.
Dec 05 10:01:45 np0005546420.localdomain sshd[305412]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 05 10:01:45 np0005546420.localdomain sudo[305416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:01:45 np0005546420.localdomain sudo[305416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:45 np0005546420.localdomain sudo[305416]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:45 np0005546420.localdomain sudo[305434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 10:01:45 np0005546420.localdomain sudo[305434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/2842962018' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: Activating manager daemon np0005546419.zhsnqq
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: osdmap e91: 6 total, 6 up, 6 in
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.200:0/2842962018' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: mgrmap e43: np0005546419.zhsnqq(active, starting, since 0.0322325s), standbys: np0005546420.aoeylc
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: Manager daemon np0005546419.zhsnqq is now available
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/mirror_snapshot_schedule"} : dispatch
Dec 05 10:01:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/trash_purge_schedule"} : dispatch
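The run of entries above is a manager failover: an admin client dispatches {"prefix": "mgr fail"}, the mon promotes np0005546419.zhsnqq from standby, and the new active mgr re-reads mon/mds/osd/mgr metadata and clears its per-daemon rbd_support schedule keys. The audit lines also show the wire format: mon commands are JSON objects keyed by "prefix", which the ceph CLI builds from its arguments. A sketch that reads the resulting mgrmap, assuming the ceph CLI and the admin keyring deployed in this log are available on the node:

    import json
    import subprocess

    # "ceph mgr dump" prints the current mgrmap as JSON (the same map the
    # "mgrmap eNN" journal lines summarize).
    out = subprocess.run(["ceph", "mgr", "dump"],
                         capture_output=True, text=True, check=True).stdout
    mgrmap = json.loads(out)
    print(mgrmap["active_name"],
          [s["name"] for s in mgrmap["standbys"]])
    # e.g. np0005546419.zhsnqq ['np0005546420.aoeylc']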
Dec 05 10:01:46 np0005546420.localdomain podman[305525]: 2025-12-05 10:01:46.491785942 +0000 UTC m=+0.098024326 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_CLEAN=True)
Dec 05 10:01:46 np0005546420.localdomain podman[305525]: 2025-12-05 10:01:46.603438504 +0000 UTC m=+0.209676858 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, build-date=2025-11-26T19:44:28Z)
Dec 05 10:01:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:01:46 np0005546420.localdomain podman[305558]: 2025-12-05 10:01:46.758875014 +0000 UTC m=+0.091852895 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 05 10:01:46 np0005546420.localdomain podman[305558]: 2025-12-05 10:01:46.776060643 +0000 UTC m=+0.109038504 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 05 10:01:46 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:01:47 np0005546420.localdomain ceph-mon[298353]: mgrmap e44: np0005546419.zhsnqq(active, since 1.07125s), standbys: np0005546420.aoeylc
Dec 05 10:01:47 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:10:01:46] ENGINE Bus STARTING
Dec 05 10:01:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:01:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:01:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:01:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:01:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:01:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18228 "" "Go-http-client/1.1"
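The two "@ - -" access-log entries above are podman's libpod REST API answering a local client (note the /v4.9.3/libpod/... request paths). The same query can be issued over podman's service socket; the socket path below is the usual root-mode default and an assumption here, and opening it requires matching privileges:

    import http.client
    import json
    import socket

    # http.client speaks HTTP but not AF_UNIX, so override connect().
    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")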
Dec 05 10:01:47 np0005546420.localdomain sudo[305434]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:47 np0005546420.localdomain sudo[305662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:01:47 np0005546420.localdomain sudo[305662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:47 np0005546420.localdomain sudo[305662]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:47 np0005546420.localdomain sudo[305680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:01:47 np0005546420.localdomain sudo[305680]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:10:01:46] ENGINE Serving on https://172.18.0.106:7150
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:10:01:46] ENGINE Client ('172.18.0.106', 34070) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:10:01:46] ENGINE Serving on http://172.18.0.106:8765
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: [05/Dec/2025:10:01:46] ENGINE Bus STARTED
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: Cluster is now healthy
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: mgrmap e45: np0005546419.zhsnqq(active, since 2s), standbys: np0005546420.aoeylc
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:48 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:48 np0005546420.localdomain sudo[305680]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:48 np0005546420.localdomain sudo[305729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:01:48 np0005546420.localdomain sudo[305729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:48 np0005546420.localdomain sudo[305729]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:48 np0005546420.localdomain sudo[305747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 05 10:01:48 np0005546420.localdomain sudo[305747]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:48 np0005546420.localdomain sudo[305747]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:01:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:01:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:01:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:01:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:01:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:01:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:01:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:01:48 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 10:01:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:01:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:01:48 np0005546420.localdomain openstack_network_exporter[242579]: 
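These exporter errors recur on every scrape because this node runs ovn-controller, not ovn-northd or a standalone ovsdb-server, so the control sockets the collector probes for never exist here. A sketch of the probe the messages imply; the glob follows the container mounts shown earlier (/var/lib/openvswitch/ovn bound at /run/ovn) and the usual <daemon>.<pid>.ctl unixctl naming, both assumptions:

    import glob

    # Return the first matching unixctl socket, or None when the daemon is
    # not running on this host (the case these ERROR lines report).
    def control_socket(pattern):
        hits = glob.glob(pattern)
        return hits[0] if hits else None

    print(control_socket("/run/ovn/ovn-northd.*.ctl"))  # None on this node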
Dec 05 10:01:48 np0005546420.localdomain sudo[305783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 10:01:48 np0005546420.localdomain sudo[305783]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:48 np0005546420.localdomain sudo[305783]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[305801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 10:01:49 np0005546420.localdomain sudo[305801]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305801]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[305819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:01:49 np0005546420.localdomain sudo[305819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305819]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[305837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:01:49 np0005546420.localdomain sudo[305837]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305837]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[305855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:01:49 np0005546420.localdomain sudo[305855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305855]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[305889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:01:49 np0005546420.localdomain sudo[305889]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305889]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[305907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new
Dec 05 10:01:49 np0005546420.localdomain sudo[305907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305907]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[305925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 10:01:49 np0005546420.localdomain sudo[305925]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305925]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[305943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 10:01:49 np0005546420.localdomain sudo[305943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305943]: pam_unix(sudo:session): session closed for user root
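The sudo sequence above is cephadm staging /etc/ceph/ceph.conf: make the directory, touch a .new file in a private temp tree, fix ownership and mode, then mv it into place so readers never observe a partially written file. A condensed sketch of the same write-then-rename pattern (requires root; the rename is only atomic when source and destination share a filesystem, which the real sequence, staging under /tmp, does not guarantee):

    import os

    def install_conf(data, dest="/etc/ceph/ceph.conf"):
        staged = dest + ".new"        # stage beside the target
        with open(staged, "wb") as f:
            f.write(data)
        os.chown(staged, 0, 0)        # root:root, as in the chown -R 0:0 step
        os.chmod(staged, 0o644)       # world-readable config file
        os.replace(staged, dest)      # rename into place in one step

The keyring sequence later in this log is identical except for chmod 600, since ceph.client.admin.keyring carries the cluster admin secret.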
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
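The three Adjusting/Unable pairs above are cephadm's osd_memory_target autotuner hitting the option's hard floor, and the numbers in the messages check out directly:

    # Figures quoted verbatim from the journal entries above.
    target = 877246668                # bytes the autotuner computed per OSD
    minimum = 939524096               # osd_memory_target lower bound
    print(round(target / 2**20, 1))   # 836.6  -> the "836.6M" in the log
    print(minimum == 896 * 2**20)     # True: the floor is exactly 896 MiB
    print(target < minimum)           # True, hence "below minimum"

Because the computed per-OSD share is under the floor, the mgr refuses to store it; the preceding "config rm ... osd_memory_target" dispatches clear any previously stored values instead.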
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:49 np0005546420.localdomain ceph-mon[298353]: mgrmap e46: np0005546419.zhsnqq(active, since 4s), standbys: np0005546420.aoeylc
Dec 05 10:01:49 np0005546420.localdomain sudo[305961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 10:01:49 np0005546420.localdomain sudo[305961]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305961]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[305979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:01:49 np0005546420.localdomain sudo[305979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305979]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[305997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:01:49 np0005546420.localdomain sudo[305997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[305997]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:49 np0005546420.localdomain sudo[306015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:01:49 np0005546420.localdomain sudo[306015]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:49 np0005546420.localdomain sudo[306015]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:50 np0005546420.localdomain sudo[306049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:01:50 np0005546420.localdomain sudo[306049]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306049]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain sudo[306067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new
Dec 05 10:01:50 np0005546420.localdomain sudo[306067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306067]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain sudo[306085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:01:50 np0005546420.localdomain sudo[306085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306085]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain sudo[306103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 10:01:50 np0005546420.localdomain sudo[306103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306103]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain sudo[306121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph
Dec 05 10:01:50 np0005546420.localdomain sudo[306121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306121]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain sudo[306139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 10:01:50 np0005546420.localdomain sudo[306139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306139]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain sudo[306157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:01:50 np0005546420.localdomain sudo[306157]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306157]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain sudo[306175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 10:01:50 np0005546420.localdomain sudo[306175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306175]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain sshd[306209]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:01:50 np0005546420.localdomain sudo[306211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 10:01:50 np0005546420.localdomain sudo[306211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306211]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain sudo[306229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new
Dec 05 10:01:50 np0005546420.localdomain sudo[306229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306229]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:50 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:01:50 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:01:50 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 05 10:01:50 np0005546420.localdomain ceph-mon[298353]: Standby manager daemon np0005546421.sukfea started
Dec 05 10:01:50 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 10:01:50 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 10:01:50 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 05 10:01:50 np0005546420.localdomain sudo[306247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 10:01:50 np0005546420.localdomain sudo[306247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:50 np0005546420.localdomain sudo[306247]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:51 np0005546420.localdomain sudo[306265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 10:01:51 np0005546420.localdomain sudo[306265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:51 np0005546420.localdomain sudo[306265]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:51 np0005546420.localdomain sshd[306209]: Connection closed by authenticating user root 87.120.191.21 port 53540 [preauth]
Dec 05 10:01:51 np0005546420.localdomain sudo[306283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config
Dec 05 10:01:51 np0005546420.localdomain sudo[306283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:51 np0005546420.localdomain sudo[306283]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:51 np0005546420.localdomain sudo[306301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 10:01:51 np0005546420.localdomain sudo[306301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:51 np0005546420.localdomain sudo[306301]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:51 np0005546420.localdomain sudo[306319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 05 10:01:51 np0005546420.localdomain sudo[306319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:51 np0005546420.localdomain sudo[306319]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:51 np0005546420.localdomain sudo[306337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 10:01:51 np0005546420.localdomain sudo[306337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:51 np0005546420.localdomain sudo[306337]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:51 np0005546420.localdomain sudo[306371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 10:01:51 np0005546420.localdomain sudo[306371]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:51 np0005546420.localdomain sudo[306371]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:51 np0005546420.localdomain sudo[306389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new
Dec 05 10:01:51 np0005546420.localdomain sudo[306389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:51 np0005546420.localdomain sudo[306389]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:51 np0005546420.localdomain sudo[306407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-79feddb1-4bfc-557f-83b9-0d57c9f66c1b/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring.new /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 10:01:51 np0005546420.localdomain sudo[306407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:51 np0005546420.localdomain sudo[306407]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: mgrmap e47: np0005546419.zhsnqq(active, since 6s), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} : dispatch
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:51 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:01:51 np0005546420.localdomain sudo[306425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:01:52 np0005546420.localdomain sudo[306425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:52 np0005546420.localdomain sudo[306425]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:52 np0005546420.localdomain sudo[306443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:01:52 np0005546420.localdomain sudo[306443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:01:52 np0005546420.localdomain sudo[306443]: pam_unix(sudo:session): session closed for user root
Dec 05 10:01:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:01:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:01:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:52 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:01:54 np0005546420.localdomain ceph-mon[298353]: pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Dec 05 10:01:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:01:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:01:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:01:55 np0005546420.localdomain podman[306462]: 2025-12-05 10:01:55.520772384 +0000 UTC m=+0.089533177 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:01:55 np0005546420.localdomain podman[306462]: 2025-12-05 10:01:55.531383268 +0000 UTC m=+0.100144080 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:01:55 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:01:55 np0005546420.localdomain podman[306461]: 2025-12-05 10:01:55.61980854 +0000 UTC m=+0.189005846 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, managed_by=edpm_ansible)
Dec 05 10:01:55 np0005546420.localdomain podman[306461]: 2025-12-05 10:01:55.661503615 +0000 UTC m=+0.230700911 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Dec 05 10:01:55 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:01:56 np0005546420.localdomain ceph-mon[298353]: pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 05 10:01:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:01:58 np0005546420.localdomain ceph-mon[298353]: pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 10:01:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:01:58 np0005546420.localdomain systemd[292579]: Created slice User Background Tasks Slice.
Dec 05 10:01:58 np0005546420.localdomain systemd[292579]: Starting Cleanup of User's Temporary Files and Directories...
Dec 05 10:01:58 np0005546420.localdomain podman[306503]: 2025-12-05 10:01:58.502472195 +0000 UTC m=+0.080547953 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 05 10:01:58 np0005546420.localdomain systemd[292579]: Finished Cleanup of User's Temporary Files and Directories.
Dec 05 10:01:58 np0005546420.localdomain podman[306503]: 2025-12-05 10:01:58.54031174 +0000 UTC m=+0.118387468 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:01:58 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:02:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:00 np0005546420.localdomain ceph-mon[298353]: pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 10:02:02 np0005546420.localdomain ceph-mon[298353]: pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 10:02:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:02:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/485580555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:02:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:02:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/485580555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:02:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:02:04.122 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:02:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:02:04.123 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:02:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:02:04.123 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:02:04 np0005546420.localdomain ceph-mon[298353]: pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 05 10:02:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/485580555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:02:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/485580555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:02:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:05 np0005546420.localdomain ceph-mon[298353]: pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:02:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.1 total, 600.0 interval
                                                          Cumulative writes: 5841 writes, 25K keys, 5841 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5841 writes, 784 syncs, 7.45 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 26 writes, 120 keys, 26 commit groups, 1.0 writes per commit group, ingest: 0.21 MB, 0.00 MB/s
                                                          Interval WAL: 26 writes, 13 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:02:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:02:07 np0005546420.localdomain podman[306530]: 2025-12-05 10:02:07.508080255 +0000 UTC m=+0.084907776 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 05 10:02:07 np0005546420.localdomain podman[306530]: 2025-12-05 10:02:07.523347801 +0000 UTC m=+0.100175332 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:02:07 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:02:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:07.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:02:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:07.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:02:08 np0005546420.localdomain ceph-mon[298353]: pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:08.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:02:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:08.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:02:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:08.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:02:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:08.894 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:02:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:08.894 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:02:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:10 np0005546420.localdomain ceph-mon[298353]: pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:02:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.2 total, 600.0 interval
                                                          Cumulative writes: 4941 writes, 22K keys, 4941 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4941 writes, 705 syncs, 7.01 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 215 writes, 495 keys, 215 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
                                                          Interval WAL: 215 writes, 103 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:02:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:10.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:02:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:11.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:02:12 np0005546420.localdomain ceph-mon[298353]: pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.953 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:02:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:02:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:13.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:02:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:13.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:02:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:13.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:02:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:13.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:02:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:13.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:02:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:13.890 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:02:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:13.890 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:02:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:13.890 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:02:14 np0005546420.localdomain ceph-mon[298353]: pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:14 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3450334063' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:02:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:02:14 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3989272146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:02:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:14.312 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:02:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:02:14 np0005546420.localdomain systemd[1]: tmp-crun.q9OPy2.mount: Deactivated successfully.
Dec 05 10:02:14 np0005546420.localdomain podman[306571]: 2025-12-05 10:02:14.508618541 +0000 UTC m=+0.081053038 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:02:14 np0005546420.localdomain podman[306571]: 2025-12-05 10:02:14.54429223 +0000 UTC m=+0.116726717 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 10:02:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:14.544 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:02:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:14.546 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12325MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:02:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:14.546 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:02:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:14.547 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:02:14 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:02:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:14.616 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:02:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:14.617 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:02:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:14.637 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:02:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:02:15 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3818251046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:02:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:15.146 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:02:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:15.151 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:02:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3989272146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:02:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1288089841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:02:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3818251046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:02:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:02:15 np0005546420.localdomain systemd[1]: tmp-crun.1s2vZs.mount: Deactivated successfully.
Dec 05 10:02:15 np0005546420.localdomain podman[306610]: 2025-12-05 10:02:15.550113926 +0000 UTC m=+0.128096315 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:02:15 np0005546420.localdomain podman[306610]: 2025-12-05 10:02:15.55940461 +0000 UTC m=+0.137387039 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:02:15 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:02:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:15.746 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:02:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:15.749 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:02:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:15.750 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:02:16 np0005546420.localdomain ceph-mon[298353]: pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:02:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:02:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:02:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:02:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:02:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18221 "" "Go-http-client/1.1"
Dec 05 10:02:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:02:17 np0005546420.localdomain podman[306633]: 2025-12-05 10:02:17.515924795 +0000 UTC m=+0.089876527 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:02:17 np0005546420.localdomain podman[306633]: 2025-12-05 10:02:17.53142802 +0000 UTC m=+0.105379772 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 05 10:02:17 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:02:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:02:17.748 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:02:18 np0005546420.localdomain ceph-mon[298353]: pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3969466333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:02:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:02:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:02:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:02:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:02:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:02:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:02:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:02:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:02:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:02:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:02:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2663320372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:02:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:20 np0005546420.localdomain ceph-mon[298353]: pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:22 np0005546420.localdomain ceph-mon[298353]: pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:24 np0005546420.localdomain ceph-mon[298353]: pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:25 np0005546420.localdomain ceph-mon[298353]: pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:02:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:02:26 np0005546420.localdomain podman[306653]: 2025-12-05 10:02:26.512503928 +0000 UTC m=+0.086466713 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:02:26 np0005546420.localdomain podman[306652]: 2025-12-05 10:02:26.558786042 +0000 UTC m=+0.135129180 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, version=9.6, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public)
Dec 05 10:02:26 np0005546420.localdomain podman[306652]: 2025-12-05 10:02:26.575819562 +0000 UTC m=+0.152162700 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public)
Dec 05 10:02:26 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:02:26 np0005546420.localdomain podman[306653]: 2025-12-05 10:02:26.594254376 +0000 UTC m=+0.168217221 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:02:26 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:02:28 np0005546420.localdomain ceph-mon[298353]: pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:02:29 np0005546420.localdomain podman[306696]: 2025-12-05 10:02:29.522141792 +0000 UTC m=+0.092574460 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:02:29 np0005546420.localdomain podman[306696]: 2025-12-05 10:02:29.568543469 +0000 UTC m=+0.138976117 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:02:29 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:02:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:30 np0005546420.localdomain ceph-mon[298353]: pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:32 np0005546420.localdomain ceph-mon[298353]: pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:34 np0005546420.localdomain ceph-mon[298353]: pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:35 np0005546420.localdomain ceph-mon[298353]: pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:38 np0005546420.localdomain ceph-mon[298353]: pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:02:38 np0005546420.localdomain podman[306721]: 2025-12-05 10:02:38.501778876 +0000 UTC m=+0.080123318 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 05 10:02:38 np0005546420.localdomain podman[306721]: 2025-12-05 10:02:38.537512618 +0000 UTC m=+0.115857060 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:02:38 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:02:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:40 np0005546420.localdomain ceph-mon[298353]: pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:42 np0005546420.localdomain ceph-mon[298353]: pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:44 np0005546420.localdomain ceph-mon[298353]: pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:02:45 np0005546420.localdomain podman[306741]: 2025-12-05 10:02:45.512739006 +0000 UTC m=+0.089077963 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 05 10:02:45 np0005546420.localdomain podman[306741]: 2025-12-05 10:02:45.547464687 +0000 UTC m=+0.123803704 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:02:45 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:02:45 np0005546420.localdomain ceph-mon[298353]: pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:02:46 np0005546420.localdomain podman[306759]: 2025-12-05 10:02:46.502498679 +0000 UTC m=+0.081759909 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:02:46 np0005546420.localdomain podman[306759]: 2025-12-05 10:02:46.515299941 +0000 UTC m=+0.094561231 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:02:46 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:02:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:02:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:02:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:02:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:02:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:02:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18228 "" "Go-http-client/1.1"
Dec 05 10:02:48 np0005546420.localdomain ceph-mon[298353]: pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:02:48 np0005546420.localdomain podman[306783]: 2025-12-05 10:02:48.506952728 +0000 UTC m=+0.084866954 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Dec 05 10:02:48 np0005546420.localdomain podman[306783]: 2025-12-05 10:02:48.520370188 +0000 UTC m=+0.098284404 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:02:48 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:02:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:02:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:02:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:02:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:02:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:02:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:02:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:02:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:02:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:02:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:02:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:50 np0005546420.localdomain ceph-mon[298353]: pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:52 np0005546420.localdomain ceph-mon[298353]: pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:52 np0005546420.localdomain sudo[306803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:02:52 np0005546420.localdomain sudo[306803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:02:52 np0005546420.localdomain sudo[306803]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:52 np0005546420.localdomain sudo[306821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:02:52 np0005546420.localdomain sudo[306821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:02:53 np0005546420.localdomain sudo[306821]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:53 np0005546420.localdomain sudo[306871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:02:53 np0005546420.localdomain sudo[306871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:02:53 np0005546420.localdomain sudo[306871]: pam_unix(sudo:session): session closed for user root
Dec 05 10:02:54 np0005546420.localdomain ceph-mon[298353]: pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:02:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:02:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:02:54 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:02:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:02:56 np0005546420.localdomain ceph-mon[298353]: pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:02:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:02:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:02:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:02:57 np0005546420.localdomain podman[306890]: 2025-12-05 10:02:57.523785582 +0000 UTC m=+0.091003732 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:02:57 np0005546420.localdomain podman[306890]: 2025-12-05 10:02:57.536921073 +0000 UTC m=+0.104139183 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:02:57 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:02:57 np0005546420.localdomain podman[306889]: 2025-12-05 10:02:57.63239551 +0000 UTC m=+0.202461777 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 05 10:02:57 np0005546420.localdomain podman[306889]: 2025-12-05 10:02:57.645913524 +0000 UTC m=+0.215979781 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9)
Dec 05 10:02:57 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:02:58 np0005546420.localdomain ceph-mon[298353]: pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:00 np0005546420.localdomain ceph-mon[298353]: pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:03:00 np0005546420.localdomain podman[306930]: 2025-12-05 10:03:00.501173271 +0000 UTC m=+0.079128769 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:03:00 np0005546420.localdomain podman[306930]: 2025-12-05 10:03:00.608260354 +0000 UTC m=+0.186215892 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 10:03:00 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:03:02 np0005546420.localdomain ceph-mon[298353]: pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:03:04.123 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:03:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:03:04.124 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:03:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:03:04.124 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:03:04 np0005546420.localdomain ceph-mon[298353]: pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2953737703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:03:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2953737703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:03:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:06 np0005546420.localdomain ceph-mon[298353]: pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:08 np0005546420.localdomain ceph-mon[298353]: pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:08.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:03:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:08.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:03:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:08.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:03:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:03:09 np0005546420.localdomain podman[306955]: 2025-12-05 10:03:09.510517386 +0000 UTC m=+0.084674298 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 10:03:09 np0005546420.localdomain podman[306955]: 2025-12-05 10:03:09.532391825 +0000 UTC m=+0.106548717 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 05 10:03:09 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:03:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:10 np0005546420.localdomain ceph-mon[298353]: pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:10.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:03:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:10.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:03:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:10.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:03:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:10.891 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:03:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:10.891 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:03:11 np0005546420.localdomain ceph-mon[298353]: pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:11.886 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:03:13 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1628058856' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:03:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:13.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:03:14 np0005546420.localdomain ceph-mon[298353]: pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:14 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3548757848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:03:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:15.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:03:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:15.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:03:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:15.890 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:03:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:15.891 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:03:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:15.891 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:03:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:15.891 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:03:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:15.891 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:03:16 np0005546420.localdomain ceph-mon[298353]: pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:03:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1919215097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:03:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:16.354 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:03:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:03:16 np0005546420.localdomain systemd[1]: tmp-crun.gsEzwU.mount: Deactivated successfully.
Dec 05 10:03:16 np0005546420.localdomain podman[306997]: 2025-12-05 10:03:16.487321604 +0000 UTC m=+0.062503070 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 05 10:03:16 np0005546420.localdomain podman[306997]: 2025-12-05 10:03:16.522326923 +0000 UTC m=+0.097508369 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:03:16 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:03:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:03:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:16.555 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:03:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:16.556 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=12326MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:03:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:16.556 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:03:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:16.556 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:03:16 np0005546420.localdomain podman[307015]: 2025-12-05 10:03:16.616643306 +0000 UTC m=+0.058404466 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:03:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:16.620 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:03:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:16.620 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:03:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:16.639 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:03:16 np0005546420.localdomain podman[307015]: 2025-12-05 10:03:16.655448401 +0000 UTC m=+0.097209571 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:03:16 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:03:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:03:17 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2064996136' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:03:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:17.055 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:03:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:17.062 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:03:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:17.078 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:03:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:17.082 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:03:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:17.082 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:03:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1919215097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:03:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2064996136' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:03:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:03:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:03:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:03:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:03:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:03:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18228 "" "Go-http-client/1.1"
Dec 05 10:03:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:18.079 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:03:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:03:18.096 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:03:18 np0005546420.localdomain ceph-mon[298353]: pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 05 10:03:18 np0005546420.localdomain ceph-mon[298353]: mgrmap e48: np0005546419.zhsnqq(active, since 92s), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 05 10:03:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:03:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:03:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:03:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:03:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:03:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:03:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:03:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:03:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:03:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:03:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3502213756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:03:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1211497033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:03:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:03:19 np0005546420.localdomain podman[307062]: 2025-12-05 10:03:19.465363812 +0000 UTC m=+0.048131852 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec 05 10:03:19 np0005546420.localdomain podman[307062]: 2025-12-05 10:03:19.472313494 +0000 UTC m=+0.055081544 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:03:19 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:03:19 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:03:19.767 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:03:19 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:03:19.768 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:03:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:20 np0005546420.localdomain ceph-mon[298353]: pgmap v50: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 05 10:03:22 np0005546420.localdomain ceph-mon[298353]: pgmap v51: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 05 10:03:24 np0005546420.localdomain ceph-mon[298353]: pgmap v52: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 05 10:03:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:26 np0005546420.localdomain ceph-mon[298353]: pgmap v53: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 05 10:03:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e92 e92: 6 total, 6 up, 6 in
Dec 05 10:03:27 np0005546420.localdomain ceph-mon[298353]: osdmap e92: 6 total, 6 up, 6 in
Dec 05 10:03:28 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 e93: 6 total, 6 up, 6 in
Dec 05 10:03:28 np0005546420.localdomain ceph-mon[298353]: pgmap v55: 177 pgs: 177 active+clean; 125 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Dec 05 10:03:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:03:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:03:28 np0005546420.localdomain podman[307083]: 2025-12-05 10:03:28.543280673 +0000 UTC m=+0.117558723 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:03:28 np0005546420.localdomain podman[307083]: 2025-12-05 10:03:28.556326482 +0000 UTC m=+0.130604532 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Dec 05 10:03:28 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:03:28 np0005546420.localdomain podman[307084]: 2025-12-05 10:03:28.511283316 +0000 UTC m=+0.082841713 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:03:28 np0005546420.localdomain podman[307084]: 2025-12-05 10:03:28.642042632 +0000 UTC m=+0.213601019 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:03:28 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:03:29 np0005546420.localdomain ceph-mon[298353]: osdmap e93: 6 total, 6 up, 6 in
Dec 05 10:03:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:03:29.770 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:03:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:30 np0005546420.localdomain ceph-mon[298353]: pgmap v57: 177 pgs: 177 active+clean; 125 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 2.6 MiB/s wr, 19 op/s
Dec 05 10:03:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:03:31 np0005546420.localdomain podman[307126]: 2025-12-05 10:03:31.500134785 +0000 UTC m=+0.077438198 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 05 10:03:31 np0005546420.localdomain podman[307126]: 2025-12-05 10:03:31.53923492 +0000 UTC m=+0.116538303 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:03:31 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:03:32 np0005546420.localdomain ceph-mon[298353]: pgmap v58: 177 pgs: 177 active+clean; 125 MiB data, 609 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 2.6 MiB/s wr, 35 op/s
Dec 05 10:03:34 np0005546420.localdomain ceph-mon[298353]: pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 05 10:03:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:36 np0005546420.localdomain ceph-mon[298353]: pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 4.5 MiB/s wr, 42 op/s
Dec 05 10:03:38 np0005546420.localdomain ceph-mon[298353]: pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 2.0 MiB/s wr, 22 op/s
Dec 05 10:03:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:03:39 np0005546420.localdomain podman[307152]: 2025-12-05 10:03:39.851047321 +0000 UTC m=+0.093579171 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible)
Dec 05 10:03:39 np0005546420.localdomain podman[307152]: 2025-12-05 10:03:39.889920439 +0000 UTC m=+0.132452349 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:03:39 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:03:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:40 np0005546420.localdomain ceph-mon[298353]: pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 1.9 MiB/s wr, 20 op/s
Dec 05 10:03:42 np0005546420.localdomain ceph-mon[298353]: pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Dec 05 10:03:44 np0005546420.localdomain ceph-mon[298353]: pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 4.9 KiB/s rd, 1.7 MiB/s wr, 8 op/s
Dec 05 10:03:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:46 np0005546420.localdomain ceph-mon[298353]: pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:03:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:03:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:03:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:03:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:03:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18225 "" "Go-http-client/1.1"
Dec 05 10:03:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:03:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:03:47 np0005546420.localdomain podman[307171]: 2025-12-05 10:03:47.512873189 +0000 UTC m=+0.085197375 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:03:47 np0005546420.localdomain podman[307171]: 2025-12-05 10:03:47.552579112 +0000 UTC m=+0.124903318 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:03:47 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:03:47 np0005546420.localdomain podman[307170]: 2025-12-05 10:03:47.572246163 +0000 UTC m=+0.147544440 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:03:47 np0005546420.localdomain podman[307170]: 2025-12-05 10:03:47.584329032 +0000 UTC m=+0.159627389 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:03:47 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:03:48 np0005546420.localdomain ceph-mon[298353]: pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:03:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:03:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:03:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:03:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:03:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:03:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:03:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:03:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:03:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
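The appctl errors above show the exporter probing for control socket files it cannot find: ovn-northd is not expected to run on this node at all, and the ovsdb-server socket is evidently not visible at the path the exporter checks. A minimal sketch of the same probe, assuming the conventional socket locations (the glob patterns below are assumptions, not paths taken from this log):

    # Sketch: check for the OVS/OVN control sockets the exporter probes for.
    # The glob patterns are assumptions based on conventional daemon layouts.
    import glob

    def find_ctl_sockets(patterns=(
            "/var/run/openvswitch/ovsdb-server.*.ctl",  # assumed ovsdb-server socket
            "/run/ovn/ovn-northd.*.ctl",                # assumed ovn-northd socket
    )):
        found = {p: glob.glob(p) for p in patterns}
        for pattern, matches in found.items():
            print(pattern, "->", matches or "no control socket files found")
        return found

    if __name__ == "__main__":
        find_ctl_sockets()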
Dec 05 10:03:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:50 np0005546420.localdomain ceph-mon[298353]: pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:03:50 np0005546420.localdomain podman[307209]: 2025-12-05 10:03:50.502248374 +0000 UTC m=+0.081059268 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:03:50 np0005546420.localdomain podman[307209]: 2025-12-05 10:03:50.539311736 +0000 UTC m=+0.118122590 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 10:03:50 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
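Each healthcheck cycle above follows the same shape: systemd starts a transient "/usr/bin/podman healthcheck run" unit, podman emits a health_status event (health_status=healthy) followed by exec_died for the probe process, and the unit deactivates. A sketch for reading the last recorded status back out, assuming the podman CLI is on PATH and the container defines a healthcheck (container name taken from the log):

    # Sketch: query the most recent health status of a container.
    import subprocess

    def health_status(container="multipathd"):  # container name from the log
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()  # e.g. "healthy", matching health_status= above

    if __name__ == "__main__":
        print(health_status())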
Dec 05 10:03:52 np0005546420.localdomain ceph-mon[298353]: pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:53 np0005546420.localdomain sudo[307229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:03:53 np0005546420.localdomain sudo[307229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:03:53 np0005546420.localdomain sudo[307229]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:53 np0005546420.localdomain sudo[307247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:03:53 np0005546420.localdomain sudo[307247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:03:54 np0005546420.localdomain ceph-mon[298353]: pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:54 np0005546420.localdomain sudo[307247]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:54 np0005546420.localdomain sudo[307296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:03:54 np0005546420.localdomain sudo[307296]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:03:54 np0005546420.localdomain sudo[307296]: pam_unix(sudo:session): session closed for user root
Dec 05 10:03:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:03:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:03:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:03:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:03:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:03:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:03:56 np0005546420.localdomain ceph-mon[298353]: pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:58 np0005546420.localdomain ceph-mon[298353]: pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:03:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:03:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:03:59 np0005546420.localdomain podman[307315]: 2025-12-05 10:03:59.515929099 +0000 UTC m=+0.090539838 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:03:59 np0005546420.localdomain podman[307315]: 2025-12-05 10:03:59.528213814 +0000 UTC m=+0.102824633 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:03:59 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:03:59 np0005546420.localdomain podman[307314]: 2025-12-05 10:03:59.615262114 +0000 UTC m=+0.190570234 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.6, config_id=edpm, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1755695350, io.openshift.expose-services=)
Dec 05 10:03:59 np0005546420.localdomain podman[307314]: 2025-12-05 10:03:59.657646719 +0000 UTC m=+0.232954849 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 05 10:03:59 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:04:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:00 np0005546420.localdomain ceph-mon[298353]: pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:02 np0005546420.localdomain ceph-mon[298353]: pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:04:02 np0005546420.localdomain podman[307356]: 2025-12-05 10:04:02.515571677 +0000 UTC m=+0.091560878 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 10:04:02 np0005546420.localdomain podman[307356]: 2025-12-05 10:04:02.621125633 +0000 UTC m=+0.197114844 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:04:02 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:04:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1239661781' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:04:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1239661781' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:04:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:04.124 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:04.124 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:04.125 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:04 np0005546420.localdomain ceph-mon[298353]: pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:05 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:05.886 262769 INFO oslo.privsep.daemon [None req-e395aa92-e855-4bfe-851e-4f92c6b48516 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp85q0qzdg/privsep.sock']
Dec 05 10:04:06 np0005546420.localdomain ceph-mon[298353]: pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:06.550 262769 INFO oslo.privsep.daemon [None req-e395aa92-e855-4bfe-851e-4f92c6b48516 - - - - - -] Spawned new privsep daemon via rootwrap
Dec 05 10:04:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:06.430 307385 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:04:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:06.436 307385 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:04:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:06.439 307385 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 05 10:04:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:06.440 307385 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307385
Dec 05 10:04:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:06.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:06.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:04:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:06.896 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 10:04:07 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:07.109 262769 INFO oslo.privsep.daemon [None req-e395aa92-e855-4bfe-851e-4f92c6b48516 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmppgc818q1/privsep.sock']
Dec 05 10:04:07 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:07.717 262769 INFO oslo.privsep.daemon [None req-e395aa92-e855-4bfe-851e-4f92c6b48516 - - - - - -] Spawned new privsep daemon via rootwrap
Dec 05 10:04:07 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:07.606 307394 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:04:07 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:07.609 307394 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:04:07 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:07.610 307394 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 05 10:04:07 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:07.611 307394 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307394
Dec 05 10:04:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:07.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:08 np0005546420.localdomain ceph-mon[298353]: pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:08 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:08.757 262769 INFO oslo.privsep.daemon [None req-e395aa92-e855-4bfe-851e-4f92c6b48516 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpbpwl0wv_/privsep.sock']
Dec 05 10:04:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:09.380 262769 INFO oslo.privsep.daemon [None req-e395aa92-e855-4bfe-851e-4f92c6b48516 - - - - - -] Spawned new privsep daemon via rootwrap
Dec 05 10:04:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:09.272 307406 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:04:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:09.277 307406 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:04:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:09.280 307406 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 05 10:04:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:09.280 307406 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307406
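The neutron_dhcp_agent lines above show oslo.privsep spawning one helper daemon per privilege context via sudo/rootwrap, each dropping to a distinct capability set (CAP_SYS_ADMIN only for namespace_cmd, CAP_NET_ADMIN|CAP_SYS_ADMIN for link_cmd). A minimal sketch of how such a context is declared with oslo.privsep; the names and config section here are illustrative, not Neutron's actual definitions:

    # Sketch: declaring a privsep context like the ones spawned above.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    link_cmd = priv_context.PrivContext(
        __name__,
        cfg_section="privsep_link",        # assumed config section name
        capabilities=[caps.CAP_NET_ADMIN,  # matches the eff/prm set logged above
                      caps.CAP_SYS_ADMIN],
    )

    @link_cmd.entrypoint
    def set_link_up(device):
        # Body runs inside the privsep daemon with the capabilities above;
        # the first call triggers the rootwrap spawn seen in the log.
        print("would bring up", device)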
Dec 05 10:04:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:09.894 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:09.895 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:04:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:09.896 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:09.897 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:04:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:10 np0005546420.localdomain ceph-mon[298353]: pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:04:10 np0005546420.localdomain podman[307416]: 2025-12-05 10:04:10.519450538 +0000 UTC m=+0.096103747 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:04:10 np0005546420.localdomain podman[307416]: 2025-12-05 10:04:10.532461046 +0000 UTC m=+0.109114295 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:04:10 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:04:10 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:10.836 262769 INFO neutron.agent.linux.ip_lib [None req-e395aa92-e855-4bfe-851e-4f92c6b48516 - - - - - -] Device tapc9513305-54 cannot be used as it has no MAC address
Dec 05 10:04:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:10.890 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:10 np0005546420.localdomain kernel: device tapc9513305-54 entered promiscuous mode
Dec 05 10:04:10 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929050.8952] manager: (tapc9513305-54): new Generic device (/org/freedesktop/NetworkManager/Devices/13)
Dec 05 10:04:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:10Z|00025|binding|INFO|Claiming lport c9513305-5405-4e6d-997a-b5e59856978a for this chassis.
Dec 05 10:04:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:10Z|00026|binding|INFO|c9513305-5405-4e6d-997a-b5e59856978a: Claiming unknown
Dec 05 10:04:10 np0005546420.localdomain systemd-udevd[307440]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:04:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:10Z|00027|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0
Dec 05 10:04:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:10Z|00028|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0
Dec 05 10:04:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:10Z|00029|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0
Dec 05 10:04:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:10.916 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-4d14eca3-0067-494d-b2d9-059bccd18a88', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d14eca3-0067-494d-b2d9-059bccd18a88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b63f7777dfa40c1bfc42162c9fd676f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfe0d10c-51af-4255-846b-8c331654da0e, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=c9513305-5405-4e6d-997a-b5e59856978a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:04:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:10.919 159503 INFO neutron.agent.ovn.metadata.agent [-] Port c9513305-5405-4e6d-997a-b5e59856978a in datapath 4d14eca3-0067-494d-b2d9-059bccd18a88 bound to our chassis
Dec 05 10:04:10 np0005546420.localdomain virtnodedevd[229575]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 05 10:04:10 np0005546420.localdomain virtnodedevd[229575]: hostname: np0005546420.localdomain
Dec 05 10:04:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapc9513305-54: No such device
Dec 05 10:04:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:10.924 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port b2bed88e-d07c-4687-9f66-0fce20a95357 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:04:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:10.924 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d14eca3-0067-494d-b2d9-059bccd18a88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:04:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:10.926 159503 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmps75ws8bz/privsep.sock']
Dec 05 10:04:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapc9513305-54: No such device
Dec 05 10:04:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapc9513305-54: No such device
Dec 05 10:04:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapc9513305-54: No such device
Dec 05 10:04:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapc9513305-54: No such device
Dec 05 10:04:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapc9513305-54: No such device
Dec 05 10:04:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapc9513305-54: No such device
Dec 05 10:04:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapc9513305-54: No such device
Dec 05 10:04:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:10Z|00030|binding|INFO|Setting lport c9513305-5405-4e6d-997a-b5e59856978a ovn-installed in OVS
Dec 05 10:04:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:10Z|00031|binding|INFO|Setting lport c9513305-5405-4e6d-997a-b5e59856978a up in Southbound
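The burst above is the DHCP port plug in sequence: the tap device appears and enters promiscuous mode, ip_lib rejects it while it still has no MAC address, and ovn-controller claims the logical port, sets ovn-installed in OVS, and marks it up in the Southbound DB. A small sketch of the MAC-availability half of that check, reading the kernel's view through standard sysfs (device name taken from the log):

    # Sketch: check whether a tap device has a usable MAC address yet,
    # similar to the ip_lib check that logged "has no MAC address".
    import os

    def device_mac(dev="tapc9513305-54"):  # device name from the log
        path = f"/sys/class/net/{dev}/address"
        if not os.path.exists(path):
            return None  # device not (yet) registered with the kernel
        mac = open(path).read().strip()
        return mac or None  # an empty address means the port is unusable

    if __name__ == "__main__":
        print(device_mac())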
Dec 05 10:04:11 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:11.583 159503 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 10:04:11 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:11.584 159503 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps75ws8bz/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 10:04:11 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:11.413 307492 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:04:11 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:11.419 307492 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:04:11 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:11.422 307492 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 05 10:04:11 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:11.423 307492 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307492
Dec 05 10:04:11 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:11.587 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[fb65ea51-46a6-4a0b-a6a6-59b04e490fef]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:11.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210

Dec 05 10:04:11 np0005546420.localdomain podman[307523]: 2025-12-05 10:04:11.909132873 +0000 UTC m=+0.081150381 container create a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:04:11 np0005546420.localdomain systemd[1]: Started libpod-conmon-a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93.scope.
Dec 05 10:04:11 np0005546420.localdomain podman[307523]: 2025-12-05 10:04:11.864556551 +0000 UTC m=+0.036574099 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:04:11 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:04:11 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474772dc4e7411bc8e30ffe0ad9d9c584466ee365e5b5657b676bb8340e8f4dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:04:11 np0005546420.localdomain podman[307523]: 2025-12-05 10:04:11.991559831 +0000 UTC m=+0.163577349 container init a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:04:12 np0005546420.localdomain podman[307523]: 2025-12-05 10:04:12.001137783 +0000 UTC m=+0.173155291 container start a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:04:12 np0005546420.localdomain dnsmasq[307541]: started, version 2.85 cachesize 150
Dec 05 10:04:12 np0005546420.localdomain dnsmasq[307541]: DNS service limited to local subnets
Dec 05 10:04:12 np0005546420.localdomain dnsmasq[307541]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:04:12 np0005546420.localdomain dnsmasq[307541]: warning: no upstream servers configured
Dec 05 10:04:12 np0005546420.localdomain dnsmasq-dhcp[307541]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:04:12 np0005546420.localdomain dnsmasq[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/addn_hosts - 0 addresses
Dec 05 10:04:12 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/host
Dec 05 10:04:12 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/opts
Dec 05 10:04:12 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:12.064 262769 INFO neutron.agent.dhcp.agent [None req-39bfc2a4-399c-425d-a55a-8978b41f95f5 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:07Z, description=, device_id=54a92bf3-8bdd-4752-81f9-df8cdef049ac, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a885430>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a885a90>], id=0d6be3a6-2b9a-4890-a4c1-10439bc2c3c7, ip_allocation=immediate, mac_address=fa:16:3e:f9:cd:30, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:04Z, description=, dns_domain=, id=4d14eca3-0067-494d-b2d9-059bccd18a88, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1134815813-network, port_security_enabled=True, project_id=1b63f7777dfa40c1bfc42162c9fd676f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=41100, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=191, status=ACTIVE, subnets=['5ee02dc7-19b8-4705-8343-283760163bca'], tags=[], tenant_id=1b63f7777dfa40c1bfc42162c9fd676f, updated_at=2025-12-05T10:04:04Z, vlan_transparent=None, network_id=4d14eca3-0067-494d-b2d9-059bccd18a88, port_security_enabled=False, project_id=1b63f7777dfa40c1bfc42162c9fd676f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=208, status=DOWN, tags=[], tenant_id=1b63f7777dfa40c1bfc42162c9fd676f, updated_at=2025-12-05T10:04:07Z on network 4d14eca3-0067-494d-b2d9-059bccd18a88
Dec 05 10:04:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:12.087 307492 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:12.087 307492 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:12.087 307492 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:12 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:12.172 262769 INFO neutron.agent.dhcp.agent [None req-fd05b025-2377-4b8b-84bd-da26316039e4 - - - - - -] DHCP configuration for ports {'60f79f33-8f4e-452b-bed3-efc4f7ae8a69'} is completed
Dec 05 10:04:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:12.185 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[937f8ecc-7407-4f33-bf98-c3def3a6be4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:12 np0005546420.localdomain dnsmasq[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/addn_hosts - 1 addresses
Dec 05 10:04:12 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/host
Dec 05 10:04:12 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/opts
Dec 05 10:04:12 np0005546420.localdomain podman[307559]: 2025-12-05 10:04:12.278985524 +0000 UTC m=+0.057611792 container kill a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:04:12 np0005546420.localdomain ceph-mon[298353]: pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:12 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:12.436 262769 INFO neutron.agent.dhcp.agent [None req-8d60b9a0-948a-43e6-804e-fd0b71034e5f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:07Z, description=, device_id=54a92bf3-8bdd-4752-81f9-df8cdef049ac, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a188670>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a188790>], id=0d6be3a6-2b9a-4890-a4c1-10439bc2c3c7, ip_allocation=immediate, mac_address=fa:16:3e:f9:cd:30, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:04Z, description=, dns_domain=, id=4d14eca3-0067-494d-b2d9-059bccd18a88, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1134815813-network, port_security_enabled=True, project_id=1b63f7777dfa40c1bfc42162c9fd676f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=41100, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=191, status=ACTIVE, subnets=['5ee02dc7-19b8-4705-8343-283760163bca'], tags=[], tenant_id=1b63f7777dfa40c1bfc42162c9fd676f, updated_at=2025-12-05T10:04:04Z, vlan_transparent=None, network_id=4d14eca3-0067-494d-b2d9-059bccd18a88, port_security_enabled=False, project_id=1b63f7777dfa40c1bfc42162c9fd676f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=208, status=DOWN, tags=[], tenant_id=1b63f7777dfa40c1bfc42162c9fd676f, updated_at=2025-12-05T10:04:07Z on network 4d14eca3-0067-494d-b2d9-059bccd18a88
Dec 05 10:04:12 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:12.512 262769 INFO neutron.agent.dhcp.agent [None req-117a4210-46af-4a4c-b584-6c5b2b4632c0 - - - - - -] DHCP configuration for ports {'0d6be3a6-2b9a-4890-a4c1-10439bc2c3c7'} is completed
Dec 05 10:04:12 np0005546420.localdomain dnsmasq[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/addn_hosts - 1 addresses
Dec 05 10:04:12 np0005546420.localdomain podman[307598]: 2025-12-05 10:04:12.683304938 +0000 UTC m=+0.075569530 container kill a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:04:12 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/host
Dec 05 10:04:12 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/opts
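
The podman "container kill" events bracketing these dnsmasq re-reads are how the DHCP agent applies allocation changes here: dnsmasq runs inside a container, so rather than signalling the process directly the agent delivers a signal through podman (consistent with SIGHUP, on which dnsmasq re-reads its --addn-hosts, --dhcp-hostsfile and --dhcp-optsfile files). A minimal sketch of that reload, assuming the podman CLI and the container name shown above:

    # Hedged sketch: reload a containerized dnsmasq by sending SIGHUP via
    # podman, matching the "container kill" journal entries above. dnsmasq
    # re-reads its addn_hosts/host/opts files on SIGHUP.
    import subprocess

    def reload_dnsmasq(name="neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88"):
        subprocess.run(["podman", "kill", "--signal", "HUP", name], check=True)
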
Dec 05 10:04:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:12.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:12.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:04:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:12.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:04:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:12.892 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:04:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:12.892 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:04:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:04:12 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:12.959 262769 INFO neutron.agent.dhcp.agent [None req-f6ac719d-dc23-4b76-8373-818c3816ad07 - - - - - -] DHCP configuration for ports {'0d6be3a6-2b9a-4890-a4c1-10439bc2c3c7'} is completed
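
Each polling cycle the ceilometer compute agent emits one "Skip pollster" line per meter when it finds no resources, which lines up with nova's "Didn't find any instances" entry above: no guests on this host yet, so every libvirt-backed meter is skipped. A throwaway sketch for summarizing such a dump (file name illustrative):

    # Hedged sketch: tally "Skip pollster" lines in a saved journal to see
    # which ceilometer meters found no resources this cycle.
    import re
    from collections import Counter

    PAT = re.compile(r"Skip pollster ([\w.]+),")

    def idle_pollsters(path="journal.txt"):
        with open(path) as fh:
            return Counter(m.group(1) for line in fh if (m := PAT.search(line)))

    # e.g. Counter({'cpu': 1, 'memory.usage': 1, 'disk.device.iops': 1, ...})
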
Dec 05 10:04:14 np0005546420.localdomain ceph-mon[298353]: pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:14 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/286176458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:14 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:14.646 262769 INFO neutron.agent.linux.ip_lib [None req-6e944492-e851-4413-a094-b74546b4b37e - - - - - -] Device tap5d03bb9c-79 cannot be used as it has no MAC address
Dec 05 10:04:14 np0005546420.localdomain kernel: device tap5d03bb9c-79 entered promiscuous mode
Dec 05 10:04:14 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929054.6803] manager: (tap5d03bb9c-79): new Generic device (/org/freedesktop/NetworkManager/Devices/14)
Dec 05 10:04:14 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:14Z|00032|binding|INFO|Claiming lport 5d03bb9c-7960-4d4c-b6e0-f33abb4191c1 for this chassis.
Dec 05 10:04:14 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:14Z|00033|binding|INFO|5d03bb9c-7960-4d4c-b6e0-f33abb4191c1: Claiming unknown
Dec 05 10:04:14 np0005546420.localdomain systemd-udevd[307629]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:04:14 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:14.692 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-b30a6f59-c719-4709-86d0-d8d44de009b2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b30a6f59-c719-4709-86d0-d8d44de009b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86cb8d3b471543839983316ef2de7b3f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b86c7586-dad3-4ed6-bcb9-b7e99ffa9ee8, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5d03bb9c-7960-4d4c-b6e0-f33abb4191c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:04:14 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:14.694 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5d03bb9c-7960-4d4c-b6e0-f33abb4191c1 in datapath b30a6f59-c719-4709-86d0-d8d44de009b2 bound to our chassis
Dec 05 10:04:14 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:14.696 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 327244e6-d11a-4cd8-9bd2-2d239cdc1461 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:04:14 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:14.696 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b30a6f59-c719-4709-86d0-d8d44de009b2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:04:14 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:14.698 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c2503a-f791-4751-83d3-0ca6c19db7e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:14 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5d03bb9c-79: No such device
Dec 05 10:04:14 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5d03bb9c-79: No such device
Dec 05 10:04:14 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:14Z|00034|binding|INFO|Setting lport 5d03bb9c-7960-4d4c-b6e0-f33abb4191c1 ovn-installed in OVS
Dec 05 10:04:14 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:14Z|00035|binding|INFO|Setting lport 5d03bb9c-7960-4d4c-b6e0-f33abb4191c1 up in Southbound
Dec 05 10:04:14 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5d03bb9c-79: No such device
Dec 05 10:04:14 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5d03bb9c-79: No such device
Dec 05 10:04:14 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5d03bb9c-79: No such device
Dec 05 10:04:14 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5d03bb9c-79: No such device
Dec 05 10:04:14 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5d03bb9c-79: No such device
Dec 05 10:04:14 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5d03bb9c-79: No such device
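
The burst of "ethtool ioctl error on tap5d03bb9c-79: No such device" lines appears to be libvirt's node-device daemon probing the tap interface while OVN is still plugging it: the SIOCETHTOOL ioctl races with device creation and returns ENODEV, which is harmless noise here. A minimal reproduction of that probe (standard ioctl constants; interface name from the log):

    # Hedged sketch: the SIOCETHTOOL/ETHTOOL_GLINK probe that yields
    # "No such device" when the queried interface does not (yet) exist.
    import array, fcntl, socket, struct

    SIOCETHTOOL = 0x8946        # linux/sockios.h
    ETHTOOL_GLINK = 0x0000000a  # linux/ethtool.h: query link status

    def link_up(ifname="tap5d03bb9c-79"):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        ecmd = array.array("B", struct.pack("II", ETHTOOL_GLINK, 0))
        ifreq = struct.pack("16sP", ifname.encode(), ecmd.buffer_info()[0])
        try:
            fcntl.ioctl(s.fileno(), SIOCETHTOOL, ifreq)
        except OSError as err:  # ENODEV -> "No such device", as logged above
            return f"ethtool ioctl error on {ifname}: {err.strerror}"
        finally:
            s.close()
        _, value = struct.unpack("II", ecmd.tobytes())
        return bool(value)
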
Dec 05 10:04:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:14.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.077853) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055077999, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2506, "num_deletes": 252, "total_data_size": 5734333, "memory_usage": 5900576, "flush_reason": "Manual Compaction"}
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055100808, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 3689873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17139, "largest_seqno": 19639, "table_properties": {"data_size": 3680505, "index_size": 5809, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20760, "raw_average_key_size": 21, "raw_value_size": 3661228, "raw_average_value_size": 3758, "num_data_blocks": 249, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928883, "oldest_key_time": 1764928883, "file_creation_time": 1764929055, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 23022 microseconds, and 10081 cpu microseconds.
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.100884) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 3689873 bytes OK
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.100915) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.102664) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.102694) EVENT_LOG_v1 {"time_micros": 1764929055102685, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.102726) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 5722911, prev total WAL file size 5722911, number of live WAL files 2.
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.105028) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. '7061786F73003131353437' seq:0, type:0; will stop at (end)
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(3603KB)], [27(17MB)]
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055105078, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 22169527, "oldest_snapshot_seqno": -1}
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12150 keys, 19415088 bytes, temperature: kUnknown
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055186327, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 19415088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19343914, "index_size": 39713, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30405, "raw_key_size": 323983, "raw_average_key_size": 26, "raw_value_size": 19135140, "raw_average_value_size": 1574, "num_data_blocks": 1523, "num_entries": 12150, "num_filter_entries": 12150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929055, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.187026) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 19415088 bytes
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.189617) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 271.7 rd, 238.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 17.6 +0.0 blob) out(18.5 +0.0 blob), read-write-amplify(11.3) write-amplify(5.3) OK, records in: 12684, records dropped: 534 output_compression: NoCompression
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.189650) EVENT_LOG_v1 {"time_micros": 1764929055189637, "job": 14, "event": "compaction_finished", "compaction_time_micros": 81593, "compaction_time_cpu_micros": 28207, "output_level": 6, "num_output_files": 1, "total_output_size": 19415088, "num_input_records": 12684, "num_output_records": 12150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055191053, "job": 14, "event": "table_file_deletion", "file_number": 29}
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055195388, "job": 14, "event": "table_file_deletion", "file_number": 27}
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.104535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.195898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.195907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.195912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.195916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:04:15 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:15.195923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
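
The compaction summary above carries its own arithmetic: JOB 14 read 3.5 MB from L0 plus 17.6 MB from L6 and wrote 18.5 MB back, so write-amplify = 18.5 / 3.5 ≈ 5.3 and read-write-amplify = (3.5 + 17.6 + 18.5) / 3.5 ≈ 11.3, exactly the figures in the log. The EVENT_LOG_v1 payloads are plain JSON, so they post-process easily (file name illustrative):

    # Hedged sketch: extract rocksdb EVENT_LOG_v1 records from a journal
    # dump, e.g. to track compaction output sizes over time.
    import json, re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(path="journal.txt"):
        with open(path) as fh:
            for line in fh:
                m = EVENT.search(line)
                if m:
                    yield json.loads(m.group(1))

    for ev in rocksdb_events():
        if ev.get("event") == "compaction_finished":
            # JOB 14 above: 19415088 bytes out of 12684 input records
            print(ev["job"], ev["total_output_size"], ev["num_input_records"])
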
Dec 05 10:04:15 np0005546420.localdomain podman[307700]: 2025-12-05 10:04:15.70103894 +0000 UTC m=+0.092901530 container create 83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b30a6f59-c719-4709-86d0-d8d44de009b2, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:04:15 np0005546420.localdomain systemd[1]: Started libpod-conmon-83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe.scope.
Dec 05 10:04:15 np0005546420.localdomain systemd[1]: tmp-crun.HzXmdp.mount: Deactivated successfully.
Dec 05 10:04:15 np0005546420.localdomain podman[307700]: 2025-12-05 10:04:15.656069115 +0000 UTC m=+0.047931705 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:04:15 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:04:15 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a85a7bef52414ae1fd73fbae05968d8ee845de09eb9db252c24fa77c0cce498f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:04:15 np0005546420.localdomain podman[307700]: 2025-12-05 10:04:15.787605115 +0000 UTC m=+0.179467705 container init 83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b30a6f59-c719-4709-86d0-d8d44de009b2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:04:15 np0005546420.localdomain podman[307700]: 2025-12-05 10:04:15.796335671 +0000 UTC m=+0.188198261 container start 83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b30a6f59-c719-4709-86d0-d8d44de009b2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:04:15 np0005546420.localdomain dnsmasq[307718]: started, version 2.85 cachesize 150
Dec 05 10:04:15 np0005546420.localdomain dnsmasq[307718]: DNS service limited to local subnets
Dec 05 10:04:15 np0005546420.localdomain dnsmasq[307718]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:04:15 np0005546420.localdomain dnsmasq[307718]: warning: no upstream servers configured
Dec 05 10:04:15 np0005546420.localdomain dnsmasq-dhcp[307718]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:04:15 np0005546420.localdomain dnsmasq[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/addn_hosts - 0 addresses
Dec 05 10:04:15 np0005546420.localdomain dnsmasq-dhcp[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/host
Dec 05 10:04:15 np0005546420.localdomain dnsmasq-dhcp[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/opts
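
The startup banner maps directly onto the agent-built command line: "DHCP, static leases only on 10.100.0.0, lease time 1d" means the range was declared static, and the three "read" lines are the files handed to dnsmasq as --addn-hosts, --dhcp-hostsfile and --dhcp-optsfile. A hedged reconstruction of that argv (flag set illustrative, not copied from the agent; paths and the /28 netmask follow the log and the 10.100.0.3/28 CIDR seen earlier):

    # Hedged sketch: roughly the dnsmasq argv neutron-dhcp-agent builds for
    # this network; exact flags vary by Neutron version.
    NET = "b30a6f59-c719-4709-86d0-d8d44de009b2"
    DHCP_DIR = f"/var/lib/neutron/dhcp/{NET}"
    dnsmasq_cmd = [
        "dnsmasq", "--no-hosts", "--no-resolv",   # hence "no upstream servers"
        f"--dhcp-hostsfile={DHCP_DIR}/host",      # static lease entries
        f"--addn-hosts={DHCP_DIR}/addn_hosts",    # DNS names for ports
        f"--dhcp-optsfile={DHCP_DIR}/opts",       # per-port DHCP options
        "--dhcp-range=10.100.0.0,static,255.255.255.240,1d",
    ]
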
Dec 05 10:04:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:15.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:15.940 262769 INFO neutron.agent.dhcp.agent [None req-024d0110-d4e8-449c-919f-c9a8fc705d59 - - - - - -] DHCP configuration for ports {'4d285d90-b3af-400a-af4e-f47a338d8762'} is completed
Dec 05 10:04:16 np0005546420.localdomain ceph-mon[298353]: pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2755924571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:04:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:04:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:04:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156739 "" "Go-http-client/1.1"
Dec 05 10:04:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:04:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19176 "" "Go-http-client/1.1"
Dec 05 10:04:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:17.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:17.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:04:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:17.888 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:17.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:17.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:17.890 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:04:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:17.890 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:18.064 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:17Z, description=, device_id=621b1edc-d8e2-4272-819e-09f25a8f24ba, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a056730>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a056760>], id=7276ba35-f562-4dbe-aaa9-28d14fa29a3e, ip_allocation=immediate, mac_address=fa:16:3e:54:8c:36, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:12Z, description=, dns_domain=, id=b30a6f59-c719-4709-86d0-d8d44de009b2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-215545664-network, port_security_enabled=True, project_id=86cb8d3b471543839983316ef2de7b3f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30750, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=254, status=ACTIVE, subnets=['fdd73ed4-6102-459d-9a7f-787d90ccf534'], tags=[], tenant_id=86cb8d3b471543839983316ef2de7b3f, updated_at=2025-12-05T10:04:12Z, vlan_transparent=None, network_id=b30a6f59-c719-4709-86d0-d8d44de009b2, port_security_enabled=False, project_id=86cb8d3b471543839983316ef2de7b3f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=327, status=DOWN, tags=[], tenant_id=86cb8d3b471543839983316ef2de7b3f, updated_at=2025-12-05T10:04:17Z on network b30a6f59-c719-4709-86d0-d8d44de009b2
Dec 05 10:04:18 np0005546420.localdomain ceph-mon[298353]: pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:18 np0005546420.localdomain dnsmasq[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/addn_hosts - 1 addresses
Dec 05 10:04:18 np0005546420.localdomain dnsmasq-dhcp[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/host
Dec 05 10:04:18 np0005546420.localdomain dnsmasq-dhcp[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/opts
Dec 05 10:04:18 np0005546420.localdomain podman[307754]: 2025-12-05 10:04:18.291577308 +0000 UTC m=+0.063250054 container kill 83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b30a6f59-c719-4709-86d0-d8d44de009b2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:04:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:04:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:04:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:04:18 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2140871061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:18.368 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
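
The resource audit sizes its RBD-backed disk by shelling out to the exact command logged above, and the round trip is visible end to end: nova's "Running cmd" / "returned: 0 in 0.478s" pair, with the mon's handle_command/dispatch lines in between. A minimal equivalent probe (same command; the stats keys are the ones ceph emits for cluster totals):

    # Hedged sketch: run the same "ceph df" probe nova logs above and report
    # cluster capacity. Assumes the ceph CLI and /etc/ceph/ceph.conf exist.
    import json, subprocess

    def ceph_capacity_gib():
        out = subprocess.check_output(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        stats = json.loads(out)["stats"]
        gib = 1024 ** 3
        return stats["total_avail_bytes"] / gib, stats["total_bytes"] / gib

    # e.g. ~(41.0, 42.0), matching the "41 GiB / 42 GiB avail" pgmap lines
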
Dec 05 10:04:18 np0005546420.localdomain podman[307768]: 2025-12-05 10:04:18.417243668 +0000 UTC m=+0.100533953 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:04:18 np0005546420.localdomain podman[307768]: 2025-12-05 10:04:18.458501838 +0000 UTC m=+0.141792133 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:04:18 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:04:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:18.525 262769 INFO neutron.agent.dhcp.agent [None req-0629eeb8-c3dd-451d-b9dd-f752de78ed0d - - - - - -] DHCP configuration for ports {'7276ba35-f562-4dbe-aaa9-28d14fa29a3e'} is completed
Dec 05 10:04:18 np0005546420.localdomain systemd[1]: tmp-crun.uUg1uE.mount: Deactivated successfully.
Dec 05 10:04:18 np0005546420.localdomain podman[307770]: 2025-12-05 10:04:18.536645436 +0000 UTC m=+0.211240586 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 05 10:04:18 np0005546420.localdomain podman[307770]: 2025-12-05 10:04:18.566595121 +0000 UTC m=+0.241190261 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 05 10:04:18 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:04:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:18.644 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:04:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:18.646 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11911MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:04:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:18.646 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:18.647 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:04:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:04:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:04:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:04:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:04:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:04:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:04:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:04:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:04:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.095 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.095 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:04:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2140871061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.186 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.264 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.265 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.289 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.331 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.359 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:04:19 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1953650006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.828 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
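The surrounding processutils lines show how nova sizes Ceph-backed DISK_GB: it shells out to "ceph df --format=json" (0.469s here, exit 0) and parses the JSON, while the ceph-mon audit lines record the dispatch of the same command. A sketch of that call pattern, assuming the ceph CLI's usual top-level 'stats' keys (field names are from the ceph tool, not from this log):

    # Sketch: shelling out to "ceph df --format=json" and reading the
    # cluster-wide totals. Assumes the standard 'stats' layout of the
    # ceph CLI output.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    total_gb = stats['total_bytes'] / 1024 ** 3
    avail_gb = stats['total_avail_bytes'] / 1024 ** 3
    print(f'{avail_gb:.0f} GiB free of {total_gb:.0f} GiB')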
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.836 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.863 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.889 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:04:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:19.890 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
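The Lock "compute_resources" acquired/released pair (held 1.243s here) comes from oslo.concurrency's lockutils, which serializes the whole resource-tracker update under one named semaphore. A minimal sketch of that decorator pattern; the semaphore name matches the log, the function is a stand-in:

    # Sketch: the lockutils pattern behind the acquired/released lines.
    # _update_available_resource here is illustrative, not nova's method.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # Everything in here runs under the same in-process semaphore
        # the resource tracker uses, so concurrent claims serialize.
        pass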
Dec 05 10:04:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:20 np0005546420.localdomain ceph-mon[298353]: pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1953650006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:20.231 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:04:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:20.233 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
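Rather than writing Chassis_Private on every SB_Global nb_cfg bump, the metadata agent waits out a short window and flushes once; the DbSetCommand transaction at 10:04:21.237 below is that delayed write. A stdlib sketch of the same one-second debounce shape (names and handling are hypothetical):

    # Sketch: a 1-second debounce like the agent's delayed chassis
    # update. write_sb_cfg and the timer plumbing are illustrative.
    import threading

    _timer = None

    def write_sb_cfg(nb_cfg):
        print('external_ids neutron:ovn-metadata-sb-cfg =', nb_cfg)

    def on_sb_global_update(nb_cfg):
        global _timer
        if _timer:
            _timer.cancel()           # coalesce bursts of updates
        _timer = threading.Timer(1.0, write_sb_cfg, args=(nb_cfg,))
        _timer.start()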
Dec 05 10:04:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:20.928 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:17Z, description=, device_id=621b1edc-d8e2-4272-819e-09f25a8f24ba, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0408e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0407c0>], id=7276ba35-f562-4dbe-aaa9-28d14fa29a3e, ip_allocation=immediate, mac_address=fa:16:3e:54:8c:36, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:12Z, description=, dns_domain=, id=b30a6f59-c719-4709-86d0-d8d44de009b2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-215545664-network, port_security_enabled=True, project_id=86cb8d3b471543839983316ef2de7b3f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30750, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=254, status=ACTIVE, subnets=['fdd73ed4-6102-459d-9a7f-787d90ccf534'], tags=[], tenant_id=86cb8d3b471543839983316ef2de7b3f, updated_at=2025-12-05T10:04:12Z, vlan_transparent=None, network_id=b30a6f59-c719-4709-86d0-d8d44de009b2, port_security_enabled=False, project_id=86cb8d3b471543839983316ef2de7b3f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=327, status=DOWN, tags=[], tenant_id=86cb8d3b471543839983316ef2de7b3f, updated_at=2025-12-05T10:04:17Z on network b30a6f59-c719-4709-86d0-d8d44de009b2
Dec 05 10:04:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3868771900' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:21.237 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:04:21 np0005546420.localdomain dnsmasq[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/addn_hosts - 1 addresses
Dec 05 10:04:21 np0005546420.localdomain dnsmasq-dhcp[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/host
Dec 05 10:04:21 np0005546420.localdomain podman[307855]: 2025-12-05 10:04:21.291825035 +0000 UTC m=+0.052348511 container kill 83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b30a6f59-c719-4709-86d0-d8d44de009b2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:04:21 np0005546420.localdomain dnsmasq-dhcp[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/opts
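This read/kill/read cluster is the DHCP agent's reload cycle: it rewrites the network's addn_hosts/host/opts files and then signals dnsmasq (the podman "container kill" above is that signal delivery); dnsmasq rereads host and option files on SIGHUP. A sketch of the pattern; the host entry shown is illustrative, the path and pid are taken from the log lines:

    # Sketch: rewrite a dnsmasq DHCP hosts file, then SIGHUP the daemon
    # so it rereads it. The entry below is illustrative only.
    import os
    import signal

    entry = 'fa:16:3e:54:8c:36,host-10-100-0-2.openstacklocal,10.100.0.2\n'
    with open('/var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/host', 'w') as f:
        f.write(entry)
    os.kill(307718, signal.SIGHUP)   # pid from the dnsmasq[307718] lines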
Dec 05 10:04:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:04:21 np0005546420.localdomain podman[307869]: 2025-12-05 10:04:21.407160839 +0000 UTC m=+0.086606577 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 10:04:21 np0005546420.localdomain podman[307869]: 2025-12-05 10:04:21.447931796 +0000 UTC m=+0.127377534 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 10:04:21 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
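Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair is a transient systemd unit executing one healthcheck, with the podman health_status and exec_died events in between recording the verdict. The same check can be driven by hand; podman exits 0 when the container is healthy:

    # Sketch: running one container healthcheck the way the timer unit
    # does, and reading the verdict from the exit code.
    import subprocess

    cid = '128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931'
    rc = subprocess.call(['podman', 'healthcheck', 'run', cid])
    print('healthy' if rc == 0 else 'unhealthy')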
Dec 05 10:04:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:21.651 262769 INFO neutron.agent.dhcp.agent [None req-28ef4a78-b86e-4e75-81d2-91231dc04aad - - - - - -] DHCP configuration for ports {'7276ba35-f562-4dbe-aaa9-28d14fa29a3e'} is completed
Dec 05 10:04:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:22.124 262769 INFO neutron.agent.linux.ip_lib [None req-2e102002-e5b4-43cb-beaa-44a0fd6e704e - - - - - -] Device tap40c14e92-17 cannot be used as it has no MAC address
Dec 05 10:04:22 np0005546420.localdomain kernel: device tap40c14e92-17 entered promiscuous mode
Dec 05 10:04:22 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929062.1553] manager: (tap40c14e92-17): new Generic device (/org/freedesktop/NetworkManager/Devices/15)
Dec 05 10:04:22 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:22Z|00036|binding|INFO|Claiming lport 40c14e92-17bf-4ad8-bc71-5d611aa76f67 for this chassis.
Dec 05 10:04:22 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:22Z|00037|binding|INFO|40c14e92-17bf-4ad8-bc71-5d611aa76f67: Claiming unknown
Dec 05 10:04:22 np0005546420.localdomain systemd-udevd[307906]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:04:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:22.171 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-64267419-8c47-450f-9ba4-afc8c103bf71', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64267419-8c47-450f-9ba4-afc8c103bf71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41095831ac6247b0a5ea030490af998f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9200024c-1bb2-4d9b-96df-67796d72a9e4, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=40c14e92-17bf-4ad8-bc71-5d611aa76f67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:04:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:22.173 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 40c14e92-17bf-4ad8-bc71-5d611aa76f67 in datapath 64267419-8c47-450f-9ba4-afc8c103bf71 bound to our chassis
Dec 05 10:04:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:22.174 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 64267419-8c47-450f-9ba4-afc8c103bf71 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:04:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:22.175 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a1e212-d92e-4f3a-8712-c89f8e3c31ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
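The matched PortBindingUpdatedEvent above is ovsdbapp's row-event machinery at work: the agent registers an event class for Port_Binding updates and reacts when a row's chassis column flips to this host, as the "bound to our chassis" line confirms. A skeletal version of such an event; the match and run bodies are illustrative, not neutron's code:

    # Sketch: an ovsdbapp row event like the PortBindingUpdatedEvent in
    # the log. Bodies are illustrative only.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only fire when the port was just claimed by a chassis.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('port', row.logical_port, 'bound to our chassis')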
Dec 05 10:04:22 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap40c14e92-17: No such device
Dec 05 10:04:22 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:22Z|00038|binding|INFO|Setting lport 40c14e92-17bf-4ad8-bc71-5d611aa76f67 ovn-installed in OVS
Dec 05 10:04:22 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:22Z|00039|binding|INFO|Setting lport 40c14e92-17bf-4ad8-bc71-5d611aa76f67 up in Southbound
Dec 05 10:04:22 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap40c14e92-17: No such device
Dec 05 10:04:22 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap40c14e92-17: No such device
Dec 05 10:04:22 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap40c14e92-17: No such device
Dec 05 10:04:22 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap40c14e92-17: No such device
Dec 05 10:04:22 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap40c14e92-17: No such device
Dec 05 10:04:22 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap40c14e92-17: No such device
Dec 05 10:04:22 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap40c14e92-17: No such device
Dec 05 10:04:22 np0005546420.localdomain ceph-mon[298353]: pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3626454032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:23 np0005546420.localdomain podman[307978]: 2025-12-05 10:04:23.085563915 +0000 UTC m=+0.079927683 container create 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:04:23 np0005546420.localdomain systemd[1]: Started libpod-conmon-0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05.scope.
Dec 05 10:04:23 np0005546420.localdomain systemd[1]: tmp-crun.GwAzQ4.mount: Deactivated successfully.
Dec 05 10:04:23 np0005546420.localdomain podman[307978]: 2025-12-05 10:04:23.042458758 +0000 UTC m=+0.036822526 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:04:23 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:04:23 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/980f7199c576e15abdbc6fb24f98160c4a354a48ac38db7fb54cf92b7bd3658a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:04:23 np0005546420.localdomain podman[307978]: 2025-12-05 10:04:23.168632984 +0000 UTC m=+0.162996752 container init 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 10:04:23 np0005546420.localdomain podman[307978]: 2025-12-05 10:04:23.177497794 +0000 UTC m=+0.171861572 container start 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:04:23 np0005546420.localdomain dnsmasq[307996]: started, version 2.85 cachesize 150
Dec 05 10:04:23 np0005546420.localdomain dnsmasq[307996]: DNS service limited to local subnets
Dec 05 10:04:23 np0005546420.localdomain dnsmasq[307996]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:04:23 np0005546420.localdomain dnsmasq[307996]: warning: no upstream servers configured
Dec 05 10:04:23 np0005546420.localdomain dnsmasq-dhcp[307996]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:04:23 np0005546420.localdomain dnsmasq[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/addn_hosts - 0 addresses
Dec 05 10:04:23 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/host
Dec 05 10:04:23 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/opts
Dec 05 10:04:23 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:23.306 262769 INFO neutron.agent.dhcp.agent [None req-13ebed86-7f35-40d7-ac79-27fee1ab0480 - - - - - -] DHCP configuration for ports {'8dc3951d-9e6b-4dd7-9953-6042801ec206'} is completed
Dec 05 10:04:24 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:04:24.151 2 INFO neutron.agent.securitygroups_rpc [None req-cb57f573-169e-481a-9ba1-5f48d09f78aa 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Security group member updated ['d4162554-7d79-4103-bc2a-c014e86c3743']
Dec 05 10:04:24 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:24.195 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:23Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0906d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a090040>], id=3337c926-0e11-468a-9ddc-efe5775aec35, ip_allocation=immediate, mac_address=fa:16:3e:a2:59:b6, name=tempest-parent-211399516, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:04Z, description=, dns_domain=, id=4d14eca3-0067-494d-b2d9-059bccd18a88, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1134815813-network, port_security_enabled=True, project_id=1b63f7777dfa40c1bfc42162c9fd676f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=41100, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=191, status=ACTIVE, subnets=['5ee02dc7-19b8-4705-8343-283760163bca'], tags=[], tenant_id=1b63f7777dfa40c1bfc42162c9fd676f, updated_at=2025-12-05T10:04:04Z, vlan_transparent=None, network_id=4d14eca3-0067-494d-b2d9-059bccd18a88, port_security_enabled=True, project_id=1b63f7777dfa40c1bfc42162c9fd676f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d4162554-7d79-4103-bc2a-c014e86c3743'], standard_attr_id=390, status=DOWN, tags=[], tenant_id=1b63f7777dfa40c1bfc42162c9fd676f, updated_at=2025-12-05T10:04:23Z on network 4d14eca3-0067-494d-b2d9-059bccd18a88
Dec 05 10:04:24 np0005546420.localdomain ceph-mon[298353]: pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:24 np0005546420.localdomain dnsmasq[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/addn_hosts - 2 addresses
Dec 05 10:04:24 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/host
Dec 05 10:04:24 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/opts
Dec 05 10:04:24 np0005546420.localdomain podman[308014]: 2025-12-05 10:04:24.438772933 +0000 UTC m=+0.064068388 container kill a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:04:24 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:24.688 262769 INFO neutron.agent.dhcp.agent [None req-140552c9-4612-4291-87fc-106b1117cba4 - - - - - -] DHCP configuration for ports {'3337c926-0e11-468a-9ddc-efe5775aec35'} is completed
Dec 05 10:04:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:26 np0005546420.localdomain ceph-mon[298353]: pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:27.037 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:26Z, description=, device_id=41ce6239-6c23-4ae2-bd0d-1751d767ec1c, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a02a490>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a02abe0>], id=ac498d68-ca37-45d8-ad98-1e2be030395c, ip_allocation=immediate, mac_address=fa:16:3e:5f:62:bc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:18Z, description=, dns_domain=, id=64267419-8c47-450f-9ba4-afc8c103bf71, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-159398337-network, port_security_enabled=True, project_id=41095831ac6247b0a5ea030490af998f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9646, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['ade22a6d-3389-4ec4-9b3f-a300e7c34d78'], tags=[], tenant_id=41095831ac6247b0a5ea030490af998f, updated_at=2025-12-05T10:04:20Z, vlan_transparent=None, network_id=64267419-8c47-450f-9ba4-afc8c103bf71, port_security_enabled=False, project_id=41095831ac6247b0a5ea030490af998f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=406, status=DOWN, tags=[], tenant_id=41095831ac6247b0a5ea030490af998f, updated_at=2025-12-05T10:04:26Z on network 64267419-8c47-450f-9ba4-afc8c103bf71
Dec 05 10:04:27 np0005546420.localdomain dnsmasq[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/addn_hosts - 1 addresses
Dec 05 10:04:27 np0005546420.localdomain podman[308051]: 2025-12-05 10:04:27.365773554 +0000 UTC m=+0.057731646 container kill 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:04:27 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/host
Dec 05 10:04:27 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/opts
Dec 05 10:04:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:27.556 262769 INFO neutron.agent.dhcp.agent [None req-ce285969-c86e-4a66-96f3-d8ba580b745a - - - - - -] DHCP configuration for ports {'ac498d68-ca37-45d8-ad98-1e2be030395c'} is completed
Dec 05 10:04:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:27.803 262769 INFO neutron.agent.linux.ip_lib [None req-1d5a6396-9c9f-432a-9d24-1df918514cd0 - - - - - -] Device tapfc269b6d-01 cannot be used as it has no MAC address
Dec 05 10:04:27 np0005546420.localdomain kernel: device tapfc269b6d-01 entered promiscuous mode
Dec 05 10:04:27 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929067.8331] manager: (tapfc269b6d-01): new Generic device (/org/freedesktop/NetworkManager/Devices/16)
Dec 05 10:04:27 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:27Z|00040|binding|INFO|Claiming lport fc269b6d-014b-4201-bcf4-5f7f3bdd4836 for this chassis.
Dec 05 10:04:27 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:27Z|00041|binding|INFO|fc269b6d-014b-4201-bcf4-5f7f3bdd4836: Claiming unknown
Dec 05 10:04:27 np0005546420.localdomain systemd-udevd[308082]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:04:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:27.849 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.3/24', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-49a4879c-0612-443d-8b44-15b1f6a18cea', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49a4879c-0612-443d-8b44-15b1f6a18cea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b63f7777dfa40c1bfc42162c9fd676f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9b3b447-6047-4886-9c84-e76d87b6b24c, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=fc269b6d-014b-4201-bcf4-5f7f3bdd4836) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:04:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:27.851 159503 INFO neutron.agent.ovn.metadata.agent [-] Port fc269b6d-014b-4201-bcf4-5f7f3bdd4836 in datapath 49a4879c-0612-443d-8b44-15b1f6a18cea bound to our chassis
Dec 05 10:04:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:27.855 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4e4a6f83-9e6c-47aa-981e-76d88a525e9b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:04:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:27.855 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 49a4879c-0612-443d-8b44-15b1f6a18cea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:04:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:27.856 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[37764ba9-aea7-4af2-9d39-0884662a40b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:27 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:27Z|00042|binding|INFO|Setting lport fc269b6d-014b-4201-bcf4-5f7f3bdd4836 ovn-installed in OVS
Dec 05 10:04:27 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:27Z|00043|binding|INFO|Setting lport fc269b6d-014b-4201-bcf4-5f7f3bdd4836 up in Southbound
Dec 05 10:04:28 np0005546420.localdomain ceph-mon[298353]: pgmap v86: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:28 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:04:28.361 2 INFO neutron.agent.securitygroups_rpc [None req-48e9a8a2-2821-4767-a782-e295c5c4e972 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Security group member updated ['d4162554-7d79-4103-bc2a-c014e86c3743']
Dec 05 10:04:28 np0005546420.localdomain podman[308135]: 2025-12-05 10:04:28.807826588 +0000 UTC m=+0.080263314 container create b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-49a4879c-0612-443d-8b44-15b1f6a18cea, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:04:28 np0005546420.localdomain systemd[1]: Started libpod-conmon-b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a.scope.
Dec 05 10:04:28 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:04:28 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad16be59cbaad5f6a61300dad6c951ce8b6792e925bdbb69c28e23baa7a55577/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:04:28 np0005546420.localdomain podman[308135]: 2025-12-05 10:04:28.774423017 +0000 UTC m=+0.046859803 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:04:28 np0005546420.localdomain podman[308135]: 2025-12-05 10:04:28.881036145 +0000 UTC m=+0.153472911 container init b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-49a4879c-0612-443d-8b44-15b1f6a18cea, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:04:28 np0005546420.localdomain podman[308135]: 2025-12-05 10:04:28.88971276 +0000 UTC m=+0.162149486 container start b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-49a4879c-0612-443d-8b44-15b1f6a18cea, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 05 10:04:28 np0005546420.localdomain dnsmasq[308153]: started, version 2.85 cachesize 150
Dec 05 10:04:28 np0005546420.localdomain dnsmasq[308153]: DNS service limited to local subnets
Dec 05 10:04:28 np0005546420.localdomain dnsmasq[308153]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:04:28 np0005546420.localdomain dnsmasq[308153]: warning: no upstream servers configured
Dec 05 10:04:28 np0005546420.localdomain dnsmasq-dhcp[308153]: DHCP, static leases only on 19.80.0.0, lease time 1d
Dec 05 10:04:28 np0005546420.localdomain dnsmasq[308153]: read /var/lib/neutron/dhcp/49a4879c-0612-443d-8b44-15b1f6a18cea/addn_hosts - 0 addresses
Dec 05 10:04:28 np0005546420.localdomain dnsmasq-dhcp[308153]: read /var/lib/neutron/dhcp/49a4879c-0612-443d-8b44-15b1f6a18cea/host
Dec 05 10:04:28 np0005546420.localdomain dnsmasq-dhcp[308153]: read /var/lib/neutron/dhcp/49a4879c-0612-443d-8b44-15b1f6a18cea/opts
Dec 05 10:04:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:28.954 262769 INFO neutron.agent.dhcp.agent [None req-d859f382-5e67-4446-8505-20cce1732cf5 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:27Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a083520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a083eb0>], id=320a1991-e020-47d2-a4c0-35897ed58f6e, ip_allocation=immediate, mac_address=fa:16:3e:0c:5c:88, name=tempest-subport-251692295, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:24Z, description=, dns_domain=, id=49a4879c-0612-443d-8b44-15b1f6a18cea, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-1034273559, port_security_enabled=True, project_id=1b63f7777dfa40c1bfc42162c9fd676f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27419, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=394, status=ACTIVE, subnets=['975d8046-68d8-4ccd-8c4d-50f700a93da8'], tags=[], tenant_id=1b63f7777dfa40c1bfc42162c9fd676f, updated_at=2025-12-05T10:04:25Z, vlan_transparent=None, network_id=49a4879c-0612-443d-8b44-15b1f6a18cea, port_security_enabled=True, project_id=1b63f7777dfa40c1bfc42162c9fd676f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d4162554-7d79-4103-bc2a-c014e86c3743'], standard_attr_id=409, status=DOWN, tags=[], tenant_id=1b63f7777dfa40c1bfc42162c9fd676f, updated_at=2025-12-05T10:04:27Z on network 49a4879c-0612-443d-8b44-15b1f6a18cea
Dec 05 10:04:29 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:29.021 262769 INFO neutron.agent.dhcp.agent [None req-53a96601-d971-4dd0-8a63-41d317e4e288 - - - - - -] DHCP configuration for ports {'64519645-9612-467a-bef1-c2a575a644f8'} is completed
Dec 05 10:04:29 np0005546420.localdomain dnsmasq[308153]: read /var/lib/neutron/dhcp/49a4879c-0612-443d-8b44-15b1f6a18cea/addn_hosts - 1 addresses
Dec 05 10:04:29 np0005546420.localdomain podman[308170]: 2025-12-05 10:04:29.173935674 +0000 UTC m=+0.060120947 container kill b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-49a4879c-0612-443d-8b44-15b1f6a18cea, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:04:29 np0005546420.localdomain dnsmasq-dhcp[308153]: read /var/lib/neutron/dhcp/49a4879c-0612-443d-8b44-15b1f6a18cea/host
Dec 05 10:04:29 np0005546420.localdomain dnsmasq-dhcp[308153]: read /var/lib/neutron/dhcp/49a4879c-0612-443d-8b44-15b1f6a18cea/opts
Dec 05 10:04:29 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:29.370 262769 INFO neutron.agent.dhcp.agent [None req-bb64983c-c76b-4ecd-8481-7b0925355dc1 - - - - - -] DHCP configuration for ports {'320a1991-e020-47d2-a4c0-35897ed58f6e'} is completed
Dec 05 10:04:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:04:29 np0005546420.localdomain podman[308191]: 2025-12-05 10:04:29.755544797 +0000 UTC m=+0.081955195 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:04:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:04:29 np0005546420.localdomain podman[308191]: 2025-12-05 10:04:29.792555748 +0000 UTC m=+0.118966156 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:04:29 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:04:29 np0005546420.localdomain podman[308212]: 2025-12-05 10:04:29.871421737 +0000 UTC m=+0.083265744 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 10:04:29 np0005546420.localdomain podman[308212]: 2025-12-05 10:04:29.914317939 +0000 UTC m=+0.126161976 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:04:29 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:04:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:30.039 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:26Z, description=, device_id=41ce6239-6c23-4ae2-bd0d-1751d767ec1c, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ff1400>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ff1970>], id=ac498d68-ca37-45d8-ad98-1e2be030395c, ip_allocation=immediate, mac_address=fa:16:3e:5f:62:bc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:18Z, description=, dns_domain=, id=64267419-8c47-450f-9ba4-afc8c103bf71, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-159398337-network, port_security_enabled=True, project_id=41095831ac6247b0a5ea030490af998f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9646, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['ade22a6d-3389-4ec4-9b3f-a300e7c34d78'], tags=[], tenant_id=41095831ac6247b0a5ea030490af998f, updated_at=2025-12-05T10:04:20Z, vlan_transparent=None, network_id=64267419-8c47-450f-9ba4-afc8c103bf71, port_security_enabled=False, project_id=41095831ac6247b0a5ea030490af998f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=406, status=DOWN, tags=[], tenant_id=41095831ac6247b0a5ea030490af998f, updated_at=2025-12-05T10:04:26Z on network 64267419-8c47-450f-9ba4-afc8c103bf71
Dec 05 10:04:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:30 np0005546420.localdomain ceph-mon[298353]: pgmap v87: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1455715051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:30 np0005546420.localdomain dnsmasq[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/addn_hosts - 1 addresses
Dec 05 10:04:30 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/host
Dec 05 10:04:30 np0005546420.localdomain podman[308252]: 2025-12-05 10:04:30.251801821 +0000 UTC m=+0.062727458 container kill 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:04:30 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/opts
Dec 05 10:04:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:30.534 262769 INFO neutron.agent.dhcp.agent [None req-cc2ef10b-994d-4d29-9cea-95a14d5d8e18 - - - - - -] DHCP configuration for ports {'ac498d68-ca37-45d8-ad98-1e2be030395c'} is completed
Dec 05 10:04:32 np0005546420.localdomain ceph-mon[298353]: pgmap v88: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:32 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 05 10:04:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:04:33 np0005546420.localdomain systemd[1]: tmp-crun.7u1gvj.mount: Deactivated successfully.
Dec 05 10:04:33 np0005546420.localdomain podman[308273]: 2025-12-05 10:04:33.515333344 +0000 UTC m=+0.092929621 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:04:33 np0005546420.localdomain podman[308273]: 2025-12-05 10:04:33.561506105 +0000 UTC m=+0.139102432 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:04:33 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:04:33 np0005546420.localdomain ceph-mon[298353]: pgmap v89: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/4069304929' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:04:34 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1053282919' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.019 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.020 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.038 281103 DEBUG nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
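The three nova_compute entries above show oslo.concurrency serializing the build: _locked_do_build_and_run_instance runs under a lock keyed on the instance UUID, and the library's wrapper emits the "Acquiring lock" / "acquired ... waited" timings. A minimal sketch of that pattern using only the public lockutils API (the function name here is illustrative, not Nova code):

    from oslo_concurrency import lockutils

    def locked_build(instance_uuid):
        # lockutils.lock() is a context manager; the DEBUG pairs in the log
        # ("Acquiring lock ...", "Lock ... acquired :: waited 0.001s") come
        # from its internal logging.
        with lockutils.lock(instance_uuid):
            pass  # the build_and_run_instance body would go here

    locked_build("e3717d5b-7a3e-4d08-82c4-1fc3cef82d42")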
Dec 05 10:04:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.129 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.131 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.138 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.138 281103 INFO nova.compute.claims [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Claim successful on node np0005546420.localdomain
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.337 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:04:35 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1074536360' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.795 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
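The `ceph df` round trip above is nova shelling out through oslo.concurrency's processutils, which logs the "Running cmd" / "returned: 0 in ...s" pair. A hedged sketch of the equivalent call, not the actual nova.storage.rbd_utils code:

    import json
    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises on non-zero exit.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)  # cluster/pool usage consumed by the RBD driver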
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.803 281103 DEBUG nova.compute.provider_tree [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.823 281103 DEBUG nova.scheduler.client.report [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
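The inventory record above fixes the capacity Placement schedules against: per resource class, usable = (total - reserved) * allocation_ratio. Worked out for this host's logged values:

    # Capacity implied by the inventory in the previous entry.
    vcpu_capacity   = (8     - 0)   * 16.0  # 128 schedulable vCPUs
    memory_capacity = (15738 - 512) * 1.0   # 15226 MiB of RAM
    disk_capacity   = (41    - 0)   * 1.0   # 41 GiB of disk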
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.853 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.854 281103 DEBUG nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.911 281103 DEBUG nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.912 281103 DEBUG nova.network.neutron [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 10:04:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:35.972 281103 INFO nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 10:04:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:36.002 281103 DEBUG nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 10:04:36 np0005546420.localdomain ceph-mon[298353]: pgmap v90: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:04:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1074536360' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:36.292 281103 DEBUG nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 10:04:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:36.295 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 10:04:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:36.295 281103 INFO nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Creating image(s)
Dec 05 10:04:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:36.338 281103 DEBUG nova.storage.rbd_utils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] rbd image e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:04:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:36.381 281103 DEBUG nova.storage.rbd_utils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] rbd image e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:04:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:36.420 281103 DEBUG nova.storage.rbd_utils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] rbd image e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:04:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:36.426 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Acquiring lock "803b7e0e18f6b644279a18f87a62b7eb9e1015e6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:36.428 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lock "803b7e0e18f6b644279a18f87a62b7eb9e1015e6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:36 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:04:36.496 2 INFO neutron.agent.securitygroups_rpc [None req-83762cff-c285-4c7f-b0ea-dfcff003231b 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Security group rule updated ['13f09786-c3de-4f80-a431-bd4239c2ee01']
Dec 05 10:04:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:37.569 281103 DEBUG nova.virt.libvirt.imagebackend [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Image locations are: [{'url': 'rbd://79feddb1-4bfc-557f-83b9-0d57c9f66c1b/images/3647d20f-5e09-41b2-a6f3-f320b9e4e343/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://79feddb1-4bfc-557f-83b9-0d57c9f66c1b/images/3647d20f-5e09-41b2-a6f3-f320b9e4e343/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 05 10:04:38 np0005546420.localdomain ceph-mon[298353]: pgmap v91: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.462 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.545 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.part --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.547 281103 DEBUG nova.virt.images [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] 3647d20f-5e09-41b2-a6f3-f320b9e4e343 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.548 281103 DEBUG nova.privsep.utils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
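The direct I/O probe above boils down to attempting a page-aligned O_DIRECT write in the instances directory. A simplified sketch of that check (nova.privsep.utils does this with more error handling; the test file name is illustrative):

    import mmap
    import os

    def supports_direct_io(dirpath):
        testfile = os.path.join(dirpath, '.directio.test')
        fd = None
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            buf = mmap.mmap(-1, 4096)  # mmap gives the alignment O_DIRECT needs
            os.write(fd, buf)
            return True
        except OSError:
            return False  # the filesystem (or kernel) refused O_DIRECT
        finally:
            if fd is not None:
                os.close(fd)
            try:
                os.unlink(testfile)
            except OSError:
                pass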
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.548 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.part /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.726 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.part /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.converted" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.729 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.800 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.converted --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
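The sequence above traces nova.virt.images.fetch_to_raw: probe the downloaded .part file, convert qcow2 to raw, probe the result. The "/usr/bin/python3 -m oslo_concurrency.prlimit --as=... --cpu=30" prefix in the logged commands is what processutils adds when given a ProcessLimits object. A sketch under that assumption, with the base path taken verbatim from the log:

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6'
    # Bounds match the logged wrapper: --as=1073741824 --cpu=30.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    processutils.execute('env', 'LC_ALL=C', 'LANG=C',
                         'qemu-img', 'info', base + '.part',
                         '--force-share', '--output=json', prlimit=limits)
    processutils.execute('qemu-img', 'convert', '-t', 'none', '-O', 'raw',
                         '-f', 'qcow2', base + '.part', base + '.converted')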
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.801 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lock "803b7e0e18f6b644279a18f87a62b7eb9e1015e6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.832 281103 DEBUG nova.storage.rbd_utils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] rbd image e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:04:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:38.837 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6 e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.442 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6 e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.527 281103 DEBUG nova.storage.rbd_utils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] resizing rbd image e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
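With the base image converted, nova pushes it into the vms pool and grows it to the flavor's 1 GiB root disk (the resize itself goes through the librbd python binding rather than the CLI). A CLI-equivalent sketch of the two steps logged above:

    import subprocess

    base = '/var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6'
    disk = 'e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk'
    auth = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.run(['rbd', 'import', '--pool', 'vms', base, disk,
                    '--image-format=2'] + auth, check=True)
    # 1073741824 bytes == 1 GiB, matching the logged resize target.
    subprocess.run(['rbd', 'resize', '--pool', 'vms', '--size', '1G', disk]
                   + auth, check=True)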
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.673 281103 DEBUG nova.objects.instance [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lazy-loading 'migration_context' on Instance uuid e3717d5b-7a3e-4d08-82c4-1fc3cef82d42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.688 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.688 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Ensure instance console log exists: /var/lib/nova/instances/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.689 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.690 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.690 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:39 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:04:39.887 2 INFO neutron.agent.securitygroups_rpc [None req-9b0cbbee-3d9a-4a2a-b636-071077dfdc6c 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Security group rule updated ['13f09786-c3de-4f80-a431-bd4239c2ee01']
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.937 281103 WARNING oslo_policy.policy [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.938 281103 WARNING oslo_policy.policy [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
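The twice-logged warning names its own remediation: convert the JSON policy file with oslo.policy's converter. A sketch of the invocation per the documentation linked in the warning (the policy file paths here are illustrative):

    import subprocess

    subprocess.run(['oslopolicy-convert-json-to-yaml',
                    '--namespace', 'nova',
                    '--policy-file', '/etc/nova/policy.json',
                    '--output-file', '/etc/nova/policy.yaml'],
                   check=True)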
Dec 05 10:04:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:39.942 281103 DEBUG nova.policy [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '21c29f3a56e54486b61ecc72cb35cc3e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b63f7777dfa40c1bfc42162c9fd676f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 10:04:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:40 np0005546420.localdomain ceph-mon[298353]: pgmap v92: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec 05 10:04:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:04:41 np0005546420.localdomain podman[308497]: 2025-12-05 10:04:41.506873629 +0000 UTC m=+0.085928877 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 05 10:04:41 np0005546420.localdomain podman[308497]: 2025-12-05 10:04:41.521844096 +0000 UTC m=+0.100899394 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:04:41 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:04:42 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:42.093 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005546420.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:23Z, description=, device_id=e3717d5b-7a3e-4d08-82c4-1fc3cef82d42, device_owner=compute:nova, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a170dc0>], dns_domain=, dns_name=tempest-liveautoblockmigrationv225test-server-2007800372, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0de7c0>], id=3337c926-0e11-468a-9ddc-efe5775aec35, ip_allocation=immediate, mac_address=fa:16:3e:a2:59:b6, name=tempest-parent-211399516, network_id=4d14eca3-0067-494d-b2d9-059bccd18a88, port_security_enabled=True, project_id=1b63f7777dfa40c1bfc42162c9fd676f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['d4162554-7d79-4103-bc2a-c014e86c3743'], standard_attr_id=390, status=DOWN, tags=[], tenant_id=1b63f7777dfa40c1bfc42162c9fd676f, trunk_details=sub_ports=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ff8850>], trunk_id=fc4bfdf3-14cc-44e0-9079-e9511071cfff, updated_at=2025-12-05T10:04:40Z on network 4d14eca3-0067-494d-b2d9-059bccd18a88
Dec 05 10:04:42 np0005546420.localdomain ceph-mon[298353]: pgmap v93: 177 pgs: 177 active+clean; 205 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 99 op/s
Dec 05 10:04:42 np0005546420.localdomain systemd[1]: tmp-crun.i3mW27.mount: Deactivated successfully.
Dec 05 10:04:42 np0005546420.localdomain dnsmasq[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/addn_hosts - 2 addresses
Dec 05 10:04:42 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/host
Dec 05 10:04:42 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/opts
Dec 05 10:04:42 np0005546420.localdomain podman[308533]: 2025-12-05 10:04:42.314718624 +0000 UTC m=+0.067190074 container kill a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:04:42 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:42.530 262769 INFO neutron.agent.dhcp.agent [None req-6c979862-f288-47f8-b116-8050baa45580 - - - - - -] DHCP configuration for ports {'3337c926-0e11-468a-9ddc-efe5775aec35'} is completed
Dec 05 10:04:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:43.672 281103 DEBUG nova.network.neutron [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Successfully updated port: 3337c926-0e11-468a-9ddc-efe5775aec35 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 10:04:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:43.696 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Acquiring lock "refresh_cache-e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:04:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:43.696 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Acquired lock "refresh_cache-e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:04:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:43.697 281103 DEBUG nova.network.neutron [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 10:04:44 np0005546420.localdomain ceph-mon[298353]: pgmap v94: 177 pgs: 177 active+clean; 238 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Dec 05 10:04:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:44.364 281103 DEBUG nova.network.neutron [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 10:04:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:44.578 281103 DEBUG nova.compute.manager [req-cc105f44-8c65-41fa-b34f-119487e08635 req-397fdddf-7c41-4c7e-906b-b37a4220e91f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-changed-3337c926-0e11-468a-9ddc-efe5775aec35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:04:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:44.579 281103 DEBUG nova.compute.manager [req-cc105f44-8c65-41fa-b34f-119487e08635 req-397fdddf-7c41-4c7e-906b-b37a4220e91f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Refreshing instance network info cache due to event network-changed-3337c926-0e11-468a-9ddc-efe5775aec35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 10:04:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:44.579 281103 DEBUG oslo_concurrency.lockutils [req-cc105f44-8c65-41fa-b34f-119487e08635 req-397fdddf-7c41-4c7e-906b-b37a4220e91f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "refresh_cache-e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:04:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.343 281103 DEBUG nova.network.neutron [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Updating instance_info_cache with network_info: [{"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.364 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Releasing lock "refresh_cache-e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.365 281103 DEBUG nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Instance network_info: |[{"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.365 281103 DEBUG oslo_concurrency.lockutils [req-cc105f44-8c65-41fa-b34f-119487e08635 req-397fdddf-7c41-4c7e-906b-b37a4220e91f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquired lock "refresh_cache-e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.366 281103 DEBUG nova.network.neutron [req-cc105f44-8c65-41fa-b34f-119487e08635 req-397fdddf-7c41-4c7e-906b-b37a4220e91f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Refreshing network info cache for port 3337c926-0e11-468a-9ddc-efe5775aec35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.371 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Start _get_guest_xml network_info=[{"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T10:03:24Z,direct_url=<?>,disk_format='qcow2',id=3647d20f-5e09-41b2-a6f3-f320b9e4e343,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e6ca8a92050741d3a93772e6c1b0d704',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T10:03:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'image_id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.392 281103 WARNING nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.396 281103 DEBUG nova.virt.libvirt.host [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Searching host: 'np0005546420.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.397 281103 DEBUG nova.virt.libvirt.host [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.399 281103 DEBUG nova.virt.libvirt.host [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Searching host: 'np0005546420.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.400 281103 DEBUG nova.virt.libvirt.host [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
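The four host.py entries above are nova's two-step probe: no cpu controller found on a cgroup v1 mount, then one found via cgroup v2. On a unified (v2-only) host like this one, the second check amounts to reading the root controllers file; a simplified sketch, not nova's actual helper:

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        # The unified hierarchy lists available controllers space-separated,
        # e.g. "cpuset cpu io memory pids".
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no cgroup v2 unified mount on this host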
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.401 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.401 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T10:03:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='445199a6-1f73-405e-82f4-8bd8c4bb34c6',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T10:03:24Z,direct_url=<?>,disk_format='qcow2',id=3647d20f-5e09-41b2-a6f3-f320b9e4e343,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e6ca8a92050741d3a93772e6c1b0d704',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T10:03:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.402 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.403 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.403 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.404 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.404 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.405 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.405 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.406 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.406 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.406 281103 DEBUG nova.virt.hardware [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
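The topology walk above runs unconstrained (flavor and image limits all 0:0:0, ceiling 65536 on each axis), so a single vCPU admits exactly one topology: 1 socket x 1 core x 1 thread. A toy enumeration that reproduces the logged result, not nova.virt.hardware itself:

    def possible_topologies(vcpus, max_each=65536):
        # Yield (sockets, cores, threads) factorizations of the vCPU count
        # with every axis within the limit.
        for sockets in range(1, min(vcpus, max_each) + 1):
            for cores in range(1, min(vcpus // sockets, max_each) + 1):
                if vcpus % (sockets * cores) == 0:
                    threads = vcpus // (sockets * cores)
                    if threads <= max_each:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- "Got 1 possible topologies"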
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.413 281103 DEBUG nova.privsep.utils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.414 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:04:45 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1365789327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.860 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.901 281103 DEBUG nova.storage.rbd_utils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] rbd image e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:04:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:45.907 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:46 np0005546420.localdomain ceph-mon[298353]: pgmap v95: 177 pgs: 177 active+clean; 238 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Dec 05 10:04:46 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1365789327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:04:46 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1647312907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:04:46 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:04:46 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3567328624' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.327 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.330 281103 DEBUG nova.virt.libvirt.vif [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T10:04:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2007800372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005546420.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2007800372',id=7,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005546420.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005546420.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b63f7777dfa40c1bfc42162c9fd676f',ramdisk_id='',reservation_id='r-kprv5g3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-642400384',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-642400384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T10:04:36Z,user_data=None,user_id='21c29f3a56e54486b61ecc72cb35cc3e',uuid=e3717d5b-7a3e-4d08-82c4-1fc3cef82d42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.331 281103 DEBUG nova.network.os_vif_util [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Converting VIF {"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.332 281103 DEBUG nova.network.os_vif_util [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:59:b6,bridge_name='br-int',has_traffic_filtering=True,id=3337c926-0e11-468a-9ddc-efe5775aec35,network=Network(4d14eca3-0067-494d-b2d9-059bccd18a88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3337c926-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.336 281103 DEBUG nova.objects.instance [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lazy-loading 'pci_devices' on Instance uuid e3717d5b-7a3e-4d08-82c4-1fc3cef82d42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.355 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] End _get_guest_xml xml=<domain type="kvm">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <uuid>e3717d5b-7a3e-4d08-82c4-1fc3cef82d42</uuid>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <name>instance-00000007</name>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <memory>131072</memory>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <vcpu>1</vcpu>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <metadata>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-2007800372</nova:name>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <nova:creationTime>2025-12-05 10:04:45</nova:creationTime>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <nova:flavor name="m1.nano">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <nova:memory>128</nova:memory>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <nova:disk>1</nova:disk>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <nova:swap>0</nova:swap>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <nova:vcpus>1</nova:vcpus>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       </nova:flavor>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <nova:owner>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <nova:user uuid="21c29f3a56e54486b61ecc72cb35cc3e">tempest-LiveAutoBlockMigrationV225Test-642400384-project-member</nova:user>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <nova:project uuid="1b63f7777dfa40c1bfc42162c9fd676f">tempest-LiveAutoBlockMigrationV225Test-642400384</nova:project>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       </nova:owner>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <nova:root type="image" uuid="3647d20f-5e09-41b2-a6f3-f320b9e4e343"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <nova:ports>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <nova:port uuid="3337c926-0e11-468a-9ddc-efe5775aec35">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         </nova:port>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       </nova:ports>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     </nova:instance>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   </metadata>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <sysinfo type="smbios">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <system>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <entry name="manufacturer">RDO</entry>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <entry name="product">OpenStack Compute</entry>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <entry name="serial">e3717d5b-7a3e-4d08-82c4-1fc3cef82d42</entry>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <entry name="uuid">e3717d5b-7a3e-4d08-82c4-1fc3cef82d42</entry>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <entry name="family">Virtual Machine</entry>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     </system>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   </sysinfo>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <os>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <boot dev="hd"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <smbios mode="sysinfo"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   </os>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <features>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <acpi/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <apic/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <vmcoreinfo/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   </features>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <clock offset="utc">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <timer name="hpet" present="no"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   </clock>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <cpu mode="host-model" match="exact">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   </cpu>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   <devices>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <disk type="network" device="disk">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <driver type="raw" cache="none"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <source protocol="rbd" name="vms/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.103" port="6789"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.104" port="6789"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.105" port="6789"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       </source>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <auth username="openstack">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <secret type="ceph" uuid="79feddb1-4bfc-557f-83b9-0d57c9f66c1b"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       </auth>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <target dev="vda" bus="virtio"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     </disk>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <disk type="network" device="cdrom">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <driver type="raw" cache="none"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <source protocol="rbd" name="vms/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk.config">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.103" port="6789"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.104" port="6789"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.105" port="6789"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       </source>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <auth username="openstack">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:         <secret type="ceph" uuid="79feddb1-4bfc-557f-83b9-0d57c9f66c1b"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       </auth>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <target dev="sda" bus="sata"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     </disk>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <interface type="ethernet">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <mac address="fa:16:3e:a2:59:b6"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <model type="virtio"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <mtu size="1442"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <target dev="tap3337c926-0e"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     </interface>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <serial type="pty">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <log file="/var/lib/nova/instances/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42/console.log" append="off"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     </serial>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <video>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <model type="virtio"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     </video>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <input type="tablet" bus="usb"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <rng model="virtio">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <backend model="random">/dev/urandom</backend>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     </rng>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <controller type="usb" index="0"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     <memballoon model="virtio">
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:       <stats period="10"/>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:     </memballoon>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:   </devices>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: </domain>
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.357 281103 DEBUG nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Preparing to wait for external event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.357 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.358 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.358 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.359 281103 DEBUG nova.virt.libvirt.vif [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T10:04:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2007800372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005546420.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2007800372',id=7,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005546420.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005546420.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b63f7777dfa40c1bfc42162c9fd676f',ramdisk_id='',reservation_id='r-kprv5g3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-642400384',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-642400384-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T10:04:36Z,user_data=None,user_id='21c29f3a56e54486b61ecc72cb35cc3e',uuid=e3717d5b-7a3e-4d08-82c4-1fc3cef82d42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.360 281103 DEBUG nova.network.os_vif_util [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Converting VIF {"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.361 281103 DEBUG nova.network.os_vif_util [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:59:b6,bridge_name='br-int',has_traffic_filtering=True,id=3337c926-0e11-468a-9ddc-efe5775aec35,network=Network(4d14eca3-0067-494d-b2d9-059bccd18a88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3337c926-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.362 281103 DEBUG os_vif [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:59:b6,bridge_name='br-int',has_traffic_filtering=True,id=3337c926-0e11-468a-9ddc-efe5775aec35,network=Network(4d14eca3-0067-494d-b2d9-059bccd18a88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3337c926-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.465 281103 DEBUG ovsdbapp.backend.ovs_idl [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.466 281103 DEBUG ovsdbapp.backend.ovs_idl [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.467 281103 DEBUG ovsdbapp.backend.ovs_idl [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.468 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.469 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.469 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.470 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.472 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.476 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.506 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.506 281103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.506 281103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 10:04:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:46.508 281103 INFO oslo.privsep.daemon [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmplqw5dibd/privsep.sock']
Dec 05 10:04:47 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3567328624' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.155 281103 INFO oslo.privsep.daemon [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Spawned new privsep daemon via rootwrap
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.049 308620 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.053 308620 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.055 308620 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.055 308620 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308620
Dec 05 10:04:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:04:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:04:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:04:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160385 "" "Go-http-client/1.1"
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.215 281103 DEBUG nova.network.neutron [req-cc105f44-8c65-41fa-b34f-119487e08635 req-397fdddf-7c41-4c7e-906b-b37a4220e91f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Updated VIF entry in instance network info cache for port 3337c926-0e11-468a-9ddc-efe5775aec35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.216 281103 DEBUG nova.network.neutron [req-cc105f44-8c65-41fa-b34f-119487e08635 req-397fdddf-7c41-4c7e-906b-b37a4220e91f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Updating instance_info_cache with network_info: [{"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.231 281103 DEBUG oslo_concurrency.lockutils [req-cc105f44-8c65-41fa-b34f-119487e08635 req-397fdddf-7c41-4c7e-906b-b37a4220e91f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Releasing lock "refresh_cache-e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:04:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:04:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20140 "" "Go-http-client/1.1"
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.456 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.457 281103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3337c926-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.458 281103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3337c926-0e, col_values=(('external_ids', {'iface-id': '3337c926-0e11-468a-9ddc-efe5775aec35', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:59:b6', 'vm-uuid': 'e3717d5b-7a3e-4d08-82c4-1fc3cef82d42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.480 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.484 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.489 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.490 281103 INFO os_vif [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:59:b6,bridge_name='br-int',has_traffic_filtering=True,id=3337c926-0e11-468a-9ddc-efe5775aec35,network=Network(4d14eca3-0067-494d-b2d9-059bccd18a88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3337c926-0e')
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.598 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.599 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.600 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] No VIF found with MAC fa:16:3e:a2:59:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.601 281103 INFO nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Using config drive
Dec 05 10:04:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:47.645 281103 DEBUG nova.storage.rbd_utils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] rbd image e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:04:47 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:04:47.926 2 INFO neutron.agent.securitygroups_rpc [req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 req-81daabdd-a902-4eca-b1c0-004b68779d1e 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Security group member updated ['13f09786-c3de-4f80-a431-bd4239c2ee01']
Dec 05 10:04:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:48.089 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:47Z, description=, device_id=fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a02aa90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a02ad60>], id=24f19dd4-108e-4a77-b44d-59a215801baa, ip_allocation=immediate, mac_address=fa:16:3e:5a:5d:78, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:18Z, description=, dns_domain=, id=64267419-8c47-450f-9ba4-afc8c103bf71, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-159398337-network, port_security_enabled=True, project_id=41095831ac6247b0a5ea030490af998f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9646, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=347, status=ACTIVE, subnets=['ade22a6d-3389-4ec4-9b3f-a300e7c34d78'], tags=[], tenant_id=41095831ac6247b0a5ea030490af998f, updated_at=2025-12-05T10:04:20Z, vlan_transparent=None, network_id=64267419-8c47-450f-9ba4-afc8c103bf71, port_security_enabled=True, project_id=41095831ac6247b0a5ea030490af998f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['13f09786-c3de-4f80-a431-bd4239c2ee01'], standard_attr_id=486, status=DOWN, tags=[], tenant_id=41095831ac6247b0a5ea030490af998f, updated_at=2025-12-05T10:04:47Z on network 64267419-8c47-450f-9ba4-afc8c103bf71
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.090 281103 INFO nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Creating config drive at /var/lib/nova/instances/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42/disk.config
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.094 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi45xvtqv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:48 np0005546420.localdomain ceph-mon[298353]: pgmap v96: 177 pgs: 177 active+clean; 238 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.225 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi45xvtqv" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.275 281103 DEBUG nova.storage.rbd_utils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] rbd image e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.280 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42/disk.config e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:04:48 np0005546420.localdomain dnsmasq[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/addn_hosts - 2 addresses
Dec 05 10:04:48 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/host
Dec 05 10:04:48 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/opts
Dec 05 10:04:48 np0005546420.localdomain podman[308678]: 2025-12-05 10:04:48.327887716 +0000 UTC m=+0.063508411 container kill 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.543 281103 DEBUG oslo_concurrency.processutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42/disk.config e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.544 281103 INFO nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Deleting local config drive /var/lib/nova/instances/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42/disk.config because it was imported into RBD.
Dec 05 10:04:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:04:48 np0005546420.localdomain systemd[1]: Started libvirt secret daemon.
Dec 05 10:04:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:48.620 262769 INFO neutron.agent.dhcp.agent [None req-4f60c952-4219-4b5b-a221-6f6e16b23731 - - - - - -] DHCP configuration for ports {'24f19dd4-108e-4a77-b44d-59a215801baa'} is completed
Dec 05 10:04:48 np0005546420.localdomain kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 05 10:04:48 np0005546420.localdomain kernel: device tap3337c926-0e entered promiscuous mode
Dec 05 10:04:48 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929088.6543] manager: (tap3337c926-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/17)
Dec 05 10:04:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:48Z|00044|binding|INFO|Claiming lport 3337c926-0e11-468a-9ddc-efe5775aec35 for this chassis.
Dec 05 10:04:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:48Z|00045|binding|INFO|3337c926-0e11-468a-9ddc-efe5775aec35: Claiming fa:16:3e:a2:59:b6 10.100.0.9
Dec 05 10:04:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:48Z|00046|binding|INFO|Claiming lport 320a1991-e020-47d2-a4c0-35897ed58f6e for this chassis.
Dec 05 10:04:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:48Z|00047|binding|INFO|320a1991-e020-47d2-a4c0-35897ed58f6e: Claiming fa:16:3e:0c:5c:88 19.80.0.199
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.656 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:48 np0005546420.localdomain systemd-udevd[308770]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:04:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:04:48 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:48.669 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:5c:88 19.80.0.199'], port_security=['fa:16:3e:0c:5c:88 19.80.0.199'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['3337c926-0e11-468a-9ddc-efe5775aec35'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-251692295', 'neutron:cidrs': '19.80.0.199/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49a4879c-0612-443d-8b44-15b1f6a18cea', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-251692295', 'neutron:project_id': '1b63f7777dfa40c1bfc42162c9fd676f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4162554-7d79-4103-bc2a-c014e86c3743', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=f9b3b447-6047-4886-9c84-e76d87b6b24c, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=320a1991-e020-47d2-a4c0-35897ed58f6e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:04:48 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:48.672 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:59:b6 10.100.0.9'], port_security=['fa:16:3e:a2:59:b6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-211399516', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e3717d5b-7a3e-4d08-82c4-1fc3cef82d42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d14eca3-0067-494d-b2d9-059bccd18a88', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-211399516', 'neutron:project_id': '1b63f7777dfa40c1bfc42162c9fd676f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4162554-7d79-4103-bc2a-c014e86c3743', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfe0d10c-51af-4255-846b-8c331654da0e, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=3337c926-0e11-468a-9ddc-efe5775aec35) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:04:48 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:48.673 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 320a1991-e020-47d2-a4c0-35897ed58f6e in datapath 49a4879c-0612-443d-8b44-15b1f6a18cea bound to our chassis
Dec 05 10:04:48 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:48.677 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4e4a6f83-9e6c-47aa-981e-76d88a525e9b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:04:48 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:48.678 159503 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49a4879c-0612-443d-8b44-15b1f6a18cea
Dec 05 10:04:48 np0005546420.localdomain systemd[1]: tmp-crun.bGcz19.mount: Deactivated successfully.
Dec 05 10:04:48 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929088.6882] device (tap3337c926-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 05 10:04:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:48Z|00048|binding|INFO|Setting lport 3337c926-0e11-468a-9ddc-efe5775aec35 ovn-installed in OVS
Dec 05 10:04:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:48Z|00049|binding|INFO|Setting lport 3337c926-0e11-468a-9ddc-efe5775aec35 up in Southbound
Dec 05 10:04:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:48Z|00050|binding|INFO|Setting lport 320a1991-e020-47d2-a4c0-35897ed58f6e up in Southbound
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.688 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:48 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929088.6897] device (tap3337c926-0e): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Dec 05 10:04:48 np0005546420.localdomain podman[308723]: 2025-12-05 10:04:48.693063095 +0000 UTC m=+0.115827980 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.698 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:48 np0005546420.localdomain podman[308723]: 2025-12-05 10:04:48.70272612 +0000 UTC m=+0.125491005 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:04:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:48.705 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:48 np0005546420.localdomain systemd-machined[203266]: New machine qemu-1-instance-00000007.
Dec 05 10:04:48 np0005546420.localdomain systemd[1]: Started Virtual Machine qemu-1-instance-00000007.
Dec 05 10:04:48 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:04:48 np0005546420.localdomain podman[308773]: 2025-12-05 10:04:48.774477812 +0000 UTC m=+0.091916720 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:04:48 np0005546420.localdomain podman[308773]: 2025-12-05 10:04:48.809479392 +0000 UTC m=+0.126918300 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 05 10:04:48 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:04:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:04:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:04:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:04:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:04:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:04:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:04:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:04:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:04:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:04:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.128 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[942c6d3e-1827-4cae-a239-735e00698113]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.130 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap49a4879c-01 in ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 10:04:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:49.134 281103 DEBUG nova.virt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Emitting event <LifecycleEvent: 1764929089.1331224, e3717d5b-7a3e-4d08-82c4-1fc3cef82d42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.134 307492 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap49a4879c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.134 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[2162d126-024c-4834-bc3c-c50ec379408a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:49.135 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] VM Started (Lifecycle Event)
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.135 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4c51d1-a9c6-45b8-bd36-b0933b4f01a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:49.152 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.156 159609 DEBUG oslo.privsep.daemon [-] privsep: reply[c4723cea-9362-4a6d-ae3f-2157f6baa6bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:49.156 281103 DEBUG nova.virt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Emitting event <LifecycleEvent: 1764929089.1380668, e3717d5b-7a3e-4d08-82c4-1fc3cef82d42 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:04:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:49.157 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] VM Paused (Lifecycle Event)
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.169 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad7cf0f-c6ef-4dfb-b577-1d2f91a4bb20]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.170 159503 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpumy8efiz/privsep.sock']
Dec 05 10:04:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:49.189 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:04:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:49.192 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 10:04:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:49.215 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 10:04:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:49Z|00051|memory|INFO|peak resident set size grew 51% in last 2252.1 seconds, from 13108 kB to 19832 kB
Dec 05 10:04:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:49Z|00052|memory|INFO|idl-cells-OVN_Southbound:11643 idl-cells-Open_vSwitch:1098 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:173 lflow-cache-entries-cache-matches:220 lflow-cache-size-KB:643 local_datapath_usage-KB:2 ofctrl_desired_flow_usage-KB:344 ofctrl_installed_flow_usage-KB:252 ofctrl_sb_flow_ref_usage-KB:133
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.782 159503 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.783 159503 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpumy8efiz/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.660 308862 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.666 308862 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.669 308862 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.669 308862 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308862
Dec 05 10:04:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:49.787 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5aaf26-5543-438a-9ee4-3425e2fbe991]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:50 np0005546420.localdomain ceph-mon[298353]: pgmap v97: 177 pgs: 177 active+clean; 238 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 05 10:04:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:50.200 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.323 308862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.323 308862 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.323 308862 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:50.448 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005546419.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:47Z, description=, device_id=fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71, device_owner=compute:nova, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a053c40>], dns_domain=, dns_name=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a053cd0>], id=24f19dd4-108e-4a77-b44d-59a215801baa, ip_allocation=immediate, mac_address=fa:16:3e:5a:5d:78, name=, network_id=64267419-8c47-450f-9ba4-afc8c103bf71, port_security_enabled=True, project_id=41095831ac6247b0a5ea030490af998f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['13f09786-c3de-4f80-a431-bd4239c2ee01'], standard_attr_id=486, status=DOWN, tags=[], tenant_id=41095831ac6247b0a5ea030490af998f, updated_at=2025-12-05T10:04:49Z on network 64267419-8c47-450f-9ba4-afc8c103bf71
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.812 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[cec48262-28fd-4615-9dd9-d20255e531fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:50 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929090.8414] manager: (tap49a4879c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/18)
Dec 05 10:04:50 np0005546420.localdomain dnsmasq[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/addn_hosts - 2 addresses
Dec 05 10:04:50 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/host
Dec 05 10:04:50 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/opts
Dec 05 10:04:50 np0005546420.localdomain podman[308883]: 2025-12-05 10:04:50.843866296 +0000 UTC m=+0.070095363 container kill 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 05 10:04:50 np0005546420.localdomain systemd-udevd[308784]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.839 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[34e68f6a-262a-41b8-845e-9497c0dd34eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.868 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[cda3f49b-2af0-48c2-af17-26fe90a160b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.873 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[05145f83-6c4e-49cf-ac47-2c5c724bb41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:50 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:04:50.903 2 INFO neutron.agent.securitygroups_rpc [req-4fcd9471-764f-4413-a3d6-c9db510ad3ec req-3be5ff8b-30c2-4669-8ea2-ebd3ceebb30b 332193d57d0f40b4a4331c53909cd01e 38ca44ea29964cdc953c4acef5715d76 - - default default] Security group rule updated ['d04b003d-84d7-4ef3-bd89-909ee44f1f42']
Dec 05 10:04:50 np0005546420.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap49a4879c-01: link becomes ready
Dec 05 10:04:50 np0005546420.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap49a4879c-00: link becomes ready
Dec 05 10:04:50 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929090.9061] device (tap49a4879c-00): carrier: link connected
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.915 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[9880c9e0-161d-47bd-bef9-4469fd5d042f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.935 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[8c20fc2b-e948-4c13-b745-6d650a12582c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49a4879c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:22:2d:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1220902, 'reachable_time': 17444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308924, 'error': None, 'target': 'ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.959 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[15ebb090-2c0b-4dbc-a38d-e5c0ae213b84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:2df3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1220902, 'tstamp': 1220902}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308926, 'error': None, 'target': 'ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:50.978 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb20e7f-4aff-4f8f-b332-1f9026f13e47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49a4879c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:22:2d:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1220902, 'reachable_time': 17444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308928, 'error': None, 'target': 'ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.010 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[77a0d60e-d79c-4c3d-a3fc-8b779bd7fa59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:04:51.058 262769 INFO neutron.agent.dhcp.agent [None req-469bad30-c278-4b14-8c68-cdc4c3992506 - - - - - -] DHCP configuration for ports {'24f19dd4-108e-4a77-b44d-59a215801baa'} is completed
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.075 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[cd63f79f-57b3-475b-a286-ba14c9d024e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.077 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49a4879c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.078 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.079 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49a4879c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:04:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:51.082 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:51 np0005546420.localdomain kernel: device tap49a4879c-00 entered promiscuous mode
Dec 05 10:04:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:51.086 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.092 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49a4879c-00, col_values=(('external_ids', {'iface-id': '64519645-9612-467a-bef1-c2a575a644f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:04:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:51.094 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:51 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:51Z|00053|binding|INFO|Releasing lport 64519645-9612-467a-bef1-c2a575a644f8 from this chassis (sb_readonly=0)
Dec 05 10:04:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:51.096 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.097 159503 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/49a4879c-0612-443d-8b44-15b1f6a18cea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/49a4879c-0612-443d-8b44-15b1f6a18cea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.099 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f97148-0059-4ac9-abed-011d5a7c7ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.101 159503 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: global
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     log         /dev/log local0 debug
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     log-tag     haproxy-metadata-proxy-49a4879c-0612-443d-8b44-15b1f6a18cea
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     user        root
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     group       root
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     maxconn     1024
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     pidfile     /var/lib/neutron/external/pids/49a4879c-0612-443d-8b44-15b1f6a18cea.pid.haproxy
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     daemon
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: defaults
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     log global
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     mode http
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     option httplog
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     option dontlognull
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     option http-server-close
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     option forwardfor
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     retries                 3
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout http-request    30s
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout connect         30s
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout client          32s
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout server          32s
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout http-keep-alive 30s
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: listen listener
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     bind 169.254.169.254:80
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:     http-request add-header X-OVN-Network-ID 49a4879c-0612-443d-8b44-15b1f6a18cea
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.102 159503 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea', 'env', 'PROCESS_TAG=haproxy-49a4879c-0612-443d-8b44-15b1f6a18cea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/49a4879c-0612-443d-8b44-15b1f6a18cea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 10:04:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:51.145 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:51 np0005546420.localdomain podman[308962]: 2025-12-05 10:04:51.54239778 +0000 UTC m=+0.090861807 container create 35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Dec 05 10:04:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:04:51 np0005546420.localdomain systemd[1]: Started libpod-conmon-35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7.scope.
Dec 05 10:04:51 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:04:51 np0005546420.localdomain podman[308962]: 2025-12-05 10:04:51.498114097 +0000 UTC m=+0.046578164 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 10:04:51 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/478919b9b4e507a1aca495ec9014b0d87a60bb8d1e6319aa7cb02f770c721e08/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:04:51 np0005546420.localdomain podman[308962]: 2025-12-05 10:04:51.613334659 +0000 UTC m=+0.161798686 container init 35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:04:51 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:04:51.613 2 INFO neutron.agent.securitygroups_rpc [req-661adfa3-5841-49b7-bd34-e3e89bc27cd4 req-b26eebef-19af-47b2-81cf-3102a5d50f45 332193d57d0f40b4a4331c53909cd01e 38ca44ea29964cdc953c4acef5715d76 - - default default] Security group rule updated ['d04b003d-84d7-4ef3-bd89-909ee44f1f42']
Dec 05 10:04:51 np0005546420.localdomain podman[308962]: 2025-12-05 10:04:51.625552152 +0000 UTC m=+0.174016179 container start 35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:04:51 np0005546420.localdomain neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea[308977]: [NOTICE]   (308991) : New worker (308993) forked
Dec 05 10:04:51 np0005546420.localdomain neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea[308977]: [NOTICE]   (308991) : Loading success.
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.690 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 3337c926-0e11-468a-9ddc-efe5775aec35 in datapath 4d14eca3-0067-494d-b2d9-059bccd18a88 unbound from our chassis
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.693 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port b2bed88e-d07c-4687-9f66-0fce20a95357 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.693 159503 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d14eca3-0067-494d-b2d9-059bccd18a88
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.703 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[932410bc-5112-42bb-bed5-75320ee3765b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.703 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d14eca3-01 in ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.705 307492 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d14eca3-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.705 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[c088d2ce-7da9-48cd-9412-16ab93c54871]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.706 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1e6dd8-8311-4423-b9f3-b92e3ed48743]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain podman[308976]: 2025-12-05 10:04:51.719994228 +0000 UTC m=+0.137301897 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.728 159609 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e2efa8-1d11-4e42-9032-aa3d44291d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain podman[308976]: 2025-12-05 10:04:51.735302225 +0000 UTC m=+0.152609884 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.740 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[58468878-2986-4084-b95c-674ccc64795f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.764 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[bd57dc17-0539-48d4-9cf7-0f53c4511a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929091.7729] manager: (tap4d14eca3-00): new Veth device (/org/freedesktop/NetworkManager/Devices/19)
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.772 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[db56f84a-0198-44fb-a140-38f4922dfa81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.800 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[bb25d6da-d07a-4f49-8d78-4e264f7d4448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.805 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0d11cf-9afc-4b39-ac86-f03b6645fcb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap4d14eca3-00: link becomes ready
Dec 05 10:04:51 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929091.8283] device (tap4d14eca3-00): carrier: link connected
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.833 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[65eaea65-4649-4179-ad2d-1e3229b40227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:04:51.840 2 INFO neutron.agent.securitygroups_rpc [None req-4221ce4d-d911-4b23-95b4-1da9650671e2 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Security group member updated ['8c9500c3-6ac9-452e-a652-72bddc07be6d']
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.857 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[7eada653-86e3-41a3-8116-159af5018ce1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d14eca3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:8b:e9:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1220994, 'reachable_time': 43970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309021, 'error': None, 'target': 'ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.874 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a72c3697-6a3f-4838-9f1d-8bd1c1d13208]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:e943'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1220994, 'tstamp': 1220994}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309022, 'error': None, 'target': 'ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.892 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[add92d56-2b91-400f-a4dc-255a935c5ea8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d14eca3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:8b:e9:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1220994, 'reachable_time': 43970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309023, 'error': None, 'target': 'ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.923 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[17ee6fab-0106-49c0-aa23-d6b0ef595b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.987 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[b97264b0-3440-415b-8262-30d727165965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.989 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d14eca3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.989 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.990 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d14eca3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:04:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:51.992 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:51 np0005546420.localdomain kernel: device tap4d14eca3-00 entered promiscuous mode
Dec 05 10:04:51 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:51.998 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d14eca3-00, col_values=(('external_ids', {'iface-id': '60f79f33-8f4e-452b-bed3-efc4f7ae8a69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
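The three ovsdbapp commands above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) are what rehome the tap device and tag it with its logical port id. A hedged sketch of the equivalent calls through ovsdbapp's Open_vSwitch API, folded into one transaction for brevity; the socket path is an assumption, the names are copied from the log:

    # Sketch: replay the OVS transactions with ovsdbapp (assumed local socket).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap4d14eca3-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap4d14eca3-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap4d14eca3-00',
            ('external_ids',
             {'iface-id': '60f79f33-8f4e-452b-bed3-efc4f7ae8a69'})))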
Dec 05 10:04:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:52.000 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:52Z|00054|binding|INFO|Releasing lport 60f79f33-8f4e-452b-bed3-efc4f7ae8a69 from this chassis (sb_readonly=0)
Dec 05 10:04:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:52.010 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:52.011 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:52.013 159503 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d14eca3-0067-494d-b2d9-059bccd18a88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d14eca3-0067-494d-b2d9-059bccd18a88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:52.014 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[4a424013-912e-44a7-b385-d2935e433043]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:52.015 159503 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]: global
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     log         /dev/log local0 debug
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     log-tag     haproxy-metadata-proxy-4d14eca3-0067-494d-b2d9-059bccd18a88
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     user        root
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     group       root
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     maxconn     1024
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     pidfile     /var/lib/neutron/external/pids/4d14eca3-0067-494d-b2d9-059bccd18a88.pid.haproxy
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     daemon
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]: 
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]: defaults
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     log global
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     mode http
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     option httplog
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     option dontlognull
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     option http-server-close
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     option forwardfor
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     retries                 3
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout http-request    30s
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout connect         30s
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout client          32s
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout server          32s
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout http-keep-alive 30s
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]: 
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]: 
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]: listen listener
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     bind 169.254.169.254:80
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:     http-request add-header X-OVN-Network-ID 4d14eca3-0067-494d-b2d9-059bccd18a88
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 10:04:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:04:52.016 159503 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88', 'env', 'PROCESS_TAG=haproxy-4d14eca3-0067-494d-b2d9-059bccd18a88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d14eca3-0067-494d-b2d9-059bccd18a88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
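The rendered config above binds the proxy to 169.254.169.254:80 inside the namespace, forwards requests to the neutron metadata UNIX socket, and stamps each request with the X-OVN-Network-ID header that the metadata service uses to identify the network. Before the agent launches haproxy via rootwrap, a rendered file like this can be sanity-checked by hand; a sketch, with the conf path taken from the command line above:

    # Sketch: validate a rendered haproxy config before launching it.
    import subprocess

    CONF = ('/var/lib/neutron/ovn-metadata-proxy/'
            '4d14eca3-0067-494d-b2d9-059bccd18a88.conf')

    # `haproxy -c -f FILE` only parses the config; non-zero exit means errors.
    result = subprocess.run(['haproxy', '-c', '-f', CONF],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)
    result.check_returncode()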
Dec 05 10:04:52 np0005546420.localdomain ceph-mon[298353]: pgmap v98: 177 pgs: 177 active+clean; 269 MiB data, 883 MiB used, 41 GiB / 42 GiB avail; 4.5 MiB/s rd, 3.2 MiB/s wr, 168 op/s
Dec 05 10:04:52 np0005546420.localdomain podman[309055]: 
Dec 05 10:04:52 np0005546420.localdomain podman[309055]: 2025-12-05 10:04:52.453270823 +0000 UTC m=+0.086532485 container create 19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:04:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:52.480 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:52 np0005546420.localdomain systemd[1]: Started libpod-conmon-19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090.scope.
Dec 05 10:04:52 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:04:52 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13a852eb785da4d3051f1c4e0667427aea12d44087aa898fd40a1ddabc6aa174/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:04:52 np0005546420.localdomain podman[309055]: 2025-12-05 10:04:52.410046183 +0000 UTC m=+0.043307865 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 10:04:52 np0005546420.localdomain podman[309055]: 2025-12-05 10:04:52.519086345 +0000 UTC m=+0.152347997 container init 19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:04:52 np0005546420.localdomain podman[309055]: 2025-12-05 10:04:52.528274475 +0000 UTC m=+0.161536137 container start 19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:04:52 np0005546420.localdomain neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88[309069]: [NOTICE]   (309073) : New worker (309075) forked
Dec 05 10:04:52 np0005546420.localdomain neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88[309069]: [NOTICE]   (309073) : Loading success.
Dec 05 10:04:53 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/352553780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:04:53 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:04:53 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2895591683' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.579 281103 DEBUG nova.compute.manager [req-75e94125-c9d7-42ee-b703-2d77d36a3b5d req-62614dc8-f026-4302-9ac9-063ad2390d86 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.581 281103 DEBUG oslo_concurrency.lockutils [req-75e94125-c9d7-42ee-b703-2d77d36a3b5d req-62614dc8-f026-4302-9ac9-063ad2390d86 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.581 281103 DEBUG oslo_concurrency.lockutils [req-75e94125-c9d7-42ee-b703-2d77d36a3b5d req-62614dc8-f026-4302-9ac9-063ad2390d86 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.581 281103 DEBUG oslo_concurrency.lockutils [req-75e94125-c9d7-42ee-b703-2d77d36a3b5d req-62614dc8-f026-4302-9ac9-063ad2390d86 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.581 281103 DEBUG nova.compute.manager [req-75e94125-c9d7-42ee-b703-2d77d36a3b5d req-62614dc8-f026-4302-9ac9-063ad2390d86 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Processing event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
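The acquire/release trio above is oslo.concurrency's named-lock pattern: nova serializes access to the per-instance event queue under an '<instance-uuid>-events' lock while it pops the waiter for network-vif-plugged. A minimal sketch of the same pattern, with the lock name copied from the log and an illustrative body:

    # Sketch: serialize per-instance event handling with oslo.concurrency.
    from oslo_concurrency import lockutils

    INSTANCE = 'e3717d5b-7a3e-4d08-82c4-1fc3cef82d42'

    with lockutils.lock(f'{INSTANCE}-events'):
        # Under the lock, nova pops the waiter registered for the event
        # and wakes the thread blocked in wait_for_instance_event().
        pass  # illustrative placeholder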
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.582 281103 DEBUG nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.589 281103 DEBUG nova.virt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Emitting event <LifecycleEvent: 1764929093.588914, e3717d5b-7a3e-4d08-82c4-1fc3cef82d42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.590 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] VM Resumed (Lifecycle Event)
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.594 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.616 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.618 281103 INFO nova.virt.libvirt.driver [-] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Instance spawned successfully.
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.619 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.624 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.647 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.652 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.653 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.654 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.654 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.655 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.656 281103 DEBUG nova.virt.libvirt.driver [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.784 281103 INFO nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Took 17.49 seconds to spawn the instance on the hypervisor.
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.787 281103 DEBUG nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.938 281103 INFO nova.compute.manager [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Took 18.85 seconds to build instance.
Dec 05 10:04:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:53.962 281103 DEBUG oslo_concurrency.lockutils [None req-46d3ab55-dd7c-4571-802d-64ddb7b03098 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:54 np0005546420.localdomain ceph-mon[298353]: pgmap v99: 177 pgs: 177 active+clean; 317 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 4.2 MiB/s rd, 5.3 MiB/s wr, 148 op/s
Dec 05 10:04:54 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2895591683' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:04:54 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e94 e94: 6 total, 6 up, 6 in
Dec 05 10:04:54 np0005546420.localdomain sudo[309084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:04:54 np0005546420.localdomain sudo[309084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:04:54 np0005546420.localdomain sudo[309084]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:54 np0005546420.localdomain sudo[309102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:04:54 np0005546420.localdomain sudo[309102]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.098502) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095098537, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 719, "num_deletes": 257, "total_data_size": 631855, "memory_usage": 646536, "flush_reason": "Manual Compaction"}
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095105627, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 412173, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19644, "largest_seqno": 20358, "table_properties": {"data_size": 408989, "index_size": 1103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7679, "raw_average_key_size": 18, "raw_value_size": 402290, "raw_average_value_size": 983, "num_data_blocks": 49, "num_entries": 409, "num_filter_entries": 409, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929056, "oldest_key_time": 1764929056, "file_creation_time": 1764929095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 7226 microseconds, and 2492 cpu microseconds.
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.105709) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 412173 bytes OK
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.105765) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.107791) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.107825) EVENT_LOG_v1 {"time_micros": 1764929095107818, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.107862) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 627945, prev total WAL file size 628269, number of live WAL files 2.
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.108527) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373633' seq:72057594037927935, type:22 .. '6C6F676D0034303136' seq:0, type:0; will stop at (end)
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(402KB)], [30(18MB)]
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095108574, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19827261, "oldest_snapshot_seqno": -1}
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12028 keys, 19721287 bytes, temperature: kUnknown
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095245239, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 19721287, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19649792, "index_size": 40354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 322437, "raw_average_key_size": 26, "raw_value_size": 19442060, "raw_average_value_size": 1616, "num_data_blocks": 1547, "num_entries": 12028, "num_filter_entries": 12028, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.245613) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 19721287 bytes
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.247412) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.9 rd, 144.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 18.5 +0.0 blob) out(18.8 +0.0 blob), read-write-amplify(96.0) write-amplify(47.8) OK, records in: 12559, records dropped: 531 output_compression: NoCompression
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.247445) EVENT_LOG_v1 {"time_micros": 1764929095247432, "job": 16, "event": "compaction_finished", "compaction_time_micros": 136796, "compaction_time_cpu_micros": 32254, "output_level": 6, "num_output_files": 1, "total_output_size": 19721287, "num_input_records": 12559, "num_output_records": 12028, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095247631, "job": 16, "event": "table_file_deletion", "file_number": 32}
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095250583, "job": 16, "event": "table_file_deletion", "file_number": 30}
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.108427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.250665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.250672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.250674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.250676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:04:55.250678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
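The rocksdb lines from ceph-mon embed machine-readable EVENT_LOG_v1 records (flush_started, table_file_creation, compaction_finished) as JSON after a fixed prefix, which makes the flush/compaction activity easy to mine from a saved journal. A sketch, assuming the log has been exported to a plain-text file (the filename is hypothetical):

    # Sketch: extract rocksdb EVENT_LOG_v1 JSON records from a journal dump.
    import json

    MARKER = 'EVENT_LOG_v1 '
    events = []
    with open('journal.txt') as fh:  # hypothetical export of this log
        for line in fh:
            idx = line.find(MARKER)
            if idx != -1:
                # Everything after the marker is one JSON object.
                events.append(json.loads(line[idx + len(MARKER):]))

    for ev in events:
        print(ev['event'], ev.get('job'))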
Dec 05 10:04:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:55.262 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:55 np0005546420.localdomain ceph-mon[298353]: osdmap e94: 6 total, 6 up, 6 in
Dec 05 10:04:55 np0005546420.localdomain sudo[309102]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:55.991 281103 DEBUG nova.compute.manager [req-baaf739a-409f-4f31-8ed3-5ab31d9f36ea req-dafb9f2a-baac-4140-a2cc-99a7296948b9 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:04:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:55.992 281103 DEBUG oslo_concurrency.lockutils [req-baaf739a-409f-4f31-8ed3-5ab31d9f36ea req-dafb9f2a-baac-4140-a2cc-99a7296948b9 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:04:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:55.993 281103 DEBUG oslo_concurrency.lockutils [req-baaf739a-409f-4f31-8ed3-5ab31d9f36ea req-dafb9f2a-baac-4140-a2cc-99a7296948b9 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:04:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:55.994 281103 DEBUG oslo_concurrency.lockutils [req-baaf739a-409f-4f31-8ed3-5ab31d9f36ea req-dafb9f2a-baac-4140-a2cc-99a7296948b9 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:04:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:55.994 281103 DEBUG nova.compute.manager [req-baaf739a-409f-4f31-8ed3-5ab31d9f36ea req-dafb9f2a-baac-4140-a2cc-99a7296948b9 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] No waiting events found dispatching network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:04:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:55.995 281103 WARNING nova.compute.manager [req-baaf739a-409f-4f31-8ed3-5ab31d9f36ea req-dafb9f2a-baac-4140-a2cc-99a7296948b9 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received unexpected event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 for instance with vm_state active and task_state None.
Dec 05 10:04:56 np0005546420.localdomain sudo[309154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:04:56 np0005546420.localdomain sudo[309154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:04:56 np0005546420.localdomain sudo[309154]: pam_unix(sudo:session): session closed for user root
Dec 05 10:04:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e95 e95: 6 total, 6 up, 6 in
Dec 05 10:04:56 np0005546420.localdomain ceph-mon[298353]: pgmap v101: 177 pgs: 177 active+clean; 317 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 127 op/s
Dec 05 10:04:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:04:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:04:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:04:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:04:57 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:04:57.206 2 INFO neutron.agent.securitygroups_rpc [None req-e20708ac-e402-4240-afd4-18fd4cece83c 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Security group member updated ['8c9500c3-6ac9-452e-a652-72bddc07be6d']
Dec 05 10:04:57 np0005546420.localdomain ceph-mon[298353]: osdmap e95: 6 total, 6 up, 6 in
Dec 05 10:04:57 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e96 e96: 6 total, 6 up, 6 in
Dec 05 10:04:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:57.447 281103 DEBUG nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Check if temp file /var/lib/nova/instances/tmpvrr6p6fi exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Dec 05 10:04:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:57.449 281103 DEBUG nova.compute.manager [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=12288,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvrr6p6fi',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e3717d5b-7a3e-4d08-82c4-1fc3cef82d42',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
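The shared-storage probe logged above is a two-host handshake: the destination creates a temp file under nova's instances directory and the source checks whether the same file is visible, so "Exists? False" yields is_shared_instance_path=False even though the rbd image_type still allows is_shared_block_storage=True. A simplified single-process sketch of the idea (paths illustrative; the real check runs the two halves on different hosts):

    # Sketch: the shared-instance-path probe behind the migration pre-check.
    import os
    import tempfile

    INSTANCES_DIR = '/var/lib/nova/instances'  # nova's instances path

    # Destination side: drop a marker file and pass its name to the source.
    fd, marker = tempfile.mkstemp(dir=INSTANCES_DIR, prefix='tmp')
    os.close(fd)

    # Source side: if the marker is visible here, the path is shared storage.
    print('is_shared_instance_path =', os.path.exists(marker))
    os.remove(marker)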
Dec 05 10:04:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:57.482 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:58.278 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:58 np0005546420.localdomain ceph-mon[298353]: pgmap v103: 177 pgs: 177 active+clean; 397 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 377 op/s
Dec 05 10:04:58 np0005546420.localdomain ceph-mon[298353]: osdmap e96: 6 total, 6 up, 6 in
Dec 05 10:04:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:58.354 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:58 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:58Z|00055|binding|INFO|Releasing lport 64519645-9612-467a-bef1-c2a575a644f8 from this chassis (sb_readonly=0)
Dec 05 10:04:58 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:04:58Z|00056|binding|INFO|Releasing lport 60f79f33-8f4e-452b-bed3-efc4f7ae8a69 from this chassis (sb_readonly=0)
Dec 05 10:04:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:58.395 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:04:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:59.251 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:04:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:59.252 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:04:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:59.597 281103 INFO nova.compute.rpcapi [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Dec 05 10:04:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:04:59.598 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:05:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:00.290 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:00 np0005546420.localdomain ceph-mon[298353]: pgmap v105: 177 pgs: 177 active+clean; 397 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 290 op/s
Dec 05 10:05:00 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:05:00 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/134347548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:05:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:05:00 np0005546420.localdomain systemd[1]: tmp-crun.fotfRm.mount: Deactivated successfully.
Dec 05 10:05:00 np0005546420.localdomain podman[309173]: 2025-12-05 10:05:00.584449434 +0000 UTC m=+0.152373237 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 10:05:00 np0005546420.localdomain podman[309174]: 2025-12-05 10:05:00.557075018 +0000 UTC m=+0.125879158 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:05:00 np0005546420.localdomain podman[309173]: 2025-12-05 10:05:00.62751304 +0000 UTC m=+0.195436843 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:05:00 np0005546420.localdomain podman[309174]: 2025-12-05 10:05:00.640441765 +0000 UTC m=+0.209245875 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:05:00 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:05:00 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
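The transient units above are timer-driven `podman healthcheck run <id>` invocations; the health_status=healthy fields and the exec_died/Deactivated pairs record each check completing. The same check can be run by hand; a sketch with the container ID copied from the log:

    # Sketch: run a podman healthcheck manually and report the result.
    import subprocess

    CID = '3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74'

    # `podman healthcheck run` exits 0 when the container's check passes.
    rc = subprocess.run(['podman', 'healthcheck', 'run', CID]).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')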
Dec 05 10:05:01 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3565658944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:02 np0005546420.localdomain ceph-mon[298353]: pgmap v106: 177 pgs: 177 active+clean; 341 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 6.9 MiB/s wr, 329 op/s
Dec 05 10:05:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:02.485 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:02 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:02Z|00057|binding|INFO|Releasing lport 64519645-9612-467a-bef1-c2a575a644f8 from this chassis (sb_readonly=0)
Dec 05 10:05:02 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:02Z|00058|binding|INFO|Releasing lport 60f79f33-8f4e-452b-bed3-efc4f7ae8a69 from this chassis (sb_readonly=0)
Dec 05 10:05:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:03.013 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:05:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2556557245' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:05:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:05:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2556557245' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:05:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e97 e97: 6 total, 6 up, 6 in
Dec 05 10:05:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1736720270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2556557245' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:05:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2556557245' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
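The audited mon_commands above are the JSON-mode df and osd pool get-quota queries that the client.openstack identity issues for capacity polling. They can be replayed from any host holding that keyring; a sketch, where the client name and a reachable cluster are assumptions:

    # Sketch: replay the audited monitor queries with the ceph CLI.
    import json
    import subprocess

    def mon_cmd(*args):
        # Assumes ceph.conf and a client.openstack keyring are in place.
        out = subprocess.check_output(
            ['ceph', '-n', 'client.openstack', *args, '--format', 'json'])
        return json.loads(out)

    print(mon_cmd('df')['stats'])
    print(mon_cmd('osd', 'pool', 'get-quota', 'volumes'))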
Dec 05 10:05:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:03.952 281103 DEBUG nova.compute.manager [req-aafb7909-857a-47b0-8b97-217a25c71550 req-48138f5a-3e15-48d7-921d-3dbda8977e41 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-vif-unplugged-3337c926-0e11-468a-9ddc-efe5775aec35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:05:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:03.953 281103 DEBUG oslo_concurrency.lockutils [req-aafb7909-857a-47b0-8b97-217a25c71550 req-48138f5a-3e15-48d7-921d-3dbda8977e41 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:03.953 281103 DEBUG oslo_concurrency.lockutils [req-aafb7909-857a-47b0-8b97-217a25c71550 req-48138f5a-3e15-48d7-921d-3dbda8977e41 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:03.954 281103 DEBUG oslo_concurrency.lockutils [req-aafb7909-857a-47b0-8b97-217a25c71550 req-48138f5a-3e15-48d7-921d-3dbda8977e41 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:03.954 281103 DEBUG nova.compute.manager [req-aafb7909-857a-47b0-8b97-217a25c71550 req-48138f5a-3e15-48d7-921d-3dbda8977e41 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] No waiting events found dispatching network-vif-unplugged-3337c926-0e11-468a-9ddc-efe5775aec35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:05:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:03.956 281103 DEBUG nova.compute.manager [req-aafb7909-857a-47b0-8b97-217a25c71550 req-48138f5a-3e15-48d7-921d-3dbda8977e41 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-vif-unplugged-3337c926-0e11-468a-9ddc-efe5775aec35 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 10:05:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:04.125 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:04.125 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:04.126 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:05:04 np0005546420.localdomain ceph-mon[298353]: pgmap v107: 177 pgs: 177 active+clean; 317 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 399 op/s
Dec 05 10:05:04 np0005546420.localdomain ceph-mon[298353]: osdmap e97: 6 total, 6 up, 6 in
Dec 05 10:05:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/378606692' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:04 np0005546420.localdomain podman[309213]: 2025-12-05 10:05:04.543104307 +0000 UTC m=+0.116927324 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 10:05:04 np0005546420.localdomain podman[309213]: 2025-12-05 10:05:04.587950597 +0000 UTC m=+0.161773674 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 10:05:04 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:05:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e98 e98: 6 total, 6 up, 6 in
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.337 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.358 281103 INFO nova.compute.manager [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Took 6.11 seconds for pre_live_migration on destination host np0005546421.localdomain.
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.358 281103 DEBUG nova.compute.manager [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.376 281103 DEBUG nova.compute.manager [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvrr6p6fi',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e3717d5b-7a3e-4d08-82c4-1fc3cef82d42',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(ea8f417d-20a3-4fe0-8f43-fb13c347e9fa),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.380 281103 DEBUG nova.objects.instance [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Lazy-loading 'migration_context' on Instance uuid e3717d5b-7a3e-4d08-82c4-1fc3cef82d42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.382 281103 DEBUG nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.384 281103 DEBUG nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.385 281103 DEBUG nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.404 281103 DEBUG nova.virt.libvirt.vif [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T10:04:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2007800372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005546420.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2007800372',id=7,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T10:04:53Z,launched_on='np0005546420.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005546420.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1b63f7777dfa40c1bfc42162c9fd676f',ramdisk_id='',reservation_id='r-kprv5g3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-642400384',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-642400384-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T10:04:53Z,user_data=None,user_id='21c29f3a56e54486b61ecc72cb35cc3e',uuid=e3717d5b-7a3e-4d08-82c4-1fc3cef82d42,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.404 281103 DEBUG nova.network.os_vif_util [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Converting VIF {"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.405 281103 DEBUG nova.network.os_vif_util [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:59:b6,bridge_name='br-int',has_traffic_filtering=True,id=3337c926-0e11-468a-9ddc-efe5775aec35,network=Network(4d14eca3-0067-494d-b2d9-059bccd18a88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3337c926-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.406 281103 DEBUG nova.virt.libvirt.migration [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Updating guest XML with vif config: <interface type="ethernet">
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]:   <mac address="fa:16:3e:a2:59:b6"/>
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]:   <model type="virtio"/>
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]:   <driver name="vhost" rx_queue_size="512"/>
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]:   <mtu size="1442"/>
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]:   <target dev="tap3337c926-0e"/>
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: </interface>
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.407 281103 DEBUG nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.888 281103 DEBUG nova.virt.libvirt.migration [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.889 281103 INFO nova.virt.libvirt.migration [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Increasing downtime to 50 ms after 0 sec elapsed time
Dec 05 10:05:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:05.995 281103 INFO nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.029 281103 DEBUG nova.compute.manager [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.030 281103 DEBUG oslo_concurrency.lockutils [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.030 281103 DEBUG oslo_concurrency.lockutils [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.031 281103 DEBUG oslo_concurrency.lockutils [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.031 281103 DEBUG nova.compute.manager [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] No waiting events found dispatching network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.032 281103 WARNING nova.compute.manager [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received unexpected event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 for instance with vm_state active and task_state migrating.
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.033 281103 DEBUG nova.compute.manager [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-changed-3337c926-0e11-468a-9ddc-efe5775aec35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.033 281103 DEBUG nova.compute.manager [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Refreshing instance network info cache due to event network-changed-3337c926-0e11-468a-9ddc-efe5775aec35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.034 281103 DEBUG oslo_concurrency.lockutils [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "refresh_cache-e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.034 281103 DEBUG oslo_concurrency.lockutils [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquired lock "refresh_cache-e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.035 281103 DEBUG nova.network.neutron [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Refreshing network info cache for port 3337c926-0e11-468a-9ddc-efe5775aec35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 10:05:06 np0005546420.localdomain ceph-mon[298353]: pgmap v110: 177 pgs: 177 active+clean; 317 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 2.6 KiB/s wr, 186 op/s
Dec 05 10:05:06 np0005546420.localdomain ceph-mon[298353]: osdmap e98: 6 total, 6 up, 6 in
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.500 281103 DEBUG nova.virt.libvirt.migration [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.502 281103 DEBUG nova.virt.libvirt.migration [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.655 281103 DEBUG nova.network.neutron [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Updated VIF entry in instance network info cache for port 3337c926-0e11-468a-9ddc-efe5775aec35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.656 281103 DEBUG nova.network.neutron [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Updating instance_info_cache with network_info: [{"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005546421.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:05:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:06.682 281103 DEBUG oslo_concurrency.lockutils [req-2902e671-c7d8-4e21-bf45-7bf81f268b35 req-5bcde663-64d0-4459-9fb6-ef92b26b52bc c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Releasing lock "refresh_cache-e3717d5b-7a3e-4d08-82c4-1fc3cef82d42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:05:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:05:06 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2794881848' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.006 281103 DEBUG nova.virt.libvirt.migration [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.007 281103 DEBUG nova.virt.libvirt.migration [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 05 10:05:07 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/736156400' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:05:07 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2794881848' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.179 281103 DEBUG nova.virt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Emitting event <LifecycleEvent: 1764929107.1791596, e3717d5b-7a3e-4d08-82c4-1fc3cef82d42 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.181 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] VM Paused (Lifecycle Event)
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.347 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:05:07 np0005546420.localdomain kernel: device tap3337c926-0e left promiscuous mode
Dec 05 10:05:07 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929107.3775] device (tap3337c926-0e): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Dec 05 10:05:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:07Z|00059|binding|INFO|Releasing lport 3337c926-0e11-468a-9ddc-efe5775aec35 from this chassis (sb_readonly=0)
Dec 05 10:05:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:07Z|00060|binding|INFO|Setting lport 3337c926-0e11-468a-9ddc-efe5775aec35 down in Southbound
Dec 05 10:05:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:07Z|00061|binding|INFO|Releasing lport 320a1991-e020-47d2-a4c0-35897ed58f6e from this chassis (sb_readonly=0)
Dec 05 10:05:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:07Z|00062|binding|INFO|Setting lport 320a1991-e020-47d2-a4c0-35897ed58f6e down in Southbound
Dec 05 10:05:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:07Z|00063|binding|INFO|Removing iface tap3337c926-0e ovn-installed in OVS
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.397 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:07Z|00064|binding|INFO|Releasing lport 64519645-9612-467a-bef1-c2a575a644f8 from this chassis (sb_readonly=0)
Dec 05 10:05:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:07Z|00065|binding|INFO|Releasing lport 60f79f33-8f4e-452b-bed3-efc4f7ae8a69 from this chassis (sb_readonly=0)
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.408 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:5c:88 19.80.0.199'], port_security=['fa:16:3e:0c:5c:88 19.80.0.199'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['3337c926-0e11-468a-9ddc-efe5775aec35'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-251692295', 'neutron:cidrs': '19.80.0.199/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49a4879c-0612-443d-8b44-15b1f6a18cea', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-251692295', 'neutron:project_id': '1b63f7777dfa40c1bfc42162c9fd676f', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'd4162554-7d79-4103-bc2a-c014e86c3743', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=f9b3b447-6047-4886-9c84-e76d87b6b24c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=320a1991-e020-47d2-a4c0-35897ed58f6e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.411 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:59:b6 10.100.0.9'], port_security=['fa:16:3e:a2:59:b6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain,np0005546421.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'b66eb3dc-f30d-4de5-a79b-2460b8903d68'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-211399516', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e3717d5b-7a3e-4d08-82c4-1fc3cef82d42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d14eca3-0067-494d-b2d9-059bccd18a88', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-211399516', 'neutron:project_id': '1b63f7777dfa40c1bfc42162c9fd676f', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd4162554-7d79-4103-bc2a-c014e86c3743', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfe0d10c-51af-4255-846b-8c331654da0e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=3337c926-0e11-468a-9ddc-efe5775aec35) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.413 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 320a1991-e020-47d2-a4c0-35897ed58f6e in datapath 49a4879c-0612-443d-8b44-15b1f6a18cea unbound from our chassis
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.417 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4e4a6f83-9e6c-47aa-981e-76d88a525e9b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.418 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 49a4879c-0612-443d-8b44-15b1f6a18cea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.419 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2d087b-2809-4f4d-bddf-3b977fa739b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.420 159503 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea namespace which is not needed anymore
Dec 05 10:05:07 np0005546420.localdomain systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec 05 10:05:07 np0005546420.localdomain systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Consumed 12.612s CPU time.
Dec 05 10:05:07 np0005546420.localdomain systemd-machined[203266]: Machine qemu-1-instance-00000007 terminated.
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.441 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.487 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:07 np0005546420.localdomain virtqemud[229316]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk: No such file or directory
Dec 05 10:05:07 np0005546420.localdomain virtqemud[229316]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_disk: No such file or directory
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.546 281103 DEBUG nova.virt.libvirt.guest [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.549 281103 INFO nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Migration operation has completed
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.549 281103 INFO nova.compute.manager [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] _post_live_migration() is started..
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.559 281103 DEBUG nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.559 281103 DEBUG nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.560 281103 DEBUG nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Dec 05 10:05:07 np0005546420.localdomain neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea[308977]: [NOTICE]   (308991) : haproxy version is 2.8.14-c23fe91
Dec 05 10:05:07 np0005546420.localdomain neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea[308977]: [NOTICE]   (308991) : path to executable is /usr/sbin/haproxy
Dec 05 10:05:07 np0005546420.localdomain neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea[308977]: [WARNING]  (308991) : Exiting Master process...
Dec 05 10:05:07 np0005546420.localdomain neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea[308977]: [ALERT]    (308991) : Current worker (308993) exited with code 143 (Terminated)
Dec 05 10:05:07 np0005546420.localdomain neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea[308977]: [WARNING]  (308991) : All workers exited. Exiting... (0)
Dec 05 10:05:07 np0005546420.localdomain systemd[1]: libpod-35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7.scope: Deactivated successfully.
Dec 05 10:05:07 np0005546420.localdomain podman[309270]: 2025-12-05 10:05:07.631811257 +0000 UTC m=+0.081868533 container died 35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:05:07 np0005546420.localdomain systemd[1]: tmp-crun.t3RohM.mount: Deactivated successfully.
Dec 05 10:05:07 np0005546420.localdomain podman[309270]: 2025-12-05 10:05:07.680154974 +0000 UTC m=+0.130212250 container cleanup 35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:05:07 np0005546420.localdomain podman[309288]: 2025-12-05 10:05:07.71405446 +0000 UTC m=+0.074273411 container cleanup 35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:05:07 np0005546420.localdomain systemd[1]: libpod-conmon-35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7.scope: Deactivated successfully.
Dec 05 10:05:07 np0005546420.localdomain podman[309303]: 2025-12-05 10:05:07.78442558 +0000 UTC m=+0.080488191 container remove 35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.788 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[3af13df3-2d10-471a-9027-f04682b18384]: (4, ('Fri Dec  5 10:05:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea (35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7)\n35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7\nFri Dec  5 10:05:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea (35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7)\n35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.791 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[f491040e-84bb-4ef8-8dd8-2cfab1bb3e7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.793 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49a4879c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.845 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:07 np0005546420.localdomain kernel: device tap49a4879c-00 left promiscuous mode
Dec 05 10:05:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:07.855 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.857 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[cadb00d1-29ea-4fff-bacf-0718e7f43cf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.873 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc7d6ad-9a0a-4a27-b284-0c181101502f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.874 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4a2e07-7eb3-4e5c-a87f-98ff18db68b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.891 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[358943e7-423f-478d-bffb-51ab3ce4693a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1220893, 'reachable_time': 19677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309323, 'error': None, 'target': 'ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.903 159609 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-49a4879c-0612-443d-8b44-15b1f6a18cea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.904 159609 DEBUG oslo.privsep.daemon [-] privsep: reply[e753e3ce-2fa9-4229-b009-3c5326dcf2d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.905 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 3337c926-0e11-468a-9ddc-efe5775aec35 in datapath 4d14eca3-0067-494d-b2d9-059bccd18a88 unbound from our chassis
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.907 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port b2bed88e-d07c-4687-9f66-0fce20a95357 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.908 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d14eca3-0067-494d-b2d9-059bccd18a88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.909 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[0f870bce-a917-4d67-bcfd-bf2b431e8325]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:07.909 159503 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88 namespace which is not needed anymore
Dec 05 10:05:08 np0005546420.localdomain neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88[309069]: [NOTICE]   (309073) : haproxy version is 2.8.14-c23fe91
Dec 05 10:05:08 np0005546420.localdomain neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88[309069]: [NOTICE]   (309073) : path to executable is /usr/sbin/haproxy
Dec 05 10:05:08 np0005546420.localdomain neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88[309069]: [WARNING]  (309073) : Exiting Master process...
Dec 05 10:05:08 np0005546420.localdomain neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88[309069]: [ALERT]    (309073) : Current worker (309075) exited with code 143 (Terminated)
Dec 05 10:05:08 np0005546420.localdomain neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88[309069]: [WARNING]  (309073) : All workers exited. Exiting... (0)
Dec 05 10:05:08 np0005546420.localdomain systemd[1]: libpod-19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090.scope: Deactivated successfully.
Dec 05 10:05:08 np0005546420.localdomain podman[309342]: 2025-12-05 10:05:08.121778258 +0000 UTC m=+0.085114312 container died 19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:05:08 np0005546420.localdomain podman[309342]: 2025-12-05 10:05:08.1683214 +0000 UTC m=+0.131657424 container cleanup 19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:05:08 np0005546420.localdomain ceph-mon[298353]: pgmap v111: 177 pgs: 177 active+clean; 456 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 9.5 MiB/s rd, 11 MiB/s wr, 385 op/s
Dec 05 10:05:08 np0005546420.localdomain podman[309371]: 2025-12-05 10:05:08.25012389 +0000 UTC m=+0.064722059 container remove 19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:05:08 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:08.254 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a3557963-d801-4163-b441-f6c3548efb7a]: (4, ('Fri Dec  5 10:05:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88 (19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090)\n19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090\nFri Dec  5 10:05:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88 (19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090)\n19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:08 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:08.257 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[430578e2-71c8-402e-8cff-ec84ae480d71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:08 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:08.258 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d14eca3-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:05:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:08.261 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:08 np0005546420.localdomain kernel: device tap4d14eca3-00 left promiscuous mode
Dec 05 10:05:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:08.274 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:08 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:08.279 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[5793bbe6-21e0-408d-bc32-8d41daa9a809]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:08 np0005546420.localdomain systemd[1]: libpod-conmon-19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090.scope: Deactivated successfully.
Dec 05 10:05:08 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:08.305 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9e140b-3afc-4054-b001-7ce53b802dd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:08 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:08.307 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[94dede2b-6e75-4b59-aa9a-4b38a4b3793e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:08 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:08.327 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a7f318-fbf2-4b10-96e8-89e806902355]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1220988, 'reachable_time': 34410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309392, 'error': None, 'target': 'ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:08 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:08.330 159609 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d14eca3-0067-494d-b2d9-059bccd18a88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 10:05:08 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:08.330 159609 DEBUG oslo.privsep.daemon [-] privsep: reply[51bc7523-8df2-4ceb-a702-be776b49b04a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:08.484 281103 DEBUG nova.compute.manager [req-396ff1b7-2e9a-4836-b1a8-3817f66b0f8d req-5b987aa2-5bf5-4c82-a05f-a86de3e75643 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-vif-unplugged-3337c926-0e11-468a-9ddc-efe5775aec35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:05:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:08.485 281103 DEBUG oslo_concurrency.lockutils [req-396ff1b7-2e9a-4836-b1a8-3817f66b0f8d req-5b987aa2-5bf5-4c82-a05f-a86de3e75643 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:08.485 281103 DEBUG oslo_concurrency.lockutils [req-396ff1b7-2e9a-4836-b1a8-3817f66b0f8d req-5b987aa2-5bf5-4c82-a05f-a86de3e75643 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:08.485 281103 DEBUG oslo_concurrency.lockutils [req-396ff1b7-2e9a-4836-b1a8-3817f66b0f8d req-5b987aa2-5bf5-4c82-a05f-a86de3e75643 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:08.486 281103 DEBUG nova.compute.manager [req-396ff1b7-2e9a-4836-b1a8-3817f66b0f8d req-5b987aa2-5bf5-4c82-a05f-a86de3e75643 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] No waiting events found dispatching network-vif-unplugged-3337c926-0e11-468a-9ddc-efe5775aec35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:05:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:08.486 281103 DEBUG nova.compute.manager [req-396ff1b7-2e9a-4836-b1a8-3817f66b0f8d req-5b987aa2-5bf5-4c82-a05f-a86de3e75643 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-vif-unplugged-3337c926-0e11-468a-9ddc-efe5775aec35 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 10:05:08 np0005546420.localdomain systemd[1]: tmp-crun.ZHloti.mount: Deactivated successfully.
Dec 05 10:05:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-13a852eb785da4d3051f1c4e0667427aea12d44087aa898fd40a1ddabc6aa174-merged.mount: Deactivated successfully.
Dec 05 10:05:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19d517777c86dabb469b39c433c1fafb5eb54b6e929e1ccdc92c842a1cbb1090-userdata-shm.mount: Deactivated successfully.
Dec 05 10:05:08 np0005546420.localdomain systemd[1]: run-netns-ovnmeta\x2d4d14eca3\x2d0067\x2d494d\x2db2d9\x2d059bccd18a88.mount: Deactivated successfully.
Dec 05 10:05:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-478919b9b4e507a1aca495ec9014b0d87a60bb8d1e6319aa7cb02f770c721e08-merged.mount: Deactivated successfully.
Dec 05 10:05:08 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35cc8aa14497041d34a600df6d5e7fb987d7e489d9e8a3e8820a3dc0de6c5af7-userdata-shm.mount: Deactivated successfully.
Dec 05 10:05:08 np0005546420.localdomain systemd[1]: run-netns-ovnmeta\x2d49a4879c\x2d0612\x2d443d\x2d8b44\x2d15b1f6a18cea.mount: Deactivated successfully.
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.155 281103 DEBUG nova.network.neutron [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Activated binding for port 3337c926-0e11-468a-9ddc-efe5775aec35 and host np0005546421.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.156 281103 DEBUG nova.compute.manager [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.157 281103 DEBUG nova.virt.libvirt.vif [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T10:04:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-2007800372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005546420.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-2007800372',id=7,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T10:04:53Z,launched_on='np0005546420.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005546420.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1b63f7777dfa40c1bfc42162c9fd676f',ramdisk_id='',reservation_id='r-kprv5g3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-642400384',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-642400384-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T10:04:56Z,user_data=None,user_id='21c29f3a56e54486b61ecc72cb35cc3e',uuid=e3717d5b-7a3e-4d08-82c4-1fc3cef82d42,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.157 281103 DEBUG nova.network.os_vif_util [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Converting VIF {"id": "3337c926-0e11-468a-9ddc-efe5775aec35", "address": "fa:16:3e:a2:59:b6", "network": {"id": "4d14eca3-0067-494d-b2d9-059bccd18a88", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1134815813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "1b63f7777dfa40c1bfc42162c9fd676f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3337c926-0e", "ovs_interfaceid": "3337c926-0e11-468a-9ddc-efe5775aec35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.158 281103 DEBUG nova.network.os_vif_util [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:59:b6,bridge_name='br-int',has_traffic_filtering=True,id=3337c926-0e11-468a-9ddc-efe5775aec35,network=Network(4d14eca3-0067-494d-b2d9-059bccd18a88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3337c926-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.159 281103 DEBUG os_vif [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:59:b6,bridge_name='br-int',has_traffic_filtering=True,id=3337c926-0e11-468a-9ddc-efe5775aec35,network=Network(4d14eca3-0067-494d-b2d9-059bccd18a88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3337c926-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.163 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.164 281103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3337c926-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.166 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.167 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.170 281103 INFO os_vif [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:59:b6,bridge_name='br-int',has_traffic_filtering=True,id=3337c926-0e11-468a-9ddc-efe5775aec35,network=Network(4d14eca3-0067-494d-b2d9-059bccd18a88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3337c926-0e')
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.171 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.172 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.172 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.173 281103 DEBUG nova.compute.manager [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.173 281103 INFO nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Deleting instance files /var/lib/nova/instances/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_del
Dec 05 10:05:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:09.174 281103 INFO nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Deletion of /var/lib/nova/instances/e3717d5b-7a3e-4d08-82c4-1fc3cef82d42_del complete
Dec 05 10:05:09 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1634112378' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:05:09 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1982682330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:05:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e99 e99: 6 total, 6 up, 6 in
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e100 e100: 6 total, 6 up, 6 in
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.123043) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110123113, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 551, "num_deletes": 252, "total_data_size": 515072, "memory_usage": 525224, "flush_reason": "Manual Compaction"}
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110128591, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 319939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20363, "largest_seqno": 20909, "table_properties": {"data_size": 317151, "index_size": 771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7802, "raw_average_key_size": 21, "raw_value_size": 311224, "raw_average_value_size": 848, "num_data_blocks": 34, "num_entries": 367, "num_filter_entries": 367, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929095, "oldest_key_time": 1764929095, "file_creation_time": 1764929110, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5588 microseconds, and 2258 cpu microseconds.
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.128639) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 319939 bytes OK
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.128662) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.130200) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.130221) EVENT_LOG_v1 {"time_micros": 1764929110130215, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.130245) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 511779, prev total WAL file size 511779, number of live WAL files 2.
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.130988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373537' seq:72057594037927935, type:22 .. '6D6772737461740034303038' seq:0, type:0; will stop at (end)
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(312KB)], [33(18MB)]
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110131187, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 20041226, "oldest_snapshot_seqno": -1}
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 11874 keys, 17829656 bytes, temperature: kUnknown
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110218255, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 17829656, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17763730, "index_size": 35169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29701, "raw_key_size": 319577, "raw_average_key_size": 26, "raw_value_size": 17563136, "raw_average_value_size": 1479, "num_data_blocks": 1330, "num_entries": 11874, "num_filter_entries": 11874, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929110, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.218755) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 17829656 bytes
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.220706) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 230.1 rd, 204.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 18.8 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(118.4) write-amplify(55.7) OK, records in: 12395, records dropped: 521 output_compression: NoCompression
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.220740) EVENT_LOG_v1 {"time_micros": 1764929110220724, "job": 18, "event": "compaction_finished", "compaction_time_micros": 87091, "compaction_time_cpu_micros": 39531, "output_level": 6, "num_output_files": 1, "total_output_size": 17829656, "num_input_records": 12395, "num_output_records": 11874, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110220989, "job": 18, "event": "table_file_deletion", "file_number": 35}
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110223411, "job": 18, "event": "table_file_deletion", "file_number": 33}
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.130832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.223445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.223450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.223453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.223456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:05:10.223460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: pgmap v112: 177 pgs: 177 active+clean; 456 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 8.0 MiB/s rd, 11 MiB/s wr, 321 op/s
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: osdmap e99: 6 total, 6 up, 6 in
Dec 05 10:05:10 np0005546420.localdomain ceph-mon[298353]: osdmap e100: 6 total, 6 up, 6 in
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.383 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.551 281103 DEBUG nova.compute.manager [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.552 281103 DEBUG oslo_concurrency.lockutils [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.553 281103 DEBUG oslo_concurrency.lockutils [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.553 281103 DEBUG oslo_concurrency.lockutils [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.553 281103 DEBUG nova.compute.manager [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] No waiting events found dispatching network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.554 281103 WARNING nova.compute.manager [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received unexpected event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 for instance with vm_state active and task_state migrating.
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.554 281103 DEBUG nova.compute.manager [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.554 281103 DEBUG oslo_concurrency.lockutils [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.555 281103 DEBUG oslo_concurrency.lockutils [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.555 281103 DEBUG oslo_concurrency.lockutils [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.555 281103 DEBUG nova.compute.manager [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] No waiting events found dispatching network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.556 281103 WARNING nova.compute.manager [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received unexpected event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 for instance with vm_state active and task_state migrating.
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.556 281103 DEBUG nova.compute.manager [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.556 281103 DEBUG oslo_concurrency.lockutils [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.556 281103 DEBUG oslo_concurrency.lockutils [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.557 281103 DEBUG oslo_concurrency.lockutils [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.557 281103 DEBUG nova.compute.manager [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] No waiting events found dispatching network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:05:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:10.558 281103 WARNING nova.compute.manager [req-60a8d36b-81d4-4162-83c5-8456d803a357 req-f3b480b7-d20a-4bd6-8551-ba6d932dc0ac c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Received unexpected event network-vif-plugged-3337c926-0e11-468a-9ddc-efe5775aec35 for instance with vm_state active and task_state migrating.
Dec 05 10:05:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:11.891 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:05:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:11.892 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:05:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:11.892 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:05:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:11.892 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:05:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:05:12 np0005546420.localdomain ceph-mon[298353]: pgmap v115: 177 pgs: 177 active+clean; 441 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 18 MiB/s wr, 592 op/s
Dec 05 10:05:12 np0005546420.localdomain podman[309393]: 2025-12-05 10:05:12.523541922 +0000 UTC m=+0.092090465 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 10:05:12 np0005546420.localdomain podman[309393]: 2025-12-05 10:05:12.535461106 +0000 UTC m=+0.104009669 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:05:12 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.108 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Acquiring lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.109 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.109 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Lock "e3717d5b-7a3e-4d08-82c4-1fc3cef82d42-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.134 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.135 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.135 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.135 281103 DEBUG nova.compute.resource_tracker [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.136 281103 DEBUG oslo_concurrency.processutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:05:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:05:13 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1621447498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.546 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.562 281103 DEBUG oslo_concurrency.processutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
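[annotation] The 0.425s subprocess above is how the resource tracker sizes an RBD-backed disk pool: it shells out to `ceph df` and parses the JSON. A hedged sketch using the same flags as the log line; the JSON handling is illustrative (ceph df reports cluster byte counts under the top-level "stats" key):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # total_avail_bytes is cluster-wide free capacity, in bytes.
    free_gb = stats['stats']['total_avail_bytes'] / (1 << 30)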
Dec 05 10:05:13 np0005546420.localdomain ceph-mon[298353]: pgmap v116: 177 pgs: 177 active+clean; 430 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 8.8 MiB/s rd, 15 MiB/s wr, 514 op/s
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.801 281103 WARNING nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.803 281103 DEBUG nova.compute.resource_tracker [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11680MB free_disk=41.36555099487305GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.803 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.804 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.863 281103 DEBUG nova.compute.resource_tracker [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Migration for instance e3717d5b-7a3e-4d08-82c4-1fc3cef82d42 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.920 281103 DEBUG nova.compute.resource_tracker [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.925 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.929 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.958 281103 DEBUG nova.compute.resource_tracker [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Migration ea8f417d-20a3-4fe0-8f43-fb13c347e9fa is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.958 281103 DEBUG nova.compute.resource_tracker [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:05:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:13.959 281103 DEBUG nova.compute.resource_tracker [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.006 281103 DEBUG oslo_concurrency.processutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.199 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:05:14 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/393104557' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.459 281103 DEBUG oslo_concurrency.processutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.467 281103 DEBUG nova.compute.provider_tree [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.521 281103 ERROR nova.scheduler.client.report [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [req-a0a0fccb-7278-4f88-b47e-da127e57d83c] Failed to update inventory to [{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}}] for resource provider with UUID 2850b2c4-8d07-40ab-9d82-672172ca70fc.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-a0a0fccb-7278-4f88-b47e-da127e57d83c"}]}
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.552 281103 DEBUG nova.scheduler.client.report [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.580 281103 DEBUG nova.scheduler.client.report [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.581 281103 DEBUG nova.compute.provider_tree [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.606 281103 DEBUG nova.scheduler.client.report [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:05:14 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1621447498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:14 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/393104557' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:14 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/414429719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.646 281103 DEBUG nova.scheduler.client.report [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:05:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:14.687 281103 DEBUG oslo_concurrency.processutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:05:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e101 e101: 6 total, 6 up, 6 in
Dec 05 10:05:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:05:15 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2083231491' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.198 281103 DEBUG oslo_concurrency.processutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.205 281103 DEBUG nova.compute.provider_tree [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.290 281103 DEBUG nova.scheduler.client.report [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Updated inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with generation 5 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.291 281103 DEBUG nova.compute.provider_tree [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Updating resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc generation from 5 to 6 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.292 281103 DEBUG nova.compute.provider_tree [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
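[annotation] The 409 at 10:05:14 and the successful update with generation 5 -> 6 at 10:05:15 are placement's optimistic concurrency at work: every PUT to a provider's inventories carries the provider generation, a stale generation is rejected with placement.concurrent_update, and the client refreshes its view and retries. A sketch of that loop, assuming `session` is a requests-compatible session that already carries a valid keystone token and placement microversion header:

    def set_inventory(session, base_url, rp_uuid, inventories):
        for _ in range(3):
            # Re-read the provider to pick up the current generation.
            rp = session.get(
                f"{base_url}/resource_providers/{rp_uuid}").json()
            resp = session.put(
                f"{base_url}/resource_providers/{rp_uuid}/inventories",
                json={"resource_provider_generation": rp["generation"],
                      "inventories": inventories})
            if resp.status_code != 409:  # success, or a non-conflict error
                return resp
        raise RuntimeError("placement generation conflict persisted")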
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.325 281103 DEBUG nova.compute.resource_tracker [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.326 281103 DEBUG oslo_concurrency.lockutils [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.337 281103 INFO nova.compute.manager [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Migrating instance to np0005546421.localdomain finished successfully.
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.416 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.482 281103 INFO nova.scheduler.client.report [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] Deleted allocation for migration ea8f417d-20a3-4fe0-8f43-fb13c347e9fa
Dec 05 10:05:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:15.483 281103 DEBUG nova.virt.libvirt.driver [None req-e06d3bdc-c48a-442f-ba01-1347e4dba9e9 731bf35b065f4cfeb76a2066c3055e96 86cb8d3b471543839983316ef2de7b3f - - default default] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Dec 05 10:05:16 np0005546420.localdomain ceph-mon[298353]: pgmap v118: 177 pgs: 177 active+clean; 430 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.8 MiB/s rd, 5.6 MiB/s wr, 412 op/s
Dec 05 10:05:16 np0005546420.localdomain ceph-mon[298353]: osdmap e101: 6 total, 6 up, 6 in
Dec 05 10:05:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2083231491' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/259079381' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:16.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:05:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:05:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:05:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:05:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160385 "" "Go-http-client/1.1"
Dec 05 10:05:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:05:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20147 "" "Go-http-client/1.1"
Dec 05 10:05:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:17.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:05:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:17.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:05:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:17.903 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:17.954 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:17.954 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:17.955 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:17.955 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:05:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:17.956 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:05:18 np0005546420.localdomain ceph-mon[298353]: pgmap v119: 177 pgs: 177 active+clean; 351 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 6.9 MiB/s rd, 4.3 MiB/s wr, 489 op/s
Dec 05 10:05:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:05:18 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1318555642' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:18.394 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:05:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:18.599 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:05:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:18.602 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11681MB free_disk=41.50117874145508GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:05:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:18.603 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:05:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:18.603 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:05:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:18.677 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:05:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:18.677 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:05:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:18.729 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:05:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:05:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:05:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:05:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:05:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:05:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:05:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:05:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:05:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:05:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:05:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1318555642' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:19.203 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:05:19 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2025748006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:19.224 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:05:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:19.230 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:05:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:19.266 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:05:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:19.269 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:05:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:19.270 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:05:19 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:05:19.315 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:23Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a12f730>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a12f760>], id=3337c926-0e11-468a-9ddc-efe5775aec35, ip_allocation=immediate, mac_address=fa:16:3e:a2:59:b6, name=tempest-parent-211399516, network_id=4d14eca3-0067-494d-b2d9-059bccd18a88, port_security_enabled=True, project_id=1b63f7777dfa40c1bfc42162c9fd676f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=15, security_groups=['d4162554-7d79-4103-bc2a-c014e86c3743'], standard_attr_id=390, status=DOWN, tags=[], tenant_id=1b63f7777dfa40c1bfc42162c9fd676f, trunk_details=sub_ports=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0027c0>], trunk_id=fc4bfdf3-14cc-44e0-9079-e9511071cfff, updated_at=2025-12-05T10:05:18Z on network 4d14eca3-0067-494d-b2d9-059bccd18a88
Dec 05 10:05:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:05:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:05:19 np0005546420.localdomain podman[309523]: 2025-12-05 10:05:19.522029752 +0000 UTC m=+0.094524929 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:05:19 np0005546420.localdomain podman[309523]: 2025-12-05 10:05:19.557301851 +0000 UTC m=+0.129797008 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:05:19 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:05:19 np0005546420.localdomain podman[309524]: 2025-12-05 10:05:19.576169506 +0000 UTC m=+0.144358391 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:05:19 np0005546420.localdomain podman[309524]: 2025-12-05 10:05:19.580612883 +0000 UTC m=+0.148801738 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:05:19 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
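[annotation] Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair above is a transient systemd unit executing the container's configured healthcheck; podman then emits the health_status and exec_died container events seen in the journal. The equivalent invocation, sketched in Python with a container ID taken from the log:

    import subprocess

    cid = "cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a"
    # Exit status 0 means the check passed; podman records
    # health_status=healthy for the container.
    subprocess.run(["podman", "healthcheck", "run", cid], check=True)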
Dec 05 10:05:19 np0005546420.localdomain dnsmasq[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/addn_hosts - 2 addresses
Dec 05 10:05:19 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/host
Dec 05 10:05:19 np0005546420.localdomain podman[309578]: 2025-12-05 10:05:19.68821167 +0000 UTC m=+0.058009133 container kill a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:05:19 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/opts
Dec 05 10:05:19 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:05:19.990 262769 INFO neutron.agent.dhcp.agent [None req-96a521c3-d478-41cc-b1e7-fc64eab9c43c - - - - - -] DHCP configuration for ports {'3337c926-0e11-468a-9ddc-efe5775aec35'} is completed
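[annotation] The container-kill plus re-read sequence above is the DHCP agent's reload_allocations path: it rewrites the addn_hosts/host/opts files for the network, then signals the containerized dnsmasq with SIGHUP so it re-reads them without restarting (the later kill that logs "exiting on receipt of SIGTERM" is a full teardown, not a reload). Approximately, and only as a sketch:

    import subprocess

    network_id = "4d14eca3-0067-494d-b2d9-059bccd18a88"
    # SIGHUP makes dnsmasq re-read its host/opts files in place.
    subprocess.run(
        ["podman", "kill", "--signal", "HUP",
         f"neutron-dnsmasq-qdhcp-{network_id}"], check=True)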
Dec 05 10:05:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:20 np0005546420.localdomain ceph-mon[298353]: pgmap v120: 177 pgs: 177 active+clean; 351 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 6.1 MiB/s rd, 3.8 MiB/s wr, 429 op/s
Dec 05 10:05:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2025748006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:20.266 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:05:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:20.288 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:05:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:20.448 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:20 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:20.722 2 INFO neutron.agent.securitygroups_rpc [None req-3b08e34e-be59-42c6-8b30-fbeb7d168f39 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Security group member updated ['d4162554-7d79-4103-bc2a-c014e86c3743']
Dec 05 10:05:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:20.993 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:05:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:20.993 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:20.995 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:05:21 np0005546420.localdomain systemd[1]: tmp-crun.lfX5Nc.mount: Deactivated successfully.
Dec 05 10:05:21 np0005546420.localdomain dnsmasq[308153]: read /var/lib/neutron/dhcp/49a4879c-0612-443d-8b44-15b1f6a18cea/addn_hosts - 0 addresses
Dec 05 10:05:21 np0005546420.localdomain dnsmasq-dhcp[308153]: read /var/lib/neutron/dhcp/49a4879c-0612-443d-8b44-15b1f6a18cea/host
Dec 05 10:05:21 np0005546420.localdomain podman[309617]: 2025-12-05 10:05:21.016927872 +0000 UTC m=+0.075996184 container kill b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-49a4879c-0612-443d-8b44-15b1f6a18cea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:05:21 np0005546420.localdomain dnsmasq-dhcp[308153]: read /var/lib/neutron/dhcp/49a4879c-0612-443d-8b44-15b1f6a18cea/opts
Dec 05 10:05:21 np0005546420.localdomain dnsmasq[308153]: exiting on receipt of SIGTERM
Dec 05 10:05:21 np0005546420.localdomain podman[309656]: 2025-12-05 10:05:21.46227145 +0000 UTC m=+0.074737265 container kill b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-49a4879c-0612-443d-8b44-15b1f6a18cea, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:05:21 np0005546420.localdomain systemd[1]: libpod-b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a.scope: Deactivated successfully.
Dec 05 10:05:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:21.484 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 4e4a6f83-9e6c-47aa-981e-76d88a525e9b with type ""
Dec 05 10:05:21 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:21Z|00066|binding|INFO|Removing iface tapfc269b6d-01 ovn-installed in OVS
Dec 05 10:05:21 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:21Z|00067|binding|INFO|Removing lport fc269b6d-014b-4201-bcf4-5f7f3bdd4836 ovn-installed in OVS
Dec 05 10:05:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:21.486 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.3/24', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-49a4879c-0612-443d-8b44-15b1f6a18cea', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49a4879c-0612-443d-8b44-15b1f6a18cea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b63f7777dfa40c1bfc42162c9fd676f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9b3b447-6047-4886-9c84-e76d87b6b24c, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=fc269b6d-014b-4201-bcf4-5f7f3bdd4836) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:05:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:21.489 159503 INFO neutron.agent.ovn.metadata.agent [-] Port fc269b6d-014b-4201-bcf4-5f7f3bdd4836 in datapath 49a4879c-0612-443d-8b44-15b1f6a18cea unbound from our chassis
Dec 05 10:05:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:21.493 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 49a4879c-0612-443d-8b44-15b1f6a18cea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
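[annotation] The "Matched DELETE: PortBindingDeletedEvent(...)" line above shows ovsdbapp's event dispatch: the agent registers row events against OVN southbound tables and the IDL notify loop invokes whichever handler matches a change. A stripped-down sketch of such a handler, assuming ovsdbapp is available; the run() body is illustrative:

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingDeletedEvent(event.RowEvent):
        def __init__(self):
            # Mirrors the repr in the log:
            # events=('delete',), table='Port_Binding'
            super().__init__((self.ROW_DELETE,), 'Port_Binding', None)

        def run(self, event_, row, old):
            # Called by the IDL notify loop when a matching row is deleted.
            print(f"Port {row.logical_port} unbound from our chassis")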
Dec 05 10:05:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:21.534 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[ec836ae4-680d-4043-a47d-05986949c2f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:21.537 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:21 np0005546420.localdomain podman[309671]: 2025-12-05 10:05:21.56930674 +0000 UTC m=+0.086506724 container died b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-49a4879c-0612-443d-8b44-15b1f6a18cea, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:05:21 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a-userdata-shm.mount: Deactivated successfully.
Dec 05 10:05:21 np0005546420.localdomain podman[309671]: 2025-12-05 10:05:21.610798738 +0000 UTC m=+0.127998722 container remove b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-49a4879c-0612-443d-8b44-15b1f6a18cea, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 05 10:05:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:21.622 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:21 np0005546420.localdomain kernel: device tapfc269b6d-01 left promiscuous mode
Dec 05 10:05:21 np0005546420.localdomain systemd[1]: libpod-conmon-b45a98099b3580f63367a1dee1c445d53adacd7f46b31cc20c368dc6a3e2131a.scope: Deactivated successfully.
Dec 05 10:05:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:21.639 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:05:21.668 262769 INFO neutron.agent.dhcp.agent [None req-4001f97a-2e9c-44de-b378-f467028bbe2f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:05:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:05:21.669 262769 INFO neutron.agent.dhcp.agent [None req-4001f97a-2e9c-44de-b378-f467028bbe2f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:05:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:05:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:21.932 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:22 np0005546420.localdomain podman[309698]: 2025-12-05 10:05:22.00061272 +0000 UTC m=+0.079829301 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:05:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ad16be59cbaad5f6a61300dad6c951ce8b6792e925bdbb69c28e23baa7a55577-merged.mount: Deactivated successfully.
Dec 05 10:05:22 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d49a4879c\x2d0612\x2d443d\x2d8b44\x2d15b1f6a18cea.mount: Deactivated successfully.
Dec 05 10:05:22 np0005546420.localdomain podman[309698]: 2025-12-05 10:05:22.012499323 +0000 UTC m=+0.091715904 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd)
Dec 05 10:05:22 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:05:22 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:22.066 2 INFO neutron.agent.securitygroups_rpc [None req-ca74205a-80b5-4c1e-a1c5-b7623a704ab5 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Security group member updated ['d4162554-7d79-4103-bc2a-c014e86c3743']
Dec 05 10:05:22 np0005546420.localdomain ceph-mon[298353]: pgmap v121: 177 pgs: 177 active+clean; 351 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 194 op/s
Dec 05 10:05:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/4264678141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:22 np0005546420.localdomain dnsmasq[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/addn_hosts - 1 addresses
Dec 05 10:05:22 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/host
Dec 05 10:05:22 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/opts
Dec 05 10:05:22 np0005546420.localdomain podman[309735]: 2025-12-05 10:05:22.321722371 +0000 UTC m=+0.068988858 container kill a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:05:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:22.546 281103 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764929107.5442383, e3717d5b-7a3e-4d08-82c4-1fc3cef82d42 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:05:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:22.547 281103 INFO nova.compute.manager [-] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] VM Stopped (Lifecycle Event)
Dec 05 10:05:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:22.570 281103 DEBUG nova.compute.manager [None req-fea2cb3d-8edb-4ae2-88cb-d0f06717d505 - - - - - -] [instance: e3717d5b-7a3e-4d08-82c4-1fc3cef82d42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:05:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:22.997 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:05:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1476539368' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:23 np0005546420.localdomain dnsmasq[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/addn_hosts - 0 addresses
Dec 05 10:05:23 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/host
Dec 05 10:05:23 np0005546420.localdomain dnsmasq-dhcp[307541]: read /var/lib/neutron/dhcp/4d14eca3-0067-494d-b2d9-059bccd18a88/opts
Dec 05 10:05:23 np0005546420.localdomain podman[309773]: 2025-12-05 10:05:23.374751828 +0000 UTC m=+0.070509356 container kill a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:05:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:23.562 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:23 np0005546420.localdomain kernel: device tapc9513305-54 left promiscuous mode
Dec 05 10:05:23 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:23Z|00068|binding|INFO|Releasing lport c9513305-5405-4e6d-997a-b5e59856978a from this chassis (sb_readonly=0)
Dec 05 10:05:23 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:23Z|00069|binding|INFO|Setting lport c9513305-5405-4e6d-997a-b5e59856978a down in Southbound
Dec 05 10:05:23 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:23.578 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-4d14eca3-0067-494d-b2d9-059bccd18a88', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d14eca3-0067-494d-b2d9-059bccd18a88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b63f7777dfa40c1bfc42162c9fd676f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfe0d10c-51af-4255-846b-8c331654da0e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=c9513305-5405-4e6d-997a-b5e59856978a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:05:23 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:23.580 159503 INFO neutron.agent.ovn.metadata.agent [-] Port c9513305-5405-4e6d-997a-b5e59856978a in datapath 4d14eca3-0067-494d-b2d9-059bccd18a88 unbound from our chassis
Dec 05 10:05:23 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:23.583 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d14eca3-0067-494d-b2d9-059bccd18a88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:05:23 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:23.585 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[66386ae1-3a40-4749-aa09-5dbb7d455105]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:23.589 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:23.788 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:24.254 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:24 np0005546420.localdomain ceph-mon[298353]: pgmap v122: 177 pgs: 177 active+clean; 351 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.2 MiB/s rd, 31 KiB/s wr, 144 op/s
Dec 05 10:05:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:25.500 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:26.012 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:26 np0005546420.localdomain ceph-mon[298353]: pgmap v123: 177 pgs: 177 active+clean; 351 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.2 MiB/s rd, 31 KiB/s wr, 144 op/s
Dec 05 10:05:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:26.340 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:26 np0005546420.localdomain dnsmasq[307541]: exiting on receipt of SIGTERM
Dec 05 10:05:26 np0005546420.localdomain systemd[1]: libpod-a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93.scope: Deactivated successfully.
Dec 05 10:05:26 np0005546420.localdomain podman[309813]: 2025-12-05 10:05:26.801488977 +0000 UTC m=+0.050283816 container kill a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:05:26 np0005546420.localdomain podman[309830]: 2025-12-05 10:05:26.885265037 +0000 UTC m=+0.060006234 container died a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:05:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93-userdata-shm.mount: Deactivated successfully.
Dec 05 10:05:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-474772dc4e7411bc8e30ffe0ad9d9c584466ee365e5b5657b676bb8340e8f4dd-merged.mount: Deactivated successfully.
Dec 05 10:05:26 np0005546420.localdomain podman[309830]: 2025-12-05 10:05:26.935009537 +0000 UTC m=+0.109750684 container remove a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4d14eca3-0067-494d-b2d9-059bccd18a88, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:05:26 np0005546420.localdomain systemd[1]: libpod-conmon-a88c490da69503c9a19646298f3c08d6366e5ed531c78a245a0ff623414b5f93.scope: Deactivated successfully.
Dec 05 10:05:26 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d4d14eca3\x2d0067\x2d494d\x2db2d9\x2d059bccd18a88.mount: Deactivated successfully.
Dec 05 10:05:26 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:05:26.967 262769 INFO neutron.agent.dhcp.agent [None req-2a2eca76-15f7-4593-9b5a-9d0e4b91bad5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:05:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:05:27.272 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:05:28 np0005546420.localdomain ceph-mon[298353]: pgmap v124: 177 pgs: 177 active+clean; 372 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 2.1 MiB/s wr, 205 op/s
Dec 05 10:05:28 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2825839204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:28 np0005546420.localdomain podman[309872]: 2025-12-05 10:05:28.64471592 +0000 UTC m=+0.071030322 container kill 83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b30a6f59-c719-4709-86d0-d8d44de009b2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:05:28 np0005546420.localdomain dnsmasq[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/addn_hosts - 0 addresses
Dec 05 10:05:28 np0005546420.localdomain dnsmasq-dhcp[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/host
Dec 05 10:05:28 np0005546420.localdomain dnsmasq-dhcp[307718]: read /var/lib/neutron/dhcp/b30a6f59-c719-4709-86d0-d8d44de009b2/opts
Dec 05 10:05:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:28.854 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:28 np0005546420.localdomain kernel: device tap5d03bb9c-79 left promiscuous mode
Dec 05 10:05:28 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:28Z|00070|binding|INFO|Releasing lport 5d03bb9c-7960-4d4c-b6e0-f33abb4191c1 from this chassis (sb_readonly=0)
Dec 05 10:05:28 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:28Z|00071|binding|INFO|Setting lport 5d03bb9c-7960-4d4c-b6e0-f33abb4191c1 down in Southbound
Dec 05 10:05:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:28.865 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-b30a6f59-c719-4709-86d0-d8d44de009b2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b30a6f59-c719-4709-86d0-d8d44de009b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86cb8d3b471543839983316ef2de7b3f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b86c7586-dad3-4ed6-bcb9-b7e99ffa9ee8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5d03bb9c-7960-4d4c-b6e0-f33abb4191c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:05:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:28.867 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5d03bb9c-7960-4d4c-b6e0-f33abb4191c1 in datapath b30a6f59-c719-4709-86d0-d8d44de009b2 unbound from our chassis
Dec 05 10:05:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:28.873 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b30a6f59-c719-4709-86d0-d8d44de009b2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:05:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:28.875 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[5cdc4e34-b4ae-4245-a70c-27df4bd3ab2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:28.878 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:29.292 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2431517934' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:30 np0005546420.localdomain ceph-mon[298353]: pgmap v125: 177 pgs: 177 active+clean; 372 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 758 KiB/s rd, 2.0 MiB/s wr, 89 op/s
Dec 05 10:05:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:30.525 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:05:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:05:31 np0005546420.localdomain podman[309895]: 2025-12-05 10:05:31.510278311 +0000 UTC m=+0.082512892 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:05:31 np0005546420.localdomain podman[309895]: 2025-12-05 10:05:31.524392902 +0000 UTC m=+0.096627523 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:05:31 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:05:31 np0005546420.localdomain systemd[1]: tmp-crun.2iQgC8.mount: Deactivated successfully.
Dec 05 10:05:31 np0005546420.localdomain podman[309894]: 2025-12-05 10:05:31.618735975 +0000 UTC m=+0.191653516 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9)
Dec 05 10:05:31 np0005546420.localdomain podman[309894]: 2025-12-05 10:05:31.635380134 +0000 UTC m=+0.208297645 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, vcs-type=git, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 10:05:31 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:05:32 np0005546420.localdomain ceph-mon[298353]: pgmap v126: 177 pgs: 177 active+clean; 342 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 807 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Dec 05 10:05:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:34.342 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:34 np0005546420.localdomain ceph-mon[298353]: pgmap v127: 177 pgs: 177 active+clean; 306 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 882 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 05 10:05:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:34.636 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:35 np0005546420.localdomain dnsmasq[307718]: exiting on receipt of SIGTERM
Dec 05 10:05:35 np0005546420.localdomain systemd[1]: tmp-crun.2DrVs5.mount: Deactivated successfully.
Dec 05 10:05:35 np0005546420.localdomain systemd[1]: libpod-83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe.scope: Deactivated successfully.
Dec 05 10:05:35 np0005546420.localdomain podman[309954]: 2025-12-05 10:05:35.16109471 +0000 UTC m=+0.051500195 container kill 83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b30a6f59-c719-4709-86d0-d8d44de009b2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 10:05:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:05:35 np0005546420.localdomain systemd[1]: tmp-crun.TlrjJR.mount: Deactivated successfully.
Dec 05 10:05:35 np0005546420.localdomain podman[309975]: 2025-12-05 10:05:35.278074174 +0000 UTC m=+0.089386103 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:05:35 np0005546420.localdomain podman[309974]: 2025-12-05 10:05:35.297564829 +0000 UTC m=+0.108800375 container died 83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b30a6f59-c719-4709-86d0-d8d44de009b2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:05:35 np0005546420.localdomain podman[309975]: 2025-12-05 10:05:35.34537592 +0000 UTC m=+0.156687789 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 10:05:35 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:05:35 np0005546420.localdomain podman[309974]: 2025-12-05 10:05:35.402569069 +0000 UTC m=+0.213804575 container remove 83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b30a6f59-c719-4709-86d0-d8d44de009b2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:05:35 np0005546420.localdomain systemd[1]: libpod-conmon-83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe.scope: Deactivated successfully.
Dec 05 10:05:35 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:05:35.487 262769 INFO neutron.agent.dhcp.agent [None req-31ab39df-5acb-4431-838e-753a08fb9eab - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:05:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:35.559 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:35 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:05:35.757 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:05:35 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:35.969 2 INFO neutron.agent.securitygroups_rpc [None req-01829f80-3210-49b7-8dc5-90fc991ad5c8 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Security group member updated ['8c9500c3-6ac9-452e-a652-72bddc07be6d']
Dec 05 10:05:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-a85a7bef52414ae1fd73fbae05968d8ee845de09eb9db252c24fa77c0cce498f-merged.mount: Deactivated successfully.
Dec 05 10:05:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83c4a0f55a53cb520df391140ab44bc9eb24db093166420b67f629c6eada93fe-userdata-shm.mount: Deactivated successfully.
Dec 05 10:05:36 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2db30a6f59\x2dc719\x2d4709\x2d86d0\x2dd8d44de009b2.mount: Deactivated successfully.
Dec 05 10:05:36 np0005546420.localdomain ceph-mon[298353]: pgmap v128: 177 pgs: 177 active+clean; 306 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 878 KiB/s rd, 2.1 MiB/s wr, 134 op/s
Dec 05 10:05:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e102 e102: 6 total, 6 up, 6 in
Dec 05 10:05:37 np0005546420.localdomain ceph-mon[298353]: osdmap e102: 6 total, 6 up, 6 in
Dec 05 10:05:37 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:37.736 2 INFO neutron.agent.securitygroups_rpc [req-735a2183-c960-481e-9eb4-785a9e1cdde3 req-9ac83efd-8db5-413d-ab08-ac7a6bf5cdc1 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Security group member updated ['13f09786-c3de-4f80-a431-bd4239c2ee01']
Dec 05 10:05:38 np0005546420.localdomain dnsmasq[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/addn_hosts - 1 addresses
Dec 05 10:05:38 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/host
Dec 05 10:05:38 np0005546420.localdomain podman[310036]: 2025-12-05 10:05:38.186089244 +0000 UTC m=+0.060398186 container kill 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:05:38 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/opts
Dec 05 10:05:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:38.451 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:38 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e103 e103: 6 total, 6 up, 6 in
Dec 05 10:05:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:38.553 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:38 np0005546420.localdomain ceph-mon[298353]: pgmap v130: 177 pgs: 177 active+clean; 226 MiB data, 892 MiB used, 41 GiB / 42 GiB avail; 181 KiB/s rd, 138 KiB/s wr, 102 op/s
Dec 05 10:05:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:39.372 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:39 np0005546420.localdomain ceph-mon[298353]: osdmap e103: 6 total, 6 up, 6 in
Dec 05 10:05:39 np0005546420.localdomain ceph-mon[298353]: pgmap v132: 177 pgs: 177 active+clean; 226 MiB data, 892 MiB used, 41 GiB / 42 GiB avail; 154 KiB/s rd, 79 KiB/s wr, 100 op/s
Dec 05 10:05:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1737133720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:39 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e104 e104: 6 total, 6 up, 6 in
Dec 05 10:05:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:40 np0005546420.localdomain ceph-mon[298353]: osdmap e104: 6 total, 6 up, 6 in
Dec 05 10:05:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:40.609 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:41 np0005546420.localdomain ceph-mon[298353]: pgmap v134: 177 pgs: 177 active+clean; 265 MiB data, 952 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Dec 05 10:05:41 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:41.723 2 INFO neutron.agent.securitygroups_rpc [None req-e7017f28-3240-4b44-85ae-7a3dc282f638 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Security group member updated ['8c9500c3-6ac9-452e-a652-72bddc07be6d']
Dec 05 10:05:42 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2126774927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:05:43 np0005546420.localdomain podman[310058]: 2025-12-05 10:05:43.499293679 +0000 UTC m=+0.080703327 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:05:43 np0005546420.localdomain podman[310058]: 2025-12-05 10:05:43.510916954 +0000 UTC m=+0.092326562 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 10:05:43 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:05:43 np0005546420.localdomain ceph-mon[298353]: pgmap v135: 177 pgs: 177 active+clean; 307 MiB data, 1020 MiB used, 41 GiB / 42 GiB avail; 7.1 MiB/s rd, 7.0 MiB/s wr, 148 op/s
Dec 05 10:05:43 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:43Z|00072|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0
Dec 05 10:05:43 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:43Z|00073|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0
Dec 05 10:05:43 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:43Z|00074|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0
Dec 05 10:05:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:43.670 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:43.687 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:43 np0005546420.localdomain dnsmasq[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/addn_hosts - 0 addresses
Dec 05 10:05:43 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/host
Dec 05 10:05:43 np0005546420.localdomain dnsmasq-dhcp[307996]: read /var/lib/neutron/dhcp/64267419-8c47-450f-9ba4-afc8c103bf71/opts
Dec 05 10:05:43 np0005546420.localdomain podman[310094]: 2025-12-05 10:05:43.714222647 +0000 UTC m=+0.068864006 container kill 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:05:43 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:43Z|00075|binding|INFO|Releasing lport 40c14e92-17bf-4ad8-bc71-5d611aa76f67 from this chassis (sb_readonly=0)
Dec 05 10:05:43 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:05:43Z|00076|binding|INFO|Setting lport 40c14e92-17bf-4ad8-bc71-5d611aa76f67 down in Southbound
Dec 05 10:05:43 np0005546420.localdomain kernel: device tap40c14e92-17 left promiscuous mode
Dec 05 10:05:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:43.942 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:43.965 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:44 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:44.209 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-64267419-8c47-450f-9ba4-afc8c103bf71', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64267419-8c47-450f-9ba4-afc8c103bf71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41095831ac6247b0a5ea030490af998f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9200024c-1bb2-4d9b-96df-67796d72a9e4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=40c14e92-17bf-4ad8-bc71-5d611aa76f67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:05:44 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:44.212 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 40c14e92-17bf-4ad8-bc71-5d611aa76f67 in datapath 64267419-8c47-450f-9ba4-afc8c103bf71 unbound from our chassis
Dec 05 10:05:44 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:44.216 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64267419-8c47-450f-9ba4-afc8c103bf71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:05:44 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:05:44.217 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[61bc0c58-03b3-41b6-b2d7-555538ac8b27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:05:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:44.374 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:44 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1463241569' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e105 e105: 6 total, 6 up, 6 in
Dec 05 10:05:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:45.641 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:46 np0005546420.localdomain ceph-mon[298353]: pgmap v136: 177 pgs: 177 active+clean; 307 MiB data, 1020 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 122 op/s
Dec 05 10:05:46 np0005546420.localdomain ceph-mon[298353]: osdmap e105: 6 total, 6 up, 6 in
Dec 05 10:05:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:05:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:05:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:05:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:05:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:05:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18728 "" "Go-http-client/1.1"
Dec 05 10:05:48 np0005546420.localdomain ceph-mon[298353]: pgmap v138: 177 pgs: 177 active+clean; 226 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 6.5 MiB/s rd, 5.8 MiB/s wr, 216 op/s
Dec 05 10:05:48 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/329139697' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:05:48 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3531840413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:05:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:05:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:05:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:05:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:05:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:05:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:05:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:05:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:05:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:05:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:05:49 np0005546420.localdomain snmpd[68010]: empty variable list in _query
Dec 05 10:05:49 np0005546420.localdomain snmpd[68010]: empty variable list in _query
Dec 05 10:05:49 np0005546420.localdomain snmpd[68010]: empty variable list in _query
Dec 05 10:05:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:49.412 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:49 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e106 e106: 6 total, 6 up, 6 in
Dec 05 10:05:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:50 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:50.149 2 INFO neutron.agent.securitygroups_rpc [req-74ecf99b-e345-4630-ad91-053313f6c446 req-1919fa42-a2d3-4e2c-a14a-fd250addbcf1 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['7dd07b03-f51a-4652-84b7-145d368874a1']
Dec 05 10:05:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:05:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:05:50 np0005546420.localdomain podman[310115]: 2025-12-05 10:05:50.516341038 +0000 UTC m=+0.088918948 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:05:50 np0005546420.localdomain ceph-mon[298353]: pgmap v139: 177 pgs: 177 active+clean; 226 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 181 op/s
Dec 05 10:05:50 np0005546420.localdomain ceph-mon[298353]: osdmap e106: 6 total, 6 up, 6 in
Dec 05 10:05:50 np0005546420.localdomain podman[310115]: 2025-12-05 10:05:50.549716938 +0000 UTC m=+0.122294848 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:05:50 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:05:50 np0005546420.localdomain podman[310116]: 2025-12-05 10:05:50.629177096 +0000 UTC m=+0.197154415 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 05 10:05:50 np0005546420.localdomain podman[310116]: 2025-12-05 10:05:50.659710479 +0000 UTC m=+0.227687748 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 10:05:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:50.673 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:50 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:05:50 np0005546420.localdomain dnsmasq[307996]: exiting on receipt of SIGTERM
Dec 05 10:05:50 np0005546420.localdomain podman[310171]: 2025-12-05 10:05:50.872830841 +0000 UTC m=+0.072950390 container kill 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:05:50 np0005546420.localdomain systemd[1]: libpod-0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05.scope: Deactivated successfully.
Dec 05 10:05:50 np0005546420.localdomain podman[310186]: 2025-12-05 10:05:50.939715035 +0000 UTC m=+0.046876784 container died 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 10:05:51 np0005546420.localdomain podman[310186]: 2025-12-05 10:05:51.006086403 +0000 UTC m=+0.113248152 container remove 0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64267419-8c47-450f-9ba4-afc8c103bf71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:05:51 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:05:51.039 262769 INFO neutron.agent.dhcp.agent [None req-5590ae08-6a9b-4d62-9e5b-066a03f8cc98 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:05:51 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:05:51.040 262769 INFO neutron.agent.dhcp.agent [None req-5590ae08-6a9b-4d62-9e5b-066a03f8cc98 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:05:51 np0005546420.localdomain systemd[1]: libpod-conmon-0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05.scope: Deactivated successfully.
Dec 05 10:05:51 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:51.171 2 INFO neutron.agent.securitygroups_rpc [req-23821fcd-f47c-47e8-9ea5-4e64f3e1ab82 req-d1e78378-1930-4a1c-9b70-8746789333ca 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['186820cc-187b-4934-9cac-c70ced43993b']
Dec 05 10:05:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:51.266 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-980f7199c576e15abdbc6fb24f98160c4a354a48ac38db7fb54cf92b7bd3658a-merged.mount: Deactivated successfully.
Dec 05 10:05:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b0892b8d8bf46f68c461ac9b4efcbf6a9f0cc3302b7d124ac2b7932fd5cbf05-userdata-shm.mount: Deactivated successfully.
Dec 05 10:05:51 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d64267419\x2d8c47\x2d450f\x2d9ba4\x2dafc8c103bf71.mount: Deactivated successfully.
Dec 05 10:05:51 np0005546420.localdomain ceph-mon[298353]: pgmap v141: 177 pgs: 177 active+clean; 228 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 142 op/s
Dec 05 10:05:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:05:52 np0005546420.localdomain podman[310210]: 2025-12-05 10:05:52.514216836 +0000 UTC m=+0.092265450 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 05 10:05:52 np0005546420.localdomain podman[310210]: 2025-12-05 10:05:52.528364188 +0000 UTC m=+0.106412822 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:05:52 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:05:53 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:53.365 2 INFO neutron.agent.securitygroups_rpc [req-9c27cc3e-5b74-43d9-a2de-714df95f5d49 req-2dc71e88-498d-40ef-8ade-7140861a565f 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['b81ee3fd-eca2-47b4-8eaa-4f630bc76eb0']
Dec 05 10:05:53 np0005546420.localdomain sshd[310229]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:05:54 np0005546420.localdomain ceph-mon[298353]: pgmap v142: 177 pgs: 177 active+clean; 226 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 7.5 MiB/s rd, 5.8 MiB/s wr, 256 op/s
Dec 05 10:05:54 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/125528686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:05:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:54.444 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:05:55 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:55.237 2 INFO neutron.agent.securitygroups_rpc [req-24e84f8d-6e8f-4557-9205-54b3029f5245 req-092162e9-9b4d-4423-bd91-0f1f4f4310ec 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['b94edc62-bf5e-4c16-aafe-b39cd05a1a10']
Dec 05 10:05:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e107 e107: 6 total, 6 up, 6 in
Dec 05 10:05:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:55.713 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:56 np0005546420.localdomain sudo[310231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:05:56 np0005546420.localdomain sudo[310231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:05:56 np0005546420.localdomain sudo[310231]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:56 np0005546420.localdomain sudo[310249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:05:56 np0005546420.localdomain sudo[310249]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:05:56 np0005546420.localdomain ceph-mon[298353]: pgmap v143: 177 pgs: 177 active+clean; 226 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 6.2 MiB/s rd, 4.8 MiB/s wr, 212 op/s
Dec 05 10:05:56 np0005546420.localdomain ceph-mon[298353]: osdmap e107: 6 total, 6 up, 6 in
Dec 05 10:05:56 np0005546420.localdomain sshd[310299]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:05:56 np0005546420.localdomain sudo[310249]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:57 np0005546420.localdomain sudo[310300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:05:57 np0005546420.localdomain sudo[310300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:05:57 np0005546420.localdomain sudo[310300]: pam_unix(sudo:session): session closed for user root
Dec 05 10:05:57 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:05:57 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:05:57 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:05:57 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:05:57 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:57.572 2 INFO neutron.agent.securitygroups_rpc [req-025a97c7-a464-4a3a-86c1-61297e119814 req-de42a876-d25d-468b-a0f3-d33f17b567b7 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['32984ea2-4ec9-4d1b-aa2d-d8f6901ec630']
Dec 05 10:05:58 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:58.473 2 INFO neutron.agent.securitygroups_rpc [req-28b1e81a-aa3a-4892-8d38-c4574c829b60 req-fdb05b8b-d1da-458f-885e-126d06eb03b9 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['32984ea2-4ec9-4d1b-aa2d-d8f6901ec630']
Dec 05 10:05:58 np0005546420.localdomain ceph-mon[298353]: pgmap v145: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 8.1 MiB/s rd, 5.8 MiB/s wr, 247 op/s
Dec 05 10:05:58 np0005546420.localdomain sshd[310229]: Connection reset by 198.235.24.77 port 58748 [preauth]
Dec 05 10:05:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:05:59.481 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:05:59 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:05:59.504 2 INFO neutron.agent.securitygroups_rpc [req-41735c34-10dd-4511-8a89-ab34c2aff62e req-23ce0828-96f4-4e84-afaa-2db1869af464 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['32984ea2-4ec9-4d1b-aa2d-d8f6901ec630']
Dec 05 10:05:59 np0005546420.localdomain ceph-mon[298353]: pgmap v146: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 6.8 MiB/s rd, 4.9 MiB/s wr, 206 op/s
Dec 05 10:06:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:00 np0005546420.localdomain sshd[310299]: Connection reset by authenticating user root 45.140.17.124 port 34728 [preauth]
Dec 05 10:06:00 np0005546420.localdomain sshd[310319]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:06:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:00.749 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:01 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:06:02 np0005546420.localdomain sshd[310319]: Connection reset by authenticating user root 45.140.17.124 port 34742 [preauth]
Dec 05 10:06:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:06:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:06:02 np0005546420.localdomain systemd[1]: tmp-crun.SHMoLM.mount: Deactivated successfully.
Dec 05 10:06:02 np0005546420.localdomain podman[310321]: 2025-12-05 10:06:02.439906201 +0000 UTC m=+0.104024860 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 10:06:02 np0005546420.localdomain sshd[310351]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:06:02 np0005546420.localdomain podman[310322]: 2025-12-05 10:06:02.478887472 +0000 UTC m=+0.140752422 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:06:02 np0005546420.localdomain ceph-mon[298353]: pgmap v147: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 158 op/s
Dec 05 10:06:02 np0005546420.localdomain podman[310321]: 2025-12-05 10:06:02.523750663 +0000 UTC m=+0.187869302 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, version=9.6)
Dec 05 10:06:02 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:06:02 np0005546420.localdomain podman[310322]: 2025-12-05 10:06:02.543218058 +0000 UTC m=+0.205083018 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:06:02 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:06:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:04.125 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:06:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:04.126 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:06:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:04.126 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:06:04 np0005546420.localdomain sshd[310351]: Invalid user test2 from 45.140.17.124 port 27696
Dec 05 10:06:04 np0005546420.localdomain ceph-mon[298353]: pgmap v148: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 1.4 KiB/s wr, 67 op/s
Dec 05 10:06:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:04.526 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:06:05 np0005546420.localdomain systemd[1]: tmp-crun.5qJ9mx.mount: Deactivated successfully.
Dec 05 10:06:05 np0005546420.localdomain podman[310367]: 2025-12-05 10:06:05.519174322 +0000 UTC m=+0.095488118 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:06:05 np0005546420.localdomain podman[310367]: 2025-12-05 10:06:05.614743003 +0000 UTC m=+0.191056749 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:06:05 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:06:05 np0005546420.localdomain sshd[310351]: Connection reset by invalid user test2 45.140.17.124 port 27696 [preauth]
Dec 05 10:06:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:05.754 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:05 np0005546420.localdomain sshd[310392]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:06:06 np0005546420.localdomain ceph-mon[298353]: pgmap v149: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 1.4 KiB/s wr, 67 op/s
Dec 05 10:06:07 np0005546420.localdomain ceph-mon[298353]: pgmap v150: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 933 KiB/s rd, 1.2 KiB/s wr, 57 op/s
Dec 05 10:06:08 np0005546420.localdomain sshd[310392]: Connection reset by authenticating user root 45.140.17.124 port 27706 [preauth]
Dec 05 10:06:08 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e108 e108: 6 total, 6 up, 6 in
Dec 05 10:06:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:09.559 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:09 np0005546420.localdomain sshd[310394]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:06:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:09.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:06:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:09.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:06:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e109 e109: 6 total, 6 up, 6 in
Dec 05 10:06:09 np0005546420.localdomain ceph-mon[298353]: osdmap e108: 6 total, 6 up, 6 in
Dec 05 10:06:09 np0005546420.localdomain ceph-mon[298353]: pgmap v152: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:06:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:10.755 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:10.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:06:10 np0005546420.localdomain ceph-mon[298353]: osdmap e109: 6 total, 6 up, 6 in
Dec 05 10:06:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e110 e110: 6 total, 6 up, 6 in
Dec 05 10:06:11 np0005546420.localdomain ceph-mon[298353]: osdmap e110: 6 total, 6 up, 6 in
Dec 05 10:06:11 np0005546420.localdomain ceph-mon[298353]: pgmap v155: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 6.2 KiB/s wr, 47 op/s
Dec 05 10:06:11 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e111 e111: 6 total, 6 up, 6 in
Dec 05 10:06:12 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:12.012 262769 INFO neutron.agent.linux.ip_lib [None req-9f2aad40-9c98-4009-a71f-074522d2f431 - - - - - -] Device tapface5f2a-55 cannot be used as it has no MAC address
Dec 05 10:06:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:12.087 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:12 np0005546420.localdomain kernel: device tapface5f2a-55 entered promiscuous mode
Dec 05 10:06:12 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929172.0998] manager: (tapface5f2a-55): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 05 10:06:12 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:12Z|00077|binding|INFO|Claiming lport face5f2a-5573-4b59-a2d0-e87dc8f9b940 for this chassis.
Dec 05 10:06:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:12.101 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:12 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:12Z|00078|binding|INFO|face5f2a-5573-4b59-a2d0-e87dc8f9b940: Claiming unknown
Dec 05 10:06:12 np0005546420.localdomain systemd-udevd[310406]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:06:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:12.114 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-c1b4a004-602e-48a9-bb00-bddac320fcac', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1b4a004-602e-48a9-bb00-bddac320fcac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0588d317f184bd5b4c00fddf19c9c64', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e4e951-2d16-4ef8-ad77-3879d13c3d14, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=face5f2a-5573-4b59-a2d0-e87dc8f9b940) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:06:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:12.116 159503 INFO neutron.agent.ovn.metadata.agent [-] Port face5f2a-5573-4b59-a2d0-e87dc8f9b940 in datapath c1b4a004-602e-48a9-bb00-bddac320fcac bound to our chassis
Dec 05 10:06:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:12.118 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c1b4a004-602e-48a9-bb00-bddac320fcac or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:06:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:12.119 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4d07ad-d023-4ac0-b5ce-57a9fa1407d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:06:12 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapface5f2a-55: No such device
Dec 05 10:06:12 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:12Z|00079|binding|INFO|Setting lport face5f2a-5573-4b59-a2d0-e87dc8f9b940 ovn-installed in OVS
Dec 05 10:06:12 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:12Z|00080|binding|INFO|Setting lport face5f2a-5573-4b59-a2d0-e87dc8f9b940 up in Southbound
Dec 05 10:06:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:12.137 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:12 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapface5f2a-55: No such device
Dec 05 10:06:12 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapface5f2a-55: No such device
Dec 05 10:06:12 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapface5f2a-55: No such device
Dec 05 10:06:12 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapface5f2a-55: No such device
Dec 05 10:06:12 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapface5f2a-55: No such device
Dec 05 10:06:12 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapface5f2a-55: No such device
Dec 05 10:06:12 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapface5f2a-55: No such device
Dec 05 10:06:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:12.212 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:12.244 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:12 np0005546420.localdomain sshd[310394]: Connection reset by authenticating user root 45.140.17.124 port 27730 [preauth]
Dec 05 10:06:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:12.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:06:12 np0005546420.localdomain ceph-mon[298353]: osdmap e111: 6 total, 6 up, 6 in
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:06:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:06:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
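The seventeen "Skip pollster" lines above are ceilometer's compute agent walking its pollster list and finding no instances to meter on this node this cycle. A minimal sketch of that skip pattern, assuming a simplified manager (not ceilometer's actual code):

    # Hedged sketch: each pollster gets whatever its per-cycle discovery
    # returned; an empty list means nothing to meter, so the pollster is
    # skipped with a debug line like the ones above.
    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("polling.manager")

    def poll_and_notify(pollsters, discovered):
        for name in pollsters:
            resources = discovered.get(name, [])
            if not resources:
                LOG.debug("Skip pollster %s, no resources found this cycle",
                          name)
                continue
            LOG.debug("Polling %s for %d resource(s)", name, len(resources))

    # An idle compute node: discovery returned nothing for any pollster.
    poll_and_notify(["memory.usage", "cpu", "disk.device.usage"], {})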
Dec 05 10:06:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e112 e112: 6 total, 6 up, 6 in
Dec 05 10:06:13 np0005546420.localdomain podman[310478]: 2025-12-05 10:06:13.304582748 +0000 UTC m=+0.088587527 container create 73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1b4a004-602e-48a9-bb00-bddac320fcac, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 05 10:06:13 np0005546420.localdomain systemd[1]: Started libpod-conmon-73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4.scope.
Dec 05 10:06:13 np0005546420.localdomain podman[310478]: 2025-12-05 10:06:13.260150731 +0000 UTC m=+0.044155530 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:06:13 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:06:13 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45aa8577cb59487083e02a0d6100e690d84a2f1ddb3ffb303e3b75856f0ea2c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:13 np0005546420.localdomain podman[310478]: 2025-12-05 10:06:13.385444029 +0000 UTC m=+0.169448808 container init 73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1b4a004-602e-48a9-bb00-bddac320fcac, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:06:13 np0005546420.localdomain podman[310478]: 2025-12-05 10:06:13.394769784 +0000 UTC m=+0.178774563 container start 73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1b4a004-602e-48a9-bb00-bddac320fcac, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 05 10:06:13 np0005546420.localdomain dnsmasq[310497]: started, version 2.85 cachesize 150
Dec 05 10:06:13 np0005546420.localdomain dnsmasq[310497]: DNS service limited to local subnets
Dec 05 10:06:13 np0005546420.localdomain dnsmasq[310497]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:06:13 np0005546420.localdomain dnsmasq[310497]: warning: no upstream servers configured
Dec 05 10:06:13 np0005546420.localdomain dnsmasq-dhcp[310497]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:06:13 np0005546420.localdomain dnsmasq[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/addn_hosts - 0 addresses
Dec 05 10:06:13 np0005546420.localdomain dnsmasq-dhcp[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/host
Dec 05 10:06:13 np0005546420.localdomain dnsmasq-dhcp[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/opts
Dec 05 10:06:13 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:13.614 262769 INFO neutron.agent.dhcp.agent [None req-d6be16ab-0f5c-4659-932d-542a4738454c - - - - - -] DHCP configuration for ports {'ff644aa4-455e-4210-a7c4-a9fc4b5edcad'} is completed
Dec 05 10:06:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:13.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:06:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:13.870 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:06:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:13.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:06:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:13.965 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
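The four nova_compute lines above are one pass of the _heal_instance_info_cache periodic task: rebuild the instance list, find it empty, and exit. A stand-in sketch of that flow (nova drives it through oslo_service.periodic_task; this is deliberately simplified):

    # Hedged sketch of the heal pass logged above.
    def heal_instance_info_cache(list_instances):
        # "Rebuilding the list of instances to heal"
        instances = list_instances()
        if not instances:
            print("Didn't find any instances for network info cache update.")
            return
        for instance in instances:
            print(f"refreshing network info cache for {instance}")

    # No instances on this host, matching the log above.
    heal_instance_info_cache(lambda: [])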
Dec 05 10:06:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e113 e113: 6 total, 6 up, 6 in
Dec 05 10:06:14 np0005546420.localdomain ceph-mon[298353]: osdmap e112: 6 total, 6 up, 6 in
Dec 05 10:06:14 np0005546420.localdomain ceph-mon[298353]: pgmap v158: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 13 KiB/s wr, 112 op/s
Dec 05 10:06:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:06:14 np0005546420.localdomain podman[310498]: 2025-12-05 10:06:14.253897948 +0000 UTC m=+0.080469769 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 10:06:14 np0005546420.localdomain podman[310498]: 2025-12-05 10:06:14.297286599 +0000 UTC m=+0.123858390 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:06:14 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:06:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:14.602 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:15 np0005546420.localdomain ceph-mon[298353]: osdmap e113: 6 total, 6 up, 6 in
Dec 05 10:06:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:15.793 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:15.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:06:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e114 e114: 6 total, 6 up, 6 in
Dec 05 10:06:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:16.419 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:06:15Z, description=, device_id=105df6c0-f411-4822-ad27-efbd28afab73, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a02c670>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a02c130>], id=7e2a39d3-e2f6-4316-bd4a-2ee6b29e7d58, ip_allocation=immediate, mac_address=fa:16:3e:94:ce:2c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:06:10Z, description=, dns_domain=, id=c1b4a004-602e-48a9-bb00-bddac320fcac, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-1409165773-network, port_security_enabled=True, project_id=a0588d317f184bd5b4c00fddf19c9c64, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=36142, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=803, status=ACTIVE, subnets=['9c827b55-33b5-4221-be75-57695f23c14c'], tags=[], tenant_id=a0588d317f184bd5b4c00fddf19c9c64, updated_at=2025-12-05T10:06:11Z, vlan_transparent=None, network_id=c1b4a004-602e-48a9-bb00-bddac320fcac, port_security_enabled=False, project_id=a0588d317f184bd5b4c00fddf19c9c64, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=850, status=DOWN, tags=[], tenant_id=a0588d317f184bd5b4c00fddf19c9c64, updated_at=2025-12-05T10:06:16Z on network c1b4a004-602e-48a9-bb00-bddac320fcac
Dec 05 10:06:16 np0005546420.localdomain ceph-mon[298353]: pgmap v160: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 13 KiB/s wr, 106 op/s
Dec 05 10:06:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1947143728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:06:16 np0005546420.localdomain ceph-mon[298353]: osdmap e114: 6 total, 6 up, 6 in
Dec 05 10:06:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:06:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:06:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:06:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:06:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:06:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18723 "" "Go-http-client/1.1"
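The two GET lines above are a client (a Go-http-client, per the user agent) hitting podman's libpod REST API over its unix socket. A sketch of the same containers/json query from Python, assuming the standard root socket path /run/podman/podman.sock:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the access-log line above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")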
Dec 05 10:06:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:17.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:06:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:17.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:06:18 np0005546420.localdomain ceph-mon[298353]: pgmap v162: 177 pgs: 18 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 149 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 132 KiB/s rd, 20 KiB/s wr, 187 op/s
Dec 05 10:06:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:06:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:06:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:06:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:06:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:06:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:06:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:06:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:06:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:06:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:06:19 np0005546420.localdomain dnsmasq[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/addn_hosts - 1 addresses
Dec 05 10:06:19 np0005546420.localdomain dnsmasq-dhcp[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/host
Dec 05 10:06:19 np0005546420.localdomain dnsmasq-dhcp[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/opts
Dec 05 10:06:19 np0005546420.localdomain podman[310534]: 2025-12-05 10:06:19.100108023 +0000 UTC m=+0.056603048 container kill 73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1b4a004-602e-48a9-bb00-bddac320fcac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:06:19 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:19.473 262769 INFO neutron.agent.dhcp.agent [None req-49632f19-c953-40d7-990a-96fd5c7f4c7a - - - - - -] DHCP configuration for ports {'7e2a39d3-e2f6-4316-bd4a-2ee6b29e7d58'} is completed
Dec 05 10:06:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e115 e115: 6 total, 6 up, 6 in
Dec 05 10:06:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/139994625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:06:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:19.642 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:19.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:06:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:19.898 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:06:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:19.899 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:06:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:19.899 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
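The acquire/release trio above is oslo.concurrency's lock bookkeeping: how long the caller waited for the "compute_resources" lock and how long it held it. A threading.Lock stand-in for the same pattern (the real code uses oslo_concurrency.lockutils):

    import threading
    import time

    compute_resources = threading.Lock()

    def clean_compute_node_cache():
        t0 = time.monotonic()
        with compute_resources:               # "Acquiring lock ..."
            waited = time.monotonic() - t0    # ":: waited 0.000s"
            t1 = time.monotonic()
            # ... critical section: prune cached compute-node records ...
            held = time.monotonic() - t1      # ":: held 0.000s"
        print(f'Lock "compute_resources": waited {waited:.3f}s, held {held:.3f}s')

    clean_compute_node_cache()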
Dec 05 10:06:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:19.899 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:06:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:19.899 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:06:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:20.098 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:06:15Z, description=, device_id=105df6c0-f411-4822-ad27-efbd28afab73, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a083160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0837c0>], id=7e2a39d3-e2f6-4316-bd4a-2ee6b29e7d58, ip_allocation=immediate, mac_address=fa:16:3e:94:ce:2c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:06:10Z, description=, dns_domain=, id=c1b4a004-602e-48a9-bb00-bddac320fcac, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-1409165773-network, port_security_enabled=True, project_id=a0588d317f184bd5b4c00fddf19c9c64, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=36142, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=803, status=ACTIVE, subnets=['9c827b55-33b5-4221-be75-57695f23c14c'], tags=[], tenant_id=a0588d317f184bd5b4c00fddf19c9c64, updated_at=2025-12-05T10:06:11Z, vlan_transparent=None, network_id=c1b4a004-602e-48a9-bb00-bddac320fcac, port_security_enabled=False, project_id=a0588d317f184bd5b4c00fddf19c9c64, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=850, status=DOWN, tags=[], tenant_id=a0588d317f184bd5b4c00fddf19c9c64, updated_at=2025-12-05T10:06:16Z on network c1b4a004-602e-48a9-bb00-bddac320fcac
Dec 05 10:06:20 np0005546420.localdomain sshd[310590]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:06:20 np0005546420.localdomain systemd[1]: tmp-crun.W4vLyg.mount: Deactivated successfully.
Dec 05 10:06:20 np0005546420.localdomain dnsmasq[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/addn_hosts - 1 addresses
Dec 05 10:06:20 np0005546420.localdomain dnsmasq-dhcp[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/host
Dec 05 10:06:20 np0005546420.localdomain podman[310593]: 2025-12-05 10:06:20.36092121 +0000 UTC m=+0.071262986 container kill 73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1b4a004-602e-48a9-bb00-bddac320fcac, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:06:20 np0005546420.localdomain dnsmasq-dhcp[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/opts
Dec 05 10:06:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:06:20 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2501743856' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:06:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:20.423 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
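The "Running cmd (subprocess)" / "CMD ... returned: 0" pair above is nova's resource tracker shelling out to ceph to size its storage pool. A sketch of the same call, using the exact command line from the log and assuming the usual ceph df JSON layout (cluster totals under a top-level "stats" key):

    import json
    import subprocess

    def ceph_avail_bytes():
        out = subprocess.check_output(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        # Cluster-wide totals live under "stats" in the JSON output.
        return json.loads(out)["stats"]["total_avail_bytes"]

    # The pgmap lines above report "41 GiB / 42 GiB avail" for this cluster.
    print(round(ceph_avail_bytes() / 2**30, 1), "GiB available")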
Dec 05 10:06:20 np0005546420.localdomain ceph-mon[298353]: pgmap v163: 177 pgs: 18 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 149 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 112 KiB/s rd, 17 KiB/s wr, 158 op/s
Dec 05 10:06:20 np0005546420.localdomain ceph-mon[298353]: osdmap e115: 6 total, 6 up, 6 in
Dec 05 10:06:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2501743856' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:06:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e116 e116: 6 total, 6 up, 6 in
Dec 05 10:06:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:20.667 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:06:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:20.668 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11663MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:06:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:20.668 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:06:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:20.669 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:06:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:20.730 262769 INFO neutron.agent.dhcp.agent [None req-b300d547-7961-4419-a950-e822c3207fa7 - - - - - -] DHCP configuration for ports {'7e2a39d3-e2f6-4316-bd4a-2ee6b29e7d58'} is completed
Dec 05 10:06:20 np0005546420.localdomain sshd[310590]: Invalid user admin from 78.128.112.74 port 39098
Dec 05 10:06:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:06:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:06:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:20.771 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:06:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:20.771 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:06:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:20.847 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:06:20 np0005546420.localdomain sshd[310590]: Connection closed by invalid user admin 78.128.112.74 port 39098 [preauth]
Dec 05 10:06:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:20.865 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:20 np0005546420.localdomain podman[310615]: 2025-12-05 10:06:20.880389346 +0000 UTC m=+0.124142659 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:06:20 np0005546420.localdomain podman[310615]: 2025-12-05 10:06:20.885853994 +0000 UTC m=+0.129607307 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:06:20 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:06:20 np0005546420.localdomain podman[310616]: 2025-12-05 10:06:20.946449052 +0000 UTC m=+0.186343767 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 05 10:06:20 np0005546420.localdomain podman[310616]: 2025-12-05 10:06:20.974682508 +0000 UTC m=+0.214577213 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:06:20 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:06:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:06:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/892556572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:06:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:21.275 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:06:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:21.281 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:06:21 np0005546420.localdomain ceph-mon[298353]: osdmap e116: 6 total, 6 up, 6 in
Dec 05 10:06:21 np0005546420.localdomain ceph-mon[298353]: pgmap v166: 177 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 173 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 177 KiB/s rd, 23 KiB/s wr, 251 op/s
Dec 05 10:06:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/892556572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:06:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:22.848 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:22.849 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:06:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:22.850 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:06:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:22.865 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
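The inventory payload above is what this node reports to placement. Per resource class, the capacity the scheduler can hand out works out to (total - reserved) * allocation_ratio; a quick check against the numbers in the log:

    # Worked example using the inventory data from the line above.
    inventory = {
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # MEMORY_MB 15226.0, VCPU 128.0, DISK_GB 40.0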
Dec 05 10:06:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:22.868 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:06:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:22.868 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:06:23 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:23Z|00081|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0
Dec 05 10:06:23 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:23Z|00082|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0
Dec 05 10:06:23 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:23Z|00083|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0
Dec 05 10:06:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:23.141 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e117 e117: 6 total, 6 up, 6 in
Dec 05 10:06:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:23.161 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:23.170 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:23.182 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:23.194 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:23.237 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:06:23 np0005546420.localdomain podman[310678]: 2025-12-05 10:06:23.503619627 +0000 UTC m=+0.081455550 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec 05 10:06:23 np0005546420.localdomain podman[310678]: 2025-12-05 10:06:23.520720251 +0000 UTC m=+0.098556204 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 05 10:06:23 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:06:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:23.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:06:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:24.139 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:24 np0005546420.localdomain ceph-mon[298353]: pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 176 KiB/s rd, 21 KiB/s wr, 246 op/s
Dec 05 10:06:24 np0005546420.localdomain ceph-mon[298353]: osdmap e117: 6 total, 6 up, 6 in
Dec 05 10:06:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3429503590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:06:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e118 e118: 6 total, 6 up, 6 in
Dec 05 10:06:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:24.688 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:24.925 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:25.034 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:25 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:25Z|00084|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0
Dec 05 10:06:25 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:25Z|00085|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0
Dec 05 10:06:25 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:25Z|00086|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0
Dec 05 10:06:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:25.138 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:25.141 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:25.160 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1518590665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:06:25 np0005546420.localdomain ceph-mon[298353]: osdmap e118: 6 total, 6 up, 6 in
Dec 05 10:06:25 np0005546420.localdomain systemd[1]: tmp-crun.3SOauC.mount: Deactivated successfully.
Dec 05 10:06:25 np0005546420.localdomain podman[310713]: 2025-12-05 10:06:25.205446274 +0000 UTC m=+0.070890496 container kill 73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1b4a004-602e-48a9-bb00-bddac320fcac, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:06:25 np0005546420.localdomain dnsmasq[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/addn_hosts - 0 addresses
Dec 05 10:06:25 np0005546420.localdomain dnsmasq-dhcp[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/host
Dec 05 10:06:25 np0005546420.localdomain dnsmasq-dhcp[310497]: read /var/lib/neutron/dhcp/c1b4a004-602e-48a9-bb00-bddac320fcac/opts
Dec 05 10:06:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:25.384 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:25 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:25Z|00087|binding|INFO|Releasing lport face5f2a-5573-4b59-a2d0-e87dc8f9b940 from this chassis (sb_readonly=0)
Dec 05 10:06:25 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:25Z|00088|binding|INFO|Setting lport face5f2a-5573-4b59-a2d0-e87dc8f9b940 down in Southbound
Dec 05 10:06:25 np0005546420.localdomain kernel: device tapface5f2a-55 left promiscuous mode
Dec 05 10:06:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:25.395 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:25.399 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-c1b4a004-602e-48a9-bb00-bddac320fcac', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1b4a004-602e-48a9-bb00-bddac320fcac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a0588d317f184bd5b4c00fddf19c9c64', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e4e951-2d16-4ef8-ad77-3879d13c3d14, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=face5f2a-5573-4b59-a2d0-e87dc8f9b940) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:06:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:25.401 159503 INFO neutron.agent.ovn.metadata.agent [-] Port face5f2a-5573-4b59-a2d0-e87dc8f9b940 in datapath c1b4a004-602e-48a9-bb00-bddac320fcac unbound from our chassis
Dec 05 10:06:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:25.403 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1b4a004-602e-48a9-bb00-bddac320fcac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:06:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:25.404 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a50c5440-09c8-4596-8b4c-3aff950fbbd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:06:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:25.412 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e119 e119: 6 total, 6 up, 6 in
Dec 05 10:06:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:25.873 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:26 np0005546420.localdomain ceph-mon[298353]: pgmap v170: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 100 KiB/s rd, 7.7 KiB/s wr, 137 op/s
Dec 05 10:06:26 np0005546420.localdomain ceph-mon[298353]: osdmap e119: 6 total, 6 up, 6 in
Dec 05 10:06:27 np0005546420.localdomain podman[310753]: 2025-12-05 10:06:27.417542434 +0000 UTC m=+0.059333351 container kill 73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1b4a004-602e-48a9-bb00-bddac320fcac, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 10:06:27 np0005546420.localdomain systemd[1]: tmp-crun.aipimp.mount: Deactivated successfully.
Dec 05 10:06:27 np0005546420.localdomain dnsmasq[310497]: exiting on receipt of SIGTERM
Dec 05 10:06:27 np0005546420.localdomain systemd[1]: libpod-73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4.scope: Deactivated successfully.
Dec 05 10:06:27 np0005546420.localdomain podman[310769]: 2025-12-05 10:06:27.494547917 +0000 UTC m=+0.055908486 container died 73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1b4a004-602e-48a9-bb00-bddac320fcac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:06:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4-userdata-shm.mount: Deactivated successfully.
Dec 05 10:06:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-45aa8577cb59487083e02a0d6100e690d84a2f1ddb3ffb303e3b75856f0ea2c0-merged.mount: Deactivated successfully.
Dec 05 10:06:27 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e120 e120: 6 total, 6 up, 6 in
Dec 05 10:06:27 np0005546420.localdomain podman[310769]: 2025-12-05 10:06:27.548271575 +0000 UTC m=+0.109632094 container remove 73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1b4a004-602e-48a9-bb00-bddac320fcac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:06:27 np0005546420.localdomain systemd[1]: libpod-conmon-73161ab5271b9687b54bab7a9cdb57ec1affe4bd7b0f8e97461e1974bb0a6cb4.scope: Deactivated successfully.
Dec 05 10:06:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:27.590 262769 INFO neutron.agent.dhcp.agent [None req-3cccd3b4-d264-4edd-b539-ef92d49cede3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:06:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:27.844 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:06:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:27.852 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:06:28 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2dc1b4a004\x2d602e\x2d48a9\x2dbb00\x2dbddac320fcac.mount: Deactivated successfully.
Dec 05 10:06:28 np0005546420.localdomain ceph-mon[298353]: pgmap v172: 177 pgs: 177 active+clean; 225 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 147 KiB/s rd, 13 MiB/s wr, 203 op/s
Dec 05 10:06:28 np0005546420.localdomain ceph-mon[298353]: osdmap e120: 6 total, 6 up, 6 in
Dec 05 10:06:28 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e121 e121: 6 total, 6 up, 6 in
Dec 05 10:06:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:28.757 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e122 e122: 6 total, 6 up, 6 in
Dec 05 10:06:29 np0005546420.localdomain ceph-mon[298353]: osdmap e121: 6 total, 6 up, 6 in
Dec 05 10:06:29 np0005546420.localdomain ceph-mon[298353]: pgmap v175: 177 pgs: 177 active+clean; 225 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 141 KiB/s rd, 16 MiB/s wr, 198 op/s
Dec 05 10:06:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:29.734 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e123 e123: 6 total, 6 up, 6 in
Dec 05 10:06:30 np0005546420.localdomain ceph-mon[298353]: osdmap e122: 6 total, 6 up, 6 in
Dec 05 10:06:30 np0005546420.localdomain ceph-mon[298353]: osdmap e123: 6 total, 6 up, 6 in
Dec 05 10:06:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:30.927 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:31 np0005546420.localdomain ceph-mon[298353]: pgmap v178: 177 pgs: 177 active+clean; 161 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 103 KiB/s rd, 8.0 MiB/s wr, 143 op/s
Dec 05 10:06:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:06:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:06:32 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:32.900 262769 INFO neutron.agent.linux.ip_lib [None req-19d27192-16e2-4a8c-8338-61c54804e356 - - - - - -] Device tap0d6a9a87-ab cannot be used as it has no MAC address
Dec 05 10:06:32 np0005546420.localdomain podman[310796]: 2025-12-05 10:06:32.912729389 +0000 UTC m=+0.081613224 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:06:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:32.922 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:32 np0005546420.localdomain kernel: device tap0d6a9a87-ab entered promiscuous mode
Dec 05 10:06:32 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:32Z|00089|binding|INFO|Claiming lport 0d6a9a87-ab56-44d9-bdd4-78437fa8ca13 for this chassis.
Dec 05 10:06:32 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:32Z|00090|binding|INFO|0d6a9a87-ab56-44d9-bdd4-78437fa8ca13: Claiming unknown
Dec 05 10:06:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:32.930 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:32 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929192.9313] manager: (tap0d6a9a87-ab): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Dec 05 10:06:32 np0005546420.localdomain systemd-udevd[310839]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:06:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:32.948 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-311e3ec4-c82f-4ee8-8604-a265abcdb048', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-311e3ec4-c82f-4ee8-8604-a265abcdb048', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '923729c253424d61ae3e38f2b3b261a3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f525ef9f-bb54-4b7e-8e94-66f8114d6728, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=0d6a9a87-ab56-44d9-bdd4-78437fa8ca13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:06:32 np0005546420.localdomain podman[310796]: 2025-12-05 10:06:32.954404938 +0000 UTC m=+0.123288763 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:06:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:32.954 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 0d6a9a87-ab56-44d9-bdd4-78437fa8ca13 in datapath 311e3ec4-c82f-4ee8-8604-a265abcdb048 bound to our chassis
Dec 05 10:06:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:32.955 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 311e3ec4-c82f-4ee8-8604-a265abcdb048 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:06:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:32.957 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[18ea9248-d5ae-45ba-87ab-2f0a8d14b834]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:06:32 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap0d6a9a87-ab: No such device
Dec 05 10:06:32 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:32Z|00091|binding|INFO|Setting lport 0d6a9a87-ab56-44d9-bdd4-78437fa8ca13 ovn-installed in OVS
Dec 05 10:06:32 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:32Z|00092|binding|INFO|Setting lport 0d6a9a87-ab56-44d9-bdd4-78437fa8ca13 up in Southbound
Dec 05 10:06:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:32.962 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:32 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap0d6a9a87-ab: No such device
Dec 05 10:06:32 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap0d6a9a87-ab: No such device
Dec 05 10:06:32 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap0d6a9a87-ab: No such device
Dec 05 10:06:32 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:06:32 np0005546420.localdomain podman[310795]: 2025-12-05 10:06:32.976540187 +0000 UTC m=+0.148710763 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec 05 10:06:32 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap0d6a9a87-ab: No such device
Dec 05 10:06:32 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap0d6a9a87-ab: No such device
Dec 05 10:06:32 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap0d6a9a87-ab: No such device
Dec 05 10:06:32 np0005546420.localdomain podman[310795]: 2025-12-05 10:06:32.991564018 +0000 UTC m=+0.163734584 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Dec 05 10:06:32 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap0d6a9a87-ab: No such device
Dec 05 10:06:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:32.998 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:33 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:06:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:33.021 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:34 np0005546420.localdomain podman[310917]: 2025-12-05 10:06:34.076089817 +0000 UTC m=+0.090237109 container create 64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-311e3ec4-c82f-4ee8-8604-a265abcdb048, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:06:34 np0005546420.localdomain systemd[1]: Started libpod-conmon-64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8.scope.
Dec 05 10:06:34 np0005546420.localdomain podman[310917]: 2025-12-05 10:06:34.030731686 +0000 UTC m=+0.044879008 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:06:34 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:06:34 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a3409a3ba8c7fdcdfe80d4fbcfc14bd0f3f632cb265782dc5a245e18271dc2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:34 np0005546420.localdomain podman[310917]: 2025-12-05 10:06:34.158043211 +0000 UTC m=+0.172190503 container init 64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-311e3ec4-c82f-4ee8-8604-a265abcdb048, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:06:34 np0005546420.localdomain podman[310917]: 2025-12-05 10:06:34.170521844 +0000 UTC m=+0.184669136 container start 64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-311e3ec4-c82f-4ee8-8604-a265abcdb048, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:06:34 np0005546420.localdomain dnsmasq[310936]: started, version 2.85 cachesize 150
Dec 05 10:06:34 np0005546420.localdomain dnsmasq[310936]: DNS service limited to local subnets
Dec 05 10:06:34 np0005546420.localdomain dnsmasq[310936]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:06:34 np0005546420.localdomain dnsmasq[310936]: warning: no upstream servers configured
Dec 05 10:06:34 np0005546420.localdomain dnsmasq-dhcp[310936]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:06:34 np0005546420.localdomain dnsmasq[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/addn_hosts - 0 addresses
Dec 05 10:06:34 np0005546420.localdomain dnsmasq-dhcp[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/host
Dec 05 10:06:34 np0005546420.localdomain dnsmasq-dhcp[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/opts
Dec 05 10:06:34 np0005546420.localdomain ceph-mon[298353]: pgmap v179: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 5.7 MiB/s wr, 110 op/s
Dec 05 10:06:34 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:34.626 262769 INFO neutron.agent.dhcp.agent [None req-b9b23d9b-c696-4cb2-b994-ba34e68dabea - - - - - -] DHCP configuration for ports {'a2f152dd-9cdd-438c-aeff-7f3683c2ec57'} is completed
Dec 05 10:06:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:34.771 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:35 np0005546420.localdomain systemd[1]: tmp-crun.fHfgt0.mount: Deactivated successfully.
Dec 05 10:06:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 e124: 6 total, 6 up, 6 in
Dec 05 10:06:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:35.960 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:06:36 np0005546420.localdomain systemd[1]: tmp-crun.1Oan5r.mount: Deactivated successfully.
Dec 05 10:06:36 np0005546420.localdomain podman[310937]: 2025-12-05 10:06:36.51516914 +0000 UTC m=+0.090183357 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:06:36 np0005546420.localdomain ceph-mon[298353]: pgmap v180: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 4.9 MiB/s wr, 94 op/s
Dec 05 10:06:36 np0005546420.localdomain ceph-mon[298353]: osdmap e124: 6 total, 6 up, 6 in
Dec 05 10:06:36 np0005546420.localdomain podman[310937]: 2025-12-05 10:06:36.557392995 +0000 UTC m=+0.132407262 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 05 10:06:36 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:06:38 np0005546420.localdomain ceph-mon[298353]: pgmap v182: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 4.2 MiB/s wr, 82 op/s
Dec 05 10:06:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:39.818 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:40 np0005546420.localdomain ceph-mon[298353]: pgmap v183: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 3.7 MiB/s wr, 71 op/s
Dec 05 10:06:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:40.975 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:41 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:41.891 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:06:41Z, description=, device_id=7aface02-e3d8-4626-a753-0bd9b7a1d43f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a043400>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0430a0>], id=9d7ac1bf-ed6b-4e0f-bb13-08323e92ea4e, ip_allocation=immediate, mac_address=fa:16:3e:1c:d3:32, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:06:30Z, description=, dns_domain=, id=311e3ec4-c82f-4ee8-8604-a265abcdb048, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1206229969-network, port_security_enabled=True, project_id=923729c253424d61ae3e38f2b3b261a3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25912, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=919, status=ACTIVE, subnets=['739bea92-dcff-4571-829c-4a1daa084318'], tags=[], tenant_id=923729c253424d61ae3e38f2b3b261a3, updated_at=2025-12-05T10:06:31Z, vlan_transparent=None, network_id=311e3ec4-c82f-4ee8-8604-a265abcdb048, port_security_enabled=False, project_id=923729c253424d61ae3e38f2b3b261a3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=970, status=DOWN, tags=[], tenant_id=923729c253424d61ae3e38f2b3b261a3, updated_at=2025-12-05T10:06:41Z on network 311e3ec4-c82f-4ee8-8604-a265abcdb048
Dec 05 10:06:42 np0005546420.localdomain dnsmasq[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/addn_hosts - 1 addresses
Dec 05 10:06:42 np0005546420.localdomain dnsmasq-dhcp[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/host
Dec 05 10:06:42 np0005546420.localdomain dnsmasq-dhcp[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/opts
Dec 05 10:06:42 np0005546420.localdomain podman[310979]: 2025-12-05 10:06:42.10805368 +0000 UTC m=+0.062676044 container kill 64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-311e3ec4-c82f-4ee8-8604-a265abcdb048, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 10:06:42 np0005546420.localdomain ceph-mon[298353]: pgmap v184: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 511 B/s wr, 4 op/s
Dec 05 10:06:42 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:42.837 262769 INFO neutron.agent.dhcp.agent [None req-9b02a58d-0654-4347-afc1-286c0cc94297 - - - - - -] DHCP configuration for ports {'9d7ac1bf-ed6b-4e0f-bb13-08323e92ea4e'} is completed
Dec 05 10:06:43 np0005546420.localdomain ceph-mon[298353]: pgmap v185: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:06:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:06:44 np0005546420.localdomain systemd[1]: tmp-crun.iYZzCw.mount: Deactivated successfully.
Dec 05 10:06:44 np0005546420.localdomain podman[311001]: 2025-12-05 10:06:44.504486845 +0000 UTC m=+0.084378880 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:06:44 np0005546420.localdomain podman[311001]: 2025-12-05 10:06:44.514944625 +0000 UTC m=+0.094836650 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Dec 05 10:06:44 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:06:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:44.576 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:06:41Z, description=, device_id=7aface02-e3d8-4626-a753-0bd9b7a1d43f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0a4310>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a192910>], id=9d7ac1bf-ed6b-4e0f-bb13-08323e92ea4e, ip_allocation=immediate, mac_address=fa:16:3e:1c:d3:32, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:06:30Z, description=, dns_domain=, id=311e3ec4-c82f-4ee8-8604-a265abcdb048, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1206229969-network, port_security_enabled=True, project_id=923729c253424d61ae3e38f2b3b261a3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25912, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=919, status=ACTIVE, subnets=['739bea92-dcff-4571-829c-4a1daa084318'], tags=[], tenant_id=923729c253424d61ae3e38f2b3b261a3, updated_at=2025-12-05T10:06:31Z, vlan_transparent=None, network_id=311e3ec4-c82f-4ee8-8604-a265abcdb048, port_security_enabled=False, project_id=923729c253424d61ae3e38f2b3b261a3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=970, status=DOWN, tags=[], tenant_id=923729c253424d61ae3e38f2b3b261a3, updated_at=2025-12-05T10:06:41Z on network 311e3ec4-c82f-4ee8-8604-a265abcdb048
Dec 05 10:06:44 np0005546420.localdomain podman[311038]: 2025-12-05 10:06:44.811400179 +0000 UTC m=+0.067859322 container kill 64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-311e3ec4-c82f-4ee8-8604-a265abcdb048, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 10:06:44 np0005546420.localdomain dnsmasq[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/addn_hosts - 1 addresses
Dec 05 10:06:44 np0005546420.localdomain dnsmasq-dhcp[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/host
Dec 05 10:06:44 np0005546420.localdomain dnsmasq-dhcp[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/opts
Dec 05 10:06:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:44.860 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:45 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:45.215 262769 INFO neutron.agent.dhcp.agent [None req-44fc2583-d506-45fb-8cf1-c2d010324c2b - - - - - -] DHCP configuration for ports {'9d7ac1bf-ed6b-4e0f-bb13-08323e92ea4e'} is completed
Dec 05 10:06:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:45.977 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:46Z|00093|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0
Dec 05 10:06:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:46Z|00094|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0
Dec 05 10:06:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:46Z|00095|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0
Dec 05 10:06:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:46.482 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:46.498 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:46.503 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:46.512 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:46.516 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:46.523 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:46 np0005546420.localdomain ceph-mon[298353]: pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:06:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:06:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:06:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:06:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:06:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:06:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18728 "" "Go-http-client/1.1"
Dec 05 10:06:47 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:47.546 262769 INFO neutron.agent.linux.ip_lib [None req-977fb36d-27a0-4710-ab52-eafb88132bfe - - - - - -] Device tap97842860-b4 cannot be used as it has no MAC address
Dec 05 10:06:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:47.573 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:47 np0005546420.localdomain kernel: device tap97842860-b4 entered promiscuous mode
Dec 05 10:06:47 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929207.5834] manager: (tap97842860-b4): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Dec 05 10:06:47 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:47Z|00096|binding|INFO|Claiming lport 97842860-b4c0-47d4-8dcb-3c189db44e9d for this chassis.
Dec 05 10:06:47 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:47Z|00097|binding|INFO|97842860-b4c0-47d4-8dcb-3c189db44e9d: Claiming unknown
Dec 05 10:06:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:47.586 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:47 np0005546420.localdomain systemd-udevd[311070]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:06:47 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:47.601 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d32e87601df423a8cce9cd945df205b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d7cd891-78aa-40df-ae51-5e677706c6d7, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=97842860-b4c0-47d4-8dcb-3c189db44e9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:06:47 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:47.603 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 97842860-b4c0-47d4-8dcb-3c189db44e9d in datapath fafba9db-fcb2-47e3-98d9-e1d81b19f8b5 bound to our chassis
Dec 05 10:06:47 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:47.608 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fafba9db-fcb2-47e3-98d9-e1d81b19f8b5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:06:47 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:47.610 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[4830806d-e3a0-46ad-9754-055d56e994c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:06:47 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap97842860-b4: No such device
Dec 05 10:06:47 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:47Z|00098|binding|INFO|Setting lport 97842860-b4c0-47d4-8dcb-3c189db44e9d ovn-installed in OVS
Dec 05 10:06:47 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:47Z|00099|binding|INFO|Setting lport 97842860-b4c0-47d4-8dcb-3c189db44e9d up in Southbound
Dec 05 10:06:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:47.620 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:47.622 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:47 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap97842860-b4: No such device
Dec 05 10:06:47 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap97842860-b4: No such device
Dec 05 10:06:47 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap97842860-b4: No such device
Dec 05 10:06:47 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap97842860-b4: No such device
Dec 05 10:06:47 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap97842860-b4: No such device
Dec 05 10:06:47 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap97842860-b4: No such device
Dec 05 10:06:47 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap97842860-b4: No such device
Dec 05 10:06:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:47.657 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:47.688 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:48.083 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:48.197 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:48.324 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:48 np0005546420.localdomain ceph-mon[298353]: pgmap v187: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:06:48 np0005546420.localdomain podman[311142]: 2025-12-05 10:06:48.601239049 +0000 UTC m=+0.113810333 container create 660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:06:48 np0005546420.localdomain systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 10:06:48 np0005546420.localdomain podman[311142]: 2025-12-05 10:06:48.534009276 +0000 UTC m=+0.046580590 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:06:48 np0005546420.localdomain systemd[1]: Started libpod-conmon-660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701.scope.
Dec 05 10:06:48 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:06:48 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/814eacc47442bf79df949ae0f43a280e1759df3060e8cc9901e5ffbe78108511/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:48 np0005546420.localdomain podman[311142]: 2025-12-05 10:06:48.674504376 +0000 UTC m=+0.187075660 container init 660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:06:48 np0005546420.localdomain podman[311142]: 2025-12-05 10:06:48.683881624 +0000 UTC m=+0.196452908 container start 660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:06:48 np0005546420.localdomain dnsmasq[311160]: started, version 2.85 cachesize 150
Dec 05 10:06:48 np0005546420.localdomain dnsmasq[311160]: DNS service limited to local subnets
Dec 05 10:06:48 np0005546420.localdomain dnsmasq[311160]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:06:48 np0005546420.localdomain dnsmasq[311160]: warning: no upstream servers configured
Dec 05 10:06:48 np0005546420.localdomain dnsmasq-dhcp[311160]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:06:48 np0005546420.localdomain dnsmasq[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/addn_hosts - 0 addresses
Dec 05 10:06:48 np0005546420.localdomain dnsmasq-dhcp[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/host
Dec 05 10:06:48 np0005546420.localdomain dnsmasq-dhcp[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/opts
Dec 05 10:06:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:48.826 262769 INFO neutron.agent.dhcp.agent [None req-02bbb474-188c-40ae-a4dc-78fd66338737 - - - - - -] DHCP configuration for ports {'1f0bb353-d62e-4843-92f3-d27615f7c7c3'} is completed
Dec 05 10:06:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:06:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:06:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:06:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:06:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:06:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:06:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:06:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:06:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:06:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:06:49 np0005546420.localdomain dnsmasq[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/addn_hosts - 0 addresses
Dec 05 10:06:49 np0005546420.localdomain dnsmasq-dhcp[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/host
Dec 05 10:06:49 np0005546420.localdomain podman[311180]: 2025-12-05 10:06:49.219620929 +0000 UTC m=+0.060154857 container kill 64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-311e3ec4-c82f-4ee8-8604-a265abcdb048, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:06:49 np0005546420.localdomain dnsmasq-dhcp[310936]: read /var/lib/neutron/dhcp/311e3ec4-c82f-4ee8-8604-a265abcdb048/opts
Dec 05 10:06:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:49Z|00100|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0
Dec 05 10:06:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:49Z|00101|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0
Dec 05 10:06:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:49Z|00102|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0
Dec 05 10:06:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:49.310 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:49.313 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:49.330 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:49.715 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:49 np0005546420.localdomain kernel: device tap0d6a9a87-ab left promiscuous mode
Dec 05 10:06:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:49Z|00103|binding|INFO|Releasing lport 0d6a9a87-ab56-44d9-bdd4-78437fa8ca13 from this chassis (sb_readonly=0)
Dec 05 10:06:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:49Z|00104|binding|INFO|Setting lport 0d6a9a87-ab56-44d9-bdd4-78437fa8ca13 down in Southbound
Dec 05 10:06:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:49.746 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:49.862 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:50.042 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-311e3ec4-c82f-4ee8-8604-a265abcdb048', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-311e3ec4-c82f-4ee8-8604-a265abcdb048', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '923729c253424d61ae3e38f2b3b261a3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f525ef9f-bb54-4b7e-8e94-66f8114d6728, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=0d6a9a87-ab56-44d9-bdd4-78437fa8ca13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:06:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:50.044 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 0d6a9a87-ab56-44d9-bdd4-78437fa8ca13 in datapath 311e3ec4-c82f-4ee8-8604-a265abcdb048 unbound from our chassis
Dec 05 10:06:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:50.047 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 311e3ec4-c82f-4ee8-8604-a265abcdb048, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:06:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:50.048 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[40e3b66b-c153-4c76-95bf-ab3ea0c44c76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:06:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:50 np0005546420.localdomain ceph-mon[298353]: pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:06:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:50.980 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:06:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:06:51 np0005546420.localdomain podman[311202]: 2025-12-05 10:06:51.524629999 +0000 UTC m=+0.096276675 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 05 10:06:51 np0005546420.localdomain podman[311202]: 2025-12-05 10:06:51.530233061 +0000 UTC m=+0.101879787 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 05 10:06:51 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:06:51 np0005546420.localdomain ceph-mon[298353]: pgmap v189: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:06:51 np0005546420.localdomain podman[311201]: 2025-12-05 10:06:51.612109263 +0000 UTC m=+0.185351697 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:06:51 np0005546420.localdomain podman[311201]: 2025-12-05 10:06:51.627393671 +0000 UTC m=+0.200636155 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:06:51 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:06:52 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:52.646 262769 INFO neutron.agent.linux.ip_lib [None req-61dc8231-ff72-45ea-986f-f9ce531aa839 - - - - - -] Device tapdf1abeba-ca cannot be used as it has no MAC address
Dec 05 10:06:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:52.669 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:52 np0005546420.localdomain kernel: device tapdf1abeba-ca entered promiscuous mode
Dec 05 10:06:52 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929212.6775] manager: (tapdf1abeba-ca): new Generic device (/org/freedesktop/NetworkManager/Devices/23)
Dec 05 10:06:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:52Z|00105|binding|INFO|Claiming lport df1abeba-ca5b-4254-a64d-89589bc431ce for this chassis.
Dec 05 10:06:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:52Z|00106|binding|INFO|df1abeba-ca5b-4254-a64d-89589bc431ce: Claiming unknown
Dec 05 10:06:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:52.677 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:52 np0005546420.localdomain systemd-udevd[311253]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:06:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:52.688 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d4b2a89-de1f-493b-9845-878f486cd053, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=df1abeba-ca5b-4254-a64d-89589bc431ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:06:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:52Z|00107|binding|INFO|Setting lport df1abeba-ca5b-4254-a64d-89589bc431ce ovn-installed in OVS
Dec 05 10:06:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:52Z|00108|binding|INFO|Setting lport df1abeba-ca5b-4254-a64d-89589bc431ce up in Southbound
Dec 05 10:06:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:52.689 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:52.691 159503 INFO neutron.agent.ovn.metadata.agent [-] Port df1abeba-ca5b-4254-a64d-89589bc431ce in datapath 76a1e8e9-cad5-4783-94a8-2dc805c8e0f4 bound to our chassis
Dec 05 10:06:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:52.692 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:52.692 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 76a1e8e9-cad5-4783-94a8-2dc805c8e0f4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:06:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:52.693 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[92887d40-6eaa-48cb-85a3-9f717d79f3d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:06:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapdf1abeba-ca: No such device
Dec 05 10:06:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapdf1abeba-ca: No such device
Dec 05 10:06:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapdf1abeba-ca: No such device
Dec 05 10:06:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:52.716 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapdf1abeba-ca: No such device
Dec 05 10:06:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapdf1abeba-ca: No such device
Dec 05 10:06:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapdf1abeba-ca: No such device
Dec 05 10:06:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapdf1abeba-ca: No such device
Dec 05 10:06:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapdf1abeba-ca: No such device
Dec 05 10:06:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:52.752 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:52.779 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:53 np0005546420.localdomain podman[311323]: 2025-12-05 10:06:53.685247469 +0000 UTC m=+0.086135454 container create 3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:06:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:06:53 np0005546420.localdomain systemd[1]: Started libpod-conmon-3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff.scope.
Dec 05 10:06:53 np0005546420.localdomain podman[311323]: 2025-12-05 10:06:53.64291799 +0000 UTC m=+0.043805965 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:06:53 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:06:53 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6cfae2a6a52390048a66964ff9bbf99de2a4033d9e5789e84b96bc4fbe872d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:06:53 np0005546420.localdomain podman[311323]: 2025-12-05 10:06:53.76090648 +0000 UTC m=+0.161794455 container init 3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:06:53 np0005546420.localdomain podman[311323]: 2025-12-05 10:06:53.770231635 +0000 UTC m=+0.171119610 container start 3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:06:53 np0005546420.localdomain dnsmasq[311349]: started, version 2.85 cachesize 150
Dec 05 10:06:53 np0005546420.localdomain dnsmasq[311349]: DNS service limited to local subnets
Dec 05 10:06:53 np0005546420.localdomain dnsmasq[311349]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:06:53 np0005546420.localdomain dnsmasq[311349]: warning: no upstream servers configured
Dec 05 10:06:53 np0005546420.localdomain dnsmasq-dhcp[311349]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:06:53 np0005546420.localdomain dnsmasq[311349]: read /var/lib/neutron/dhcp/76a1e8e9-cad5-4783-94a8-2dc805c8e0f4/addn_hosts - 0 addresses
Dec 05 10:06:53 np0005546420.localdomain dnsmasq-dhcp[311349]: read /var/lib/neutron/dhcp/76a1e8e9-cad5-4783-94a8-2dc805c8e0f4/host
Dec 05 10:06:53 np0005546420.localdomain dnsmasq-dhcp[311349]: read /var/lib/neutron/dhcp/76a1e8e9-cad5-4783-94a8-2dc805c8e0f4/opts
Dec 05 10:06:53 np0005546420.localdomain podman[311337]: 2025-12-05 10:06:53.835952282 +0000 UTC m=+0.106220620 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 05 10:06:53 np0005546420.localdomain podman[311337]: 2025-12-05 10:06:53.850449947 +0000 UTC m=+0.120718275 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 05 10:06:53 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:06:53 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:53.919 262769 INFO neutron.agent.dhcp.agent [None req-5bb665ca-4c24-462e-8f17-fe1149037ed4 - - - - - -] DHCP configuration for ports {'286e4a19-d9ea-4023-af0c-189ab6049390'} is completed
Dec 05 10:06:54 np0005546420.localdomain ceph-mon[298353]: pgmap v190: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:06:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:54Z|00109|binding|INFO|Removing iface tapdf1abeba-ca ovn-installed in OVS
Dec 05 10:06:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:06:54Z|00110|binding|INFO|Removing lport df1abeba-ca5b-4254-a64d-89589bc431ce ovn-installed in OVS
Dec 05 10:06:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:54.456 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port cf9f0343-b60d-4fbc-ba52-c4fe1f144557 with type ""
Dec 05 10:06:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:54.457 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d4b2a89-de1f-493b-9845-878f486cd053, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=df1abeba-ca5b-4254-a64d-89589bc431ce) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:06:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:54.458 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:54.460 159503 INFO neutron.agent.ovn.metadata.agent [-] Port df1abeba-ca5b-4254-a64d-89589bc431ce in datapath 76a1e8e9-cad5-4783-94a8-2dc805c8e0f4 unbound from our chassis
Dec 05 10:06:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:54.462 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76a1e8e9-cad5-4783-94a8-2dc805c8e0f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:06:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:06:54.463 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a1576cb2-6e2b-4176-b3cd-ce91b4edf93e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:06:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:54.465 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:54.469 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:54 np0005546420.localdomain kernel: device tapdf1abeba-ca left promiscuous mode
Dec 05 10:06:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:54.482 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:54.890 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:55 np0005546420.localdomain dnsmasq[311349]: read /var/lib/neutron/dhcp/76a1e8e9-cad5-4783-94a8-2dc805c8e0f4/addn_hosts - 0 addresses
Dec 05 10:06:55 np0005546420.localdomain podman[311376]: 2025-12-05 10:06:55.034854141 +0000 UTC m=+0.059552999 container kill 3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:06:55 np0005546420.localdomain dnsmasq-dhcp[311349]: read /var/lib/neutron/dhcp/76a1e8e9-cad5-4783-94a8-2dc805c8e0f4/host
Dec 05 10:06:55 np0005546420.localdomain dnsmasq-dhcp[311349]: read /var/lib/neutron/dhcp/76a1e8e9-cad5-4783-94a8-2dc805c8e0f4/opts
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent [None req-fa93573f-e56f-47d9-8e96-7904b9dd7c5f - - - - - -] Unable to reload_allocations dhcp for 76a1e8e9-cad5-4783-94a8-2dc805c8e0f4.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapdf1abeba-ca not found in namespace qdhcp-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4.
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapdf1abeba-ca not found in namespace qdhcp-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4.
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.059 262769 ERROR neutron.agent.dhcp.agent 
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.066 262769 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Dec 05 10:06:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:06:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:55.207 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.366 262769 INFO neutron.agent.dhcp.agent [None req-783d0007-38c2-437b-925d-ce74c82634ab - - - - - -] All active networks have been fetched through RPC.
Dec 05 10:06:55 np0005546420.localdomain dnsmasq[311349]: exiting on receipt of SIGTERM
Dec 05 10:06:55 np0005546420.localdomain podman[311405]: 2025-12-05 10:06:55.538761809 +0000 UTC m=+0.047263631 container kill 3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:06:55 np0005546420.localdomain systemd[1]: libpod-3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff.scope: Deactivated successfully.
Dec 05 10:06:55 np0005546420.localdomain podman[311419]: 2025-12-05 10:06:55.602662378 +0000 UTC m=+0.046247489 container died 3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:06:55 np0005546420.localdomain podman[311419]: 2025-12-05 10:06:55.647822413 +0000 UTC m=+0.091407484 container remove 3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76a1e8e9-cad5-4783-94a8-2dc805c8e0f4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:06:55 np0005546420.localdomain systemd[1]: libpod-conmon-3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff.scope: Deactivated successfully.
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.674 262769 INFO neutron.agent.dhcp.agent [-] Starting network a9e43747-3c15-4ac1-bbfd-681ea90cb420 dhcp configuration
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.675 262769 INFO neutron.agent.dhcp.agent [-] Finished network a9e43747-3c15-4ac1-bbfd-681ea90cb420 dhcp configuration
Dec 05 10:06:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:55.676 262769 INFO neutron.agent.dhcp.agent [None req-e72f08a9-71d6-4e3c-a621-891373a07277 - - - - - -] Synchronizing state complete
Dec 05 10:06:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:56.021 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:06:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1e6cfae2a6a52390048a66964ff9bbf99de2a4033d9e5789e84b96bc4fbe872d-merged.mount: Deactivated successfully.
Dec 05 10:06:56 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a13d86236bba501f647baef3b1a0edde4621921fd015c959c94c96f49d320ff-userdata-shm.mount: Deactivated successfully.
Dec 05 10:06:56 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d76a1e8e9\x2dcad5\x2d4783\x2d94a8\x2d2dc805c8e0f4.mount: Deactivated successfully.
Dec 05 10:06:56 np0005546420.localdomain ceph-mon[298353]: pgmap v191: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:06:57 np0005546420.localdomain dnsmasq[310936]: exiting on receipt of SIGTERM
Dec 05 10:06:57 np0005546420.localdomain podman[311461]: 2025-12-05 10:06:57.00605387 +0000 UTC m=+0.060922519 container kill 64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-311e3ec4-c82f-4ee8-8604-a265abcdb048, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:06:57 np0005546420.localdomain systemd[1]: libpod-64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8.scope: Deactivated successfully.
Dec 05 10:06:57 np0005546420.localdomain podman[311474]: 2025-12-05 10:06:57.075020056 +0000 UTC m=+0.056247637 container died 64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-311e3ec4-c82f-4ee8-8604-a265abcdb048, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:06:57 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8-userdata-shm.mount: Deactivated successfully.
Dec 05 10:06:57 np0005546420.localdomain podman[311474]: 2025-12-05 10:06:57.160898581 +0000 UTC m=+0.142126112 container cleanup 64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-311e3ec4-c82f-4ee8-8604-a265abcdb048, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:06:57 np0005546420.localdomain systemd[1]: libpod-conmon-64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8.scope: Deactivated successfully.
Dec 05 10:06:57 np0005546420.localdomain podman[311476]: 2025-12-05 10:06:57.186116234 +0000 UTC m=+0.158365949 container remove 64b74163ead87a08b488a56239b56ca6c709b23d74f062b54c66124a2c1768a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-311e3ec4-c82f-4ee8-8604-a265abcdb048, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:06:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:57.452 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:06:56Z, description=, device_id=fcdf2a70-8422-4b3b-bbd3-9d868b216415, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a07ae20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a192910>], id=1989b559-7e68-4571-b899-89a0aca2d682, ip_allocation=immediate, mac_address=fa:16:3e:34:dc:69, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:06:43Z, description=, dns_domain=, id=fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-2061761889-network, port_security_enabled=True, project_id=1d32e87601df423a8cce9cd945df205b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42539, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=981, status=ACTIVE, subnets=['bd3e571a-6eee-47ef-93dc-f2573a91f00a'], tags=[], tenant_id=1d32e87601df423a8cce9cd945df205b, updated_at=2025-12-05T10:06:45Z, vlan_transparent=None, network_id=fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, port_security_enabled=False, project_id=1d32e87601df423a8cce9cd945df205b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1075, status=DOWN, tags=[], tenant_id=1d32e87601df423a8cce9cd945df205b, updated_at=2025-12-05T10:06:56Z on network fafba9db-fcb2-47e3-98d9-e1d81b19f8b5
Dec 05 10:06:57 np0005546420.localdomain sudo[311504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:06:57 np0005546420.localdomain sudo[311504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:06:57 np0005546420.localdomain sudo[311504]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:57 np0005546420.localdomain sudo[311522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 10:06:57 np0005546420.localdomain sudo[311522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:06:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:57.652 262769 INFO neutron.agent.dhcp.agent [None req-e424a895-d164-4b1e-9acc-6f0657bb2ad7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:06:57 np0005546420.localdomain dnsmasq[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/addn_hosts - 1 addresses
Dec 05 10:06:57 np0005546420.localdomain dnsmasq-dhcp[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/host
Dec 05 10:06:57 np0005546420.localdomain podman[311555]: 2025-12-05 10:06:57.664813478 +0000 UTC m=+0.050397597 container kill 660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:06:57 np0005546420.localdomain dnsmasq-dhcp[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/opts
Dec 05 10:06:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:57.680 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:06:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:57.944 262769 INFO neutron.agent.dhcp.agent [None req-44fd56c1-b4eb-4e01-be35-ca8275dacf8f - - - - - -] DHCP configuration for ports {'1989b559-7e68-4571-b899-89a0aca2d682'} is completed
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:57.980490) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929217980546, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1812, "num_deletes": 263, "total_data_size": 2373070, "memory_usage": 2414760, "flush_reason": "Manual Compaction"}
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929217991722, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1549498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20914, "largest_seqno": 22721, "table_properties": {"data_size": 1542433, "index_size": 4087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15904, "raw_average_key_size": 21, "raw_value_size": 1527985, "raw_average_value_size": 2050, "num_data_blocks": 178, "num_entries": 745, "num_filter_entries": 745, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929110, "oldest_key_time": 1764929110, "file_creation_time": 1764929217, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 11277 microseconds, and 4416 cpu microseconds.
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:57.991767) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1549498 bytes OK
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:57.991786) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:57.993853) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:57.993871) EVENT_LOG_v1 {"time_micros": 1764929217993865, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:57.993890) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2364623, prev total WAL file size 2381003, number of live WAL files 2.
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:57.994578) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end)
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1513KB)], [36(17MB)]
Dec 05 10:06:57 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929217994607, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 19379154, "oldest_snapshot_seqno": -1}
Dec 05 10:06:58 np0005546420.localdomain sudo[311522]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 12080 keys, 16507774 bytes, temperature: kUnknown
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929218094115, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 16507774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16440375, "index_size": 36130, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30213, "raw_key_size": 324581, "raw_average_key_size": 26, "raw_value_size": 16236029, "raw_average_value_size": 1344, "num_data_blocks": 1365, "num_entries": 12080, "num_filter_entries": 12080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929217, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:58.094437) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 16507774 bytes
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:58.096564) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.6 rd, 165.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 17.0 +0.0 blob) out(15.7 +0.0 blob), read-write-amplify(23.2) write-amplify(10.7) OK, records in: 12619, records dropped: 539 output_compression: NoCompression
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:58.096592) EVENT_LOG_v1 {"time_micros": 1764929218096580, "job": 20, "event": "compaction_finished", "compaction_time_micros": 99608, "compaction_time_cpu_micros": 30596, "output_level": 6, "num_output_files": 1, "total_output_size": 16507774, "num_input_records": 12619, "num_output_records": 12080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929218097055, "job": 20, "event": "table_file_deletion", "file_number": 38}
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929218099645, "job": 20, "event": "table_file_deletion", "file_number": 36}
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:57.994531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:58.099696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:58.099702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:58.099704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:58.099706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:06:58.099708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:06:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d6a3409a3ba8c7fdcdfe80d4fbcfc14bd0f3f632cb265782dc5a245e18271dc2-merged.mount: Deactivated successfully.
Dec 05 10:06:58 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d311e3ec4\x2dc82f\x2d4ee8\x2d8604\x2da265abcdb048.mount: Deactivated successfully.
Dec 05 10:06:58 np0005546420.localdomain sudo[311597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:06:58 np0005546420.localdomain sudo[311597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:06:58 np0005546420.localdomain sudo[311597]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:58 np0005546420.localdomain sudo[311615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:06:58 np0005546420.localdomain sudo[311615]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:06:58 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:06:58.295 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:06:58 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:06:58.504 2 INFO neutron.agent.securitygroups_rpc [None req-d5329bb2-7f39-47e8-83b6-2e3a7f24ca0c 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:06:58 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:06:58 np0005546420.localdomain sudo[311615]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:59 np0005546420.localdomain sudo[311666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:06:59 np0005546420.localdomain sudo[311666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:06:59 np0005546420.localdomain sudo[311666]: pam_unix(sudo:session): session closed for user root
Dec 05 10:06:59 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:06:59 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:06:59 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:06:59 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:06:59 np0005546420.localdomain ceph-mon[298353]: pgmap v193: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:06:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:06:59.895 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:01.049 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:01 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:07:01 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:01.898 2 INFO neutron.agent.securitygroups_rpc [None req-9e30647c-bb8b-4e54-8626-2981d1fced38 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:01 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:01.972 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:06:56Z, description=, device_id=fcdf2a70-8422-4b3b-bbd3-9d868b216415, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a00f3a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a00fac0>], id=1989b559-7e68-4571-b899-89a0aca2d682, ip_allocation=immediate, mac_address=fa:16:3e:34:dc:69, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:06:43Z, description=, dns_domain=, id=fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-2061761889-network, port_security_enabled=True, project_id=1d32e87601df423a8cce9cd945df205b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42539, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=981, status=ACTIVE, subnets=['bd3e571a-6eee-47ef-93dc-f2573a91f00a'], tags=[], tenant_id=1d32e87601df423a8cce9cd945df205b, updated_at=2025-12-05T10:06:45Z, vlan_transparent=None, network_id=fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, port_security_enabled=False, project_id=1d32e87601df423a8cce9cd945df205b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1075, status=DOWN, tags=[], tenant_id=1d32e87601df423a8cce9cd945df205b, updated_at=2025-12-05T10:06:56Z on network fafba9db-fcb2-47e3-98d9-e1d81b19f8b5
Dec 05 10:07:02 np0005546420.localdomain dnsmasq[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/addn_hosts - 1 addresses
Dec 05 10:07:02 np0005546420.localdomain dnsmasq-dhcp[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/host
Dec 05 10:07:02 np0005546420.localdomain podman[311702]: 2025-12-05 10:07:02.217087607 +0000 UTC m=+0.063024424 container kill 660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:07:02 np0005546420.localdomain dnsmasq-dhcp[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/opts
Dec 05 10:07:02 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:02.532 262769 INFO neutron.agent.dhcp.agent [None req-dfbe2b64-27af-4fc2-a044-0935b9313881 - - - - - -] DHCP configuration for ports {'1989b559-7e68-4571-b899-89a0aca2d682'} is completed
Dec 05 10:07:02 np0005546420.localdomain ceph-mon[298353]: pgmap v194: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:07:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:07:03 np0005546420.localdomain podman[311723]: 2025-12-05 10:07:03.516439826 +0000 UTC m=+0.083670988 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:07:03 np0005546420.localdomain podman[311722]: 2025-12-05 10:07:03.579191912 +0000 UTC m=+0.146492116 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public)
Dec 05 10:07:03 np0005546420.localdomain ceph-mon[298353]: pgmap v195: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3307567201' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:07:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3307567201' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:07:03 np0005546420.localdomain podman[311723]: 2025-12-05 10:07:03.606570541 +0000 UTC m=+0.173801723 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:07:03 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:07:03 np0005546420.localdomain podman[311722]: 2025-12-05 10:07:03.620687345 +0000 UTC m=+0.187987559 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 10:07:03 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:07:04 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:04.078 2 INFO neutron.agent.securitygroups_rpc [None req-cd05dfd1-15ee-449d-9a9d-b36249599776 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:04.126 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:07:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:04.126 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:07:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:04.126 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:07:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:04.925 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:05 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:05.715 2 INFO neutron.agent.securitygroups_rpc [None req-bc6b99ba-244a-4145-895a-01784236c368 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:05 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:05Z|00111|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0
Dec 05 10:07:05 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:05Z|00112|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0
Dec 05 10:07:05 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:05Z|00113|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0
Dec 05 10:07:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:05.956 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:05.968 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:05.973 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:05.988 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:05.993 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:06.049 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:06.073 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:06.507 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:07:06 np0005546420.localdomain ceph-mon[298353]: pgmap v196: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:06.925 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:06.928 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:07.371 2 INFO neutron.agent.securitygroups_rpc [None req-996755b2-015b-4c0b-97a6-4ea5ff6a5f09 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:07:07 np0005546420.localdomain podman[311768]: 2025-12-05 10:07:07.502150314 +0000 UTC m=+0.077829239 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Dec 05 10:07:07 np0005546420.localdomain podman[311768]: 2025-12-05 10:07:07.550401854 +0000 UTC m=+0.126080729 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:07:07 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:07:07 np0005546420.localdomain ceph-mon[298353]: pgmap v197: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:07.758 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:08 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:08.626 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:07:09 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:09.479 2 INFO neutron.agent.securitygroups_rpc [None req-2b543872-dbbb-45d0-a3c9-b80170d954e4 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:09.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:07:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:09.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:07:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:09.964 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:10 np0005546420.localdomain ceph-mon[298353]: pgmap v198: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:11.094 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:11 np0005546420.localdomain ceph-mon[298353]: pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:11.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:07:12 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:12Z|00114|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0
Dec 05 10:07:12 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:12Z|00115|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0
Dec 05 10:07:12 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:12Z|00116|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0
Dec 05 10:07:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:12.085 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:12.102 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:12.105 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:12 np0005546420.localdomain dnsmasq[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/addn_hosts - 0 addresses
Dec 05 10:07:12 np0005546420.localdomain podman[311811]: 2025-12-05 10:07:12.228600654 +0000 UTC m=+0.046450115 container kill 660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:07:12 np0005546420.localdomain dnsmasq-dhcp[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/host
Dec 05 10:07:12 np0005546420.localdomain dnsmasq-dhcp[311160]: read /var/lib/neutron/dhcp/fafba9db-fcb2-47e3-98d9-e1d81b19f8b5/opts
Dec 05 10:07:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:12.401 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:12 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:12Z|00117|binding|INFO|Releasing lport 97842860-b4c0-47d4-8dcb-3c189db44e9d from this chassis (sb_readonly=0)
Dec 05 10:07:12 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:12Z|00118|binding|INFO|Setting lport 97842860-b4c0-47d4-8dcb-3c189db44e9d down in Southbound
Dec 05 10:07:12 np0005546420.localdomain kernel: device tap97842860-b4 left promiscuous mode
Dec 05 10:07:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:12.412 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1d32e87601df423a8cce9cd945df205b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d7cd891-78aa-40df-ae51-5e677706c6d7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=97842860-b4c0-47d4-8dcb-3c189db44e9d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:12.413 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 97842860-b4c0-47d4-8dcb-3c189db44e9d in datapath fafba9db-fcb2-47e3-98d9-e1d81b19f8b5 unbound from our chassis
Dec 05 10:07:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:12.414 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:07:12 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:12.415 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cc3b47-92f5-4743-9bb9-4ba21d6f611a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:07:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:12.422 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:12.866 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:07:12 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:12.921 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:07:13 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:13.377 2 INFO neutron.agent.securitygroups_rpc [None req-93bbc332-f3c5-4877-b9fb-2efd8bb0321e 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:13.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:07:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:13.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:07:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:13.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:07:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:14.191 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:07:14 np0005546420.localdomain ceph-mon[298353]: pgmap v200: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:14.990 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:07:15 np0005546420.localdomain podman[311832]: 2025-12-05 10:07:15.515604009 +0000 UTC m=+0.090402095 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 05 10:07:15 np0005546420.localdomain podman[311832]: 2025-12-05 10:07:15.531450214 +0000 UTC m=+0.106248290 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:07:15 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:07:15 np0005546420.localdomain dnsmasq[311160]: exiting on receipt of SIGTERM
Dec 05 10:07:15 np0005546420.localdomain podman[311868]: 2025-12-05 10:07:15.90486035 +0000 UTC m=+0.060720034 container kill 660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:07:15 np0005546420.localdomain systemd[1]: libpod-660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701.scope: Deactivated successfully.
Dec 05 10:07:15 np0005546420.localdomain podman[311880]: 2025-12-05 10:07:15.955181603 +0000 UTC m=+0.039350379 container died 660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 10:07:15 np0005546420.localdomain systemd[1]: tmp-crun.jYGljH.mount: Deactivated successfully.
Dec 05 10:07:16 np0005546420.localdomain podman[311880]: 2025-12-05 10:07:16.045392751 +0000 UTC m=+0.129561497 container cleanup 660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:07:16 np0005546420.localdomain systemd[1]: libpod-conmon-660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701.scope: Deactivated successfully.
Dec 05 10:07:16 np0005546420.localdomain podman[311887]: 2025-12-05 10:07:16.068108847 +0000 UTC m=+0.139804390 container remove 660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fafba9db-fcb2-47e3-98d9-e1d81b19f8b5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:07:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:16.098 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:16.104 262769 INFO neutron.agent.dhcp.agent [None req-2351a17d-dd88-4dd0-b27b-524f6cac6ec0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:07:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:16.138 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:07:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-814eacc47442bf79df949ae0f43a280e1759df3060e8cc9901e5ffbe78108511-merged.mount: Deactivated successfully.
Dec 05 10:07:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-660e03ce40e62de5e235727700326c0ee7a3122b8cf1fd19ecfff4db449d2701-userdata-shm.mount: Deactivated successfully.
Dec 05 10:07:16 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2dfafba9db\x2dfcb2\x2d47e3\x2d98d9\x2de1d81b19f8b5.mount: Deactivated successfully.
Dec 05 10:07:16 np0005546420.localdomain ceph-mon[298353]: pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:16 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:16.738 2 INFO neutron.agent.securitygroups_rpc [None req-3ef0a6b0-ef4d-43c6-a568-29535e1c7f80 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:16.776 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:16.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:07:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:07:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:07:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:07:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:07:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:07:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18254 "" "Go-http-client/1.1"
Dec 05 10:07:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:17.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:07:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:18.142 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:07:18 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:18.161 2 INFO neutron.agent.securitygroups_rpc [None req-404e8085-b656-45b6-bed4-f502f810c658 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:18 np0005546420.localdomain ceph-mon[298353]: pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/640022840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:07:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:07:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:07:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:07:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:07:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:07:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:07:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:07:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:07:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:07:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:07:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1000473937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:07:19 np0005546420.localdomain ceph-mon[298353]: pgmap v203: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:19.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:07:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:19.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:07:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:19.904 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:07:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:19.905 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:07:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:19.905 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:07:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:19.905 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:07:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:19.906 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:07:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:20.029 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:07:20 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3782447908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:07:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:20.398 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:07:20 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:20.523 2 INFO neutron.agent.securitygroups_rpc [None req-3620aa62-41f0-4b22-a239-903391f500d0 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:20.625 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:07:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:20.627 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11703MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:07:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:20.628 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:07:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:20.629 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:07:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3782447908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:07:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:20.702 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:07:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:20.703 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:07:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:20.722 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.126 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:07:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2502534238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.191 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.197 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.222 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.224 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.225 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:07:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:21.605 262769 INFO neutron.agent.linux.ip_lib [None req-46ecf760-be18-4338-933c-7562302a9e2d - - - - - -] Device tap105cfa08-cf cannot be used as it has no MAC address
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.630 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:21 np0005546420.localdomain kernel: device tap105cfa08-cf entered promiscuous mode
Dec 05 10:07:21 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929241.6403] manager: (tap105cfa08-cf): new Generic device (/org/freedesktop/NetworkManager/Devices/24)
Dec 05 10:07:21 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:21Z|00119|binding|INFO|Claiming lport 105cfa08-cfd2-4ae4-9b3e-a602cdf9d59b for this chassis.
Dec 05 10:07:21 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:21Z|00120|binding|INFO|105cfa08-cfd2-4ae4-9b3e-a602cdf9d59b: Claiming unknown
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.642 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:21 np0005546420.localdomain systemd-udevd[311962]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:07:21 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:21Z|00121|binding|INFO|Setting lport 105cfa08-cfd2-4ae4-9b3e-a602cdf9d59b ovn-installed in OVS
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.659 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:21 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap105cfa08-cf: No such device
Dec 05 10:07:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:07:21 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap105cfa08-cf: No such device
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.679 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:07:21 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap105cfa08-cf: No such device
Dec 05 10:07:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2502534238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:07:21 np0005546420.localdomain ceph-mon[298353]: pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:21 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap105cfa08-cf: No such device
Dec 05 10:07:21 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap105cfa08-cf: No such device
Dec 05 10:07:21 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap105cfa08-cf: No such device
Dec 05 10:07:21 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap105cfa08-cf: No such device
Dec 05 10:07:21 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap105cfa08-cf: No such device
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.727 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:21.756 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:21 np0005546420.localdomain systemd[1]: tmp-crun.nGiwac.mount: Deactivated successfully.
Dec 05 10:07:21 np0005546420.localdomain podman[311970]: 2025-12-05 10:07:21.789584892 +0000 UTC m=+0.095909983 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:07:21 np0005546420.localdomain podman[311970]: 2025-12-05 10:07:21.798880477 +0000 UTC m=+0.105205608 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:07:21 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:07:21 np0005546420.localdomain podman[311973]: 2025-12-05 10:07:21.879590143 +0000 UTC m=+0.184418669 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 05 10:07:21 np0005546420.localdomain podman[311973]: 2025-12-05 10:07:21.888426774 +0000 UTC m=+0.193255350 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:07:21 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:07:21 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:21Z|00122|binding|INFO|Setting lport 105cfa08-cfd2-4ae4-9b3e-a602cdf9d59b up in Southbound
Dec 05 10:07:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:21.981 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-5e8d468b-7808-4421-a740-f514723c8cec', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8d468b-7808-4421-a740-f514723c8cec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c886059-6d4a-4c5b-8762-dd2a97452342, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=105cfa08-cfd2-4ae4-9b3e-a602cdf9d59b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:21.982 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 105cfa08-cfd2-4ae4-9b3e-a602cdf9d59b in datapath 5e8d468b-7808-4421-a740-f514723c8cec bound to our chassis
Dec 05 10:07:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:21.983 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5e8d468b-7808-4421-a740-f514723c8cec or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:07:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:21.984 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[aef9984a-58f4-4f07-9556-37cdba752862]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:07:22 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:22.010 2 INFO neutron.agent.securitygroups_rpc [None req-d7be546d-b50c-461e-88aa-7aeab4bd324a 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:22.079 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:22.080 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:07:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:22.082 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:22.226 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:07:22 np0005546420.localdomain systemd[1]: tmp-crun.kpdDhy.mount: Deactivated successfully.
Dec 05 10:07:22 np0005546420.localdomain podman[312076]: 2025-12-05 10:07:22.732214999 +0000 UTC m=+0.091741575 container create e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e8d468b-7808-4421-a740-f514723c8cec, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:07:22 np0005546420.localdomain systemd[1]: Started libpod-conmon-e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3.scope.
Dec 05 10:07:22 np0005546420.localdomain podman[312076]: 2025-12-05 10:07:22.68985763 +0000 UTC m=+0.049384216 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:07:22 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:07:22 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3d5d05f24580d4db5b16d2a5aeb175ce376b763d1e1220f37fb42f3c74eeb1e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:22 np0005546420.localdomain podman[312076]: 2025-12-05 10:07:22.801580406 +0000 UTC m=+0.161106952 container init e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e8d468b-7808-4421-a740-f514723c8cec, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:07:22 np0005546420.localdomain podman[312076]: 2025-12-05 10:07:22.811764519 +0000 UTC m=+0.171291035 container start e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e8d468b-7808-4421-a740-f514723c8cec, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:22 np0005546420.localdomain dnsmasq[312094]: started, version 2.85 cachesize 150
Dec 05 10:07:22 np0005546420.localdomain dnsmasq[312094]: DNS service limited to local subnets
Dec 05 10:07:22 np0005546420.localdomain dnsmasq[312094]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:07:22 np0005546420.localdomain dnsmasq[312094]: warning: no upstream servers configured
Dec 05 10:07:22 np0005546420.localdomain dnsmasq-dhcp[312094]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:07:22 np0005546420.localdomain dnsmasq[312094]: read /var/lib/neutron/dhcp/5e8d468b-7808-4421-a740-f514723c8cec/addn_hosts - 0 addresses
Dec 05 10:07:22 np0005546420.localdomain dnsmasq-dhcp[312094]: read /var/lib/neutron/dhcp/5e8d468b-7808-4421-a740-f514723c8cec/host
Dec 05 10:07:22 np0005546420.localdomain dnsmasq-dhcp[312094]: read /var/lib/neutron/dhcp/5e8d468b-7808-4421-a740-f514723c8cec/opts
Dec 05 10:07:23 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:23.083 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:07:23 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:23.149 262769 INFO neutron.agent.dhcp.agent [None req-be8556de-bd60-4177-bc9c-cf0de38ddd5c - - - - - -] DHCP configuration for ports {'75b69cb2-4506-45ac-ba6d-76b565cae3d0'} is completed
Dec 05 10:07:23 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:23.269 2 INFO neutron.agent.securitygroups_rpc [None req-79871f0e-1ec9-49b3-8ee1-b4966cac2e7e 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 05 10:07:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:23.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:07:23 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:23.941 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:07:24 np0005546420.localdomain ceph-mon[298353]: pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/875477024' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:07:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:07:24 np0005546420.localdomain podman[312095]: 2025-12-05 10:07:24.505413965 +0000 UTC m=+0.078087166 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:07:24 np0005546420.localdomain podman[312095]: 2025-12-05 10:07:24.518727803 +0000 UTC m=+0.091401044 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:24 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:07:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:24.721 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:24 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:24Z|00123|binding|INFO|Releasing lport 105cfa08-cfd2-4ae4-9b3e-a602cdf9d59b from this chassis (sb_readonly=0)
Dec 05 10:07:24 np0005546420.localdomain kernel: device tap105cfa08-cf left promiscuous mode
Dec 05 10:07:24 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:24Z|00124|binding|INFO|Setting lport 105cfa08-cfd2-4ae4-9b3e-a602cdf9d59b down in Southbound
Dec 05 10:07:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:24.737 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-5e8d468b-7808-4421-a740-f514723c8cec', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e8d468b-7808-4421-a740-f514723c8cec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c886059-6d4a-4c5b-8762-dd2a97452342, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=105cfa08-cfd2-4ae4-9b3e-a602cdf9d59b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:24.739 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 105cfa08-cfd2-4ae4-9b3e-a602cdf9d59b in datapath 5e8d468b-7808-4421-a740-f514723c8cec unbound from our chassis
Dec 05 10:07:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:24.741 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e8d468b-7808-4421-a740-f514723c8cec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:07:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:24.743 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[f74bed56-f608-4a58-bc96-5da6509c42aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:07:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:24.744 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:25.075 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1184725333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:07:25 np0005546420.localdomain dnsmasq[312094]: read /var/lib/neutron/dhcp/5e8d468b-7808-4421-a740-f514723c8cec/addn_hosts - 0 addresses
Dec 05 10:07:25 np0005546420.localdomain dnsmasq-dhcp[312094]: read /var/lib/neutron/dhcp/5e8d468b-7808-4421-a740-f514723c8cec/host
Dec 05 10:07:25 np0005546420.localdomain dnsmasq-dhcp[312094]: read /var/lib/neutron/dhcp/5e8d468b-7808-4421-a740-f514723c8cec/opts
Dec 05 10:07:25 np0005546420.localdomain podman[312130]: 2025-12-05 10:07:25.420071263 +0000 UTC m=+0.061725084 container kill e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e8d468b-7808-4421-a740-f514723c8cec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent [None req-78e212d1-af35-458e-bc8b-9ca97c455a57 - - - - - -] Unable to reload_allocations dhcp for 5e8d468b-7808-4421-a740-f514723c8cec.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap105cfa08-cf not found in namespace qdhcp-5e8d468b-7808-4421-a740-f514723c8cec.
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap105cfa08-cf not found in namespace qdhcp-5e8d468b-7808-4421-a740-f514723c8cec.
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.448 262769 ERROR neutron.agent.dhcp.agent 
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.451 262769 INFO neutron.agent.dhcp.agent [None req-e72f08a9-71d6-4e3c-a621-891373a07277 - - - - - -] Synchronizing state
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.736 262769 INFO neutron.agent.dhcp.agent [None req-f2bc4270-f076-4b90-ac2e-08073c0429b6 - - - - - -] All active networks have been fetched through RPC.
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.736 262769 INFO neutron.agent.dhcp.agent [-] Starting network 5e8d468b-7808-4421-a740-f514723c8cec dhcp configuration
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.737 262769 INFO neutron.agent.dhcp.agent [-] Finished network 5e8d468b-7808-4421-a740-f514723c8cec dhcp configuration
Dec 05 10:07:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:25.738 262769 INFO neutron.agent.dhcp.agent [None req-f2bc4270-f076-4b90-ac2e-08073c0429b6 - - - - - -] Synchronizing state complete
Dec 05 10:07:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:25.848 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:25 np0005546420.localdomain dnsmasq[312094]: exiting on receipt of SIGTERM
Dec 05 10:07:25 np0005546420.localdomain podman[312162]: 2025-12-05 10:07:25.981898568 +0000 UTC m=+0.053988277 container kill e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e8d468b-7808-4421-a740-f514723c8cec, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:07:25 np0005546420.localdomain systemd[1]: libpod-e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3.scope: Deactivated successfully.
Dec 05 10:07:26 np0005546420.localdomain podman[312175]: 2025-12-05 10:07:26.04683124 +0000 UTC m=+0.053110410 container died e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e8d468b-7808-4421-a740-f514723c8cec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:07:26 np0005546420.localdomain podman[312175]: 2025-12-05 10:07:26.075309834 +0000 UTC m=+0.081588974 container cleanup e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e8d468b-7808-4421-a740-f514723c8cec, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:07:26 np0005546420.localdomain systemd[1]: libpod-conmon-e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3.scope: Deactivated successfully.
Dec 05 10:07:26 np0005546420.localdomain podman[312178]: 2025-12-05 10:07:26.130370823 +0000 UTC m=+0.125456850 container remove e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5e8d468b-7808-4421-a740-f514723c8cec, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:07:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:26.154 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e3d5d05f24580d4db5b16d2a5aeb175ce376b763d1e1220f37fb42f3c74eeb1e-merged.mount: Deactivated successfully.
Dec 05 10:07:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4e18b645da1ec591c0180032e564d3289e7a87ffdb3c0d093ed0c64b22602e3-userdata-shm.mount: Deactivated successfully.
Dec 05 10:07:26 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d5e8d468b\x2d7808\x2d4421\x2da740\x2df514723c8cec.mount: Deactivated successfully.
Dec 05 10:07:26 np0005546420.localdomain ceph-mon[298353]: pgmap v206: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:28 np0005546420.localdomain ceph-mon[298353]: pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:29 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:29.078 262769 INFO neutron.agent.linux.ip_lib [None req-a832b203-d849-4112-83a7-bf88785ac9ef - - - - - -] Device tapb065d4ca-62 cannot be used as it has no MAC address
Dec 05 10:07:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:29.110 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:29 np0005546420.localdomain kernel: device tapb065d4ca-62 entered promiscuous mode
Dec 05 10:07:29 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:29Z|00125|binding|INFO|Claiming lport b065d4ca-6216-4d08-98c2-131e65b74104 for this chassis.
Dec 05 10:07:29 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929249.1168] manager: (tapb065d4ca-62): new Generic device (/org/freedesktop/NetworkManager/Devices/25)
Dec 05 10:07:29 np0005546420.localdomain systemd-udevd[312213]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:07:29 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:29Z|00126|binding|INFO|b065d4ca-6216-4d08-98c2-131e65b74104: Claiming unknown
Dec 05 10:07:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:29.118 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:29.137 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-1a709339-1ffd-4303-8611-133ecb004bad', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a709339-1ffd-4303-8611-133ecb004bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60e9ddeb-cd29-43e9-aaa8-267f7c47cda0, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=b065d4ca-6216-4d08-98c2-131e65b74104) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:29.139 159503 INFO neutron.agent.ovn.metadata.agent [-] Port b065d4ca-6216-4d08-98c2-131e65b74104 in datapath 1a709339-1ffd-4303-8611-133ecb004bad bound to our chassis
Dec 05 10:07:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:29.140 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1a709339-1ffd-4303-8611-133ecb004bad or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:07:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:29.140 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9ecda8-3097-4299-8e84-3baf990ae3b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:07:29 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb065d4ca-62: No such device
Dec 05 10:07:29 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb065d4ca-62: No such device
Dec 05 10:07:29 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:29Z|00127|binding|INFO|Setting lport b065d4ca-6216-4d08-98c2-131e65b74104 ovn-installed in OVS
Dec 05 10:07:29 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:29Z|00128|binding|INFO|Setting lport b065d4ca-6216-4d08-98c2-131e65b74104 up in Southbound
Dec 05 10:07:29 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb065d4ca-62: No such device
Dec 05 10:07:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:29.150 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:29 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb065d4ca-62: No such device
Dec 05 10:07:29 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb065d4ca-62: No such device
Dec 05 10:07:29 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb065d4ca-62: No such device
Dec 05 10:07:29 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb065d4ca-62: No such device
Dec 05 10:07:29 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb065d4ca-62: No such device
Dec 05 10:07:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:29.179 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:29.203 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:29 np0005546420.localdomain ceph-mon[298353]: pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:30.130 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:30 np0005546420.localdomain podman[312284]: 2025-12-05 10:07:30.463997433 +0000 UTC m=+0.094532051 container create 88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a709339-1ffd-4303-8611-133ecb004bad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 10:07:30 np0005546420.localdomain systemd[1]: Started libpod-conmon-88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b.scope.
Dec 05 10:07:30 np0005546420.localdomain podman[312284]: 2025-12-05 10:07:30.419376574 +0000 UTC m=+0.049911202 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:07:30 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:07:30 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/772648ffb58326d13e1bac5a2da64cdbea4b8cca0c1baf45022732d57098b410/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:30 np0005546420.localdomain podman[312284]: 2025-12-05 10:07:30.541424778 +0000 UTC m=+0.171959386 container init 88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a709339-1ffd-4303-8611-133ecb004bad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 10:07:30 np0005546420.localdomain podman[312284]: 2025-12-05 10:07:30.551949801 +0000 UTC m=+0.182484419 container start 88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a709339-1ffd-4303-8611-133ecb004bad, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:07:30 np0005546420.localdomain dnsmasq[312302]: started, version 2.85 cachesize 150
Dec 05 10:07:30 np0005546420.localdomain dnsmasq[312302]: DNS service limited to local subnets
Dec 05 10:07:30 np0005546420.localdomain dnsmasq[312302]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:07:30 np0005546420.localdomain dnsmasq[312302]: warning: no upstream servers configured
Dec 05 10:07:30 np0005546420.localdomain dnsmasq-dhcp[312302]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:07:30 np0005546420.localdomain dnsmasq[312302]: read /var/lib/neutron/dhcp/1a709339-1ffd-4303-8611-133ecb004bad/addn_hosts - 0 addresses
Dec 05 10:07:30 np0005546420.localdomain dnsmasq-dhcp[312302]: read /var/lib/neutron/dhcp/1a709339-1ffd-4303-8611-133ecb004bad/host
Dec 05 10:07:30 np0005546420.localdomain dnsmasq-dhcp[312302]: read /var/lib/neutron/dhcp/1a709339-1ffd-4303-8611-133ecb004bad/opts
Dec 05 10:07:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:30.695 262769 INFO neutron.agent.dhcp.agent [None req-6373725c-e3ad-41d2-b415-5ef15e9c2241 - - - - - -] DHCP configuration for ports {'8ae78cab-1541-4173-8433-bda3982791e9'} is completed
Dec 05 10:07:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:31.157 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:32 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:32Z|00129|binding|INFO|Releasing lport b065d4ca-6216-4d08-98c2-131e65b74104 from this chassis (sb_readonly=0)
Dec 05 10:07:32 np0005546420.localdomain kernel: device tapb065d4ca-62 left promiscuous mode
Dec 05 10:07:32 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:32Z|00130|binding|INFO|Setting lport b065d4ca-6216-4d08-98c2-131e65b74104 down in Southbound
Dec 05 10:07:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:32.310 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:32.325 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-1a709339-1ffd-4303-8611-133ecb004bad', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1a709339-1ffd-4303-8611-133ecb004bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60e9ddeb-cd29-43e9-aaa8-267f7c47cda0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=b065d4ca-6216-4d08-98c2-131e65b74104) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:32.328 159503 INFO neutron.agent.ovn.metadata.agent [-] Port b065d4ca-6216-4d08-98c2-131e65b74104 in datapath 1a709339-1ffd-4303-8611-133ecb004bad unbound from our chassis
Dec 05 10:07:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:32.331 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:32.331 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1a709339-1ffd-4303-8611-133ecb004bad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:07:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:32.333 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c740fa-3a3f-448e-91aa-7c70e9f51f30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:07:32 np0005546420.localdomain ceph-mon[298353]: pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:33 np0005546420.localdomain dnsmasq[312302]: read /var/lib/neutron/dhcp/1a709339-1ffd-4303-8611-133ecb004bad/addn_hosts - 0 addresses
Dec 05 10:07:33 np0005546420.localdomain dnsmasq-dhcp[312302]: read /var/lib/neutron/dhcp/1a709339-1ffd-4303-8611-133ecb004bad/host
Dec 05 10:07:33 np0005546420.localdomain dnsmasq-dhcp[312302]: read /var/lib/neutron/dhcp/1a709339-1ffd-4303-8611-133ecb004bad/opts
Dec 05 10:07:33 np0005546420.localdomain podman[312322]: 2025-12-05 10:07:33.438357736 +0000 UTC m=+0.065103868 container kill 88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a709339-1ffd-4303-8611-133ecb004bad, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent [None req-aa000fbe-0dd7-4561-ade1-e4ca7fd1303d - - - - - -] Unable to reload_allocations dhcp for 1a709339-1ffd-4303-8611-133ecb004bad.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapb065d4ca-62 not found in namespace qdhcp-1a709339-1ffd-4303-8611-133ecb004bad.
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapb065d4ca-62 not found in namespace qdhcp-1a709339-1ffd-4303-8611-133ecb004bad.
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.467 262769 ERROR neutron.agent.dhcp.agent 
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.470 262769 INFO neutron.agent.dhcp.agent [None req-f2bc4270-f076-4b90-ac2e-08073c0429b6 - - - - - -] Synchronizing state
Dec 05 10:07:33 np0005546420.localdomain ceph-mon[298353]: pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.676 262769 INFO neutron.agent.dhcp.agent [None req-200d2229-3056-421d-b81d-d900430543b6 - - - - - -] All active networks have been fetched through RPC.
Dec 05 10:07:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:33.678 262769 INFO neutron.agent.dhcp.agent [-] Starting network 1a709339-1ffd-4303-8611-133ecb004bad dhcp configuration
Dec 05 10:07:33 np0005546420.localdomain dnsmasq[312302]: exiting on receipt of SIGTERM
Dec 05 10:07:33 np0005546420.localdomain podman[312355]: 2025-12-05 10:07:33.863209009 +0000 UTC m=+0.063683165 container kill 88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a709339-1ffd-4303-8611-133ecb004bad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:07:33 np0005546420.localdomain systemd[1]: libpod-88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b.scope: Deactivated successfully.
Dec 05 10:07:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:07:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:07:33 np0005546420.localdomain podman[312370]: 2025-12-05 10:07:33.929800382 +0000 UTC m=+0.047449516 container died 88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a709339-1ffd-4303-8611-133ecb004bad, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:07:33 np0005546420.localdomain podman[312383]: 2025-12-05 10:07:33.974144483 +0000 UTC m=+0.076457158 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:07:34 np0005546420.localdomain podman[312370]: 2025-12-05 10:07:34.025360863 +0000 UTC m=+0.143009927 container remove 88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1a709339-1ffd-4303-8611-133ecb004bad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 05 10:07:34 np0005546420.localdomain systemd[1]: libpod-conmon-88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b.scope: Deactivated successfully.
Dec 05 10:07:34 np0005546420.localdomain podman[312383]: 2025-12-05 10:07:34.063761302 +0000 UTC m=+0.166074007 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:07:34 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:07:34 np0005546420.localdomain podman[312376]: 2025-12-05 10:07:34.068427194 +0000 UTC m=+0.176518426 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible)
Dec 05 10:07:34 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:34.144 262769 INFO neutron.agent.dhcp.agent [None req-872c85cc-c7f7-4a30-872f-e505785e5e0a - - - - - -] Finished network 1a709339-1ffd-4303-8611-133ecb004bad dhcp configuration
Dec 05 10:07:34 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:34.145 262769 INFO neutron.agent.dhcp.agent [None req-200d2229-3056-421d-b81d-d900430543b6 - - - - - -] Synchronizing state complete
Dec 05 10:07:34 np0005546420.localdomain podman[312376]: 2025-12-05 10:07:34.151457302 +0000 UTC m=+0.259548584 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 10:07:34 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:07:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-772648ffb58326d13e1bac5a2da64cdbea4b8cca0c1baf45022732d57098b410-merged.mount: Deactivated successfully.
Dec 05 10:07:34 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88532b3542085537b9dbe83824cc735a57090e006efe89d805bb8658e239e77b-userdata-shm.mount: Deactivated successfully.
Dec 05 10:07:34 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d1a709339\x2d1ffd\x2d4303\x2d8611\x2d133ecb004bad.mount: Deactivated successfully.
Dec 05 10:07:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:34.498 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:35.169 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:36.160 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:36 np0005546420.localdomain ceph-mon[298353]: pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:37 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:37.180 2 INFO neutron.agent.securitygroups_rpc [None req-b11c3a7e-3730-4dce-9b71-9d82cc4e3184 2f90c5186cc14a0a8a8f7faf3454b78f 0b296e0ab4b6447982bcfc680b8ba396 - - default default] Security group member updated ['13eefb8b-3a4b-4bb6-80e0-07e6a1e0bd51']
Dec 05 10:07:37 np0005546420.localdomain ceph-mon[298353]: pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:07:38 np0005546420.localdomain podman[312443]: 2025-12-05 10:07:38.501147795 +0000 UTC m=+0.078824780 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:07:38 np0005546420.localdomain podman[312443]: 2025-12-05 10:07:38.544496064 +0000 UTC m=+0.122173099 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:07:38 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:07:38 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:38.696 2 INFO neutron.agent.securitygroups_rpc [None req-5d7e3fb7-310f-4118-b811-99351dc54ebc 2f90c5186cc14a0a8a8f7faf3454b78f 0b296e0ab4b6447982bcfc680b8ba396 - - default default] Security group member updated ['13eefb8b-3a4b-4bb6-80e0-07e6a1e0bd51']
Dec 05 10:07:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:40.199 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:40 np0005546420.localdomain ceph-mon[298353]: pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:41.163 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:42 np0005546420.localdomain ceph-mon[298353]: pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:43 np0005546420.localdomain ceph-mon[298353]: pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:43 np0005546420.localdomain sshd[312468]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:07:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:45.203 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:45 np0005546420.localdomain sshd[312468]: Invalid user admin from 45.135.232.92 port 38972
Dec 05 10:07:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:07:45 np0005546420.localdomain systemd[1]: tmp-crun.RaFy1N.mount: Deactivated successfully.
Dec 05 10:07:45 np0005546420.localdomain podman[312470]: 2025-12-05 10:07:45.810558314 +0000 UTC m=+0.092827389 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 10:07:45 np0005546420.localdomain podman[312470]: 2025-12-05 10:07:45.821090727 +0000 UTC m=+0.103359792 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:45 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:07:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:46.195 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:46 np0005546420.localdomain ceph-mon[298353]: pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:46 np0005546420.localdomain sshd[312468]: Connection reset by invalid user admin 45.135.232.92 port 38972 [preauth]
Dec 05 10:07:46 np0005546420.localdomain sshd[312488]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:07:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:07:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:07:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:07:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:07:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:07:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18263 "" "Go-http-client/1.1"
Dec 05 10:07:47 np0005546420.localdomain ceph-mon[298353]: pgmap v217: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:48.260 262769 INFO neutron.agent.linux.ip_lib [None req-5d723ed4-75df-4bff-a191-d7a50dd6c3cc - - - - - -] Device tape1895315-0c cannot be used as it has no MAC address
Dec 05 10:07:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:48.281 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:48 np0005546420.localdomain kernel: device tape1895315-0c entered promiscuous mode
Dec 05 10:07:48 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929268.2912] manager: (tape1895315-0c): new Generic device (/org/freedesktop/NetworkManager/Devices/26)
Dec 05 10:07:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:48.293 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:48Z|00131|binding|INFO|Claiming lport e1895315-0c1b-48a7-8753-718d8e537e9c for this chassis.
Dec 05 10:07:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:48Z|00132|binding|INFO|e1895315-0c1b-48a7-8753-718d8e537e9c: Claiming unknown
Dec 05 10:07:48 np0005546420.localdomain systemd-udevd[312500]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:07:48 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:48.309 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-799d815d-4b9e-4aa4-95d3-a5c55a324f34', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-799d815d-4b9e-4aa4-95d3-a5c55a324f34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c280aa5-7264-4c82-8276-acaf8631d06d, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=e1895315-0c1b-48a7-8753-718d8e537e9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:48 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:48.311 159503 INFO neutron.agent.ovn.metadata.agent [-] Port e1895315-0c1b-48a7-8753-718d8e537e9c in datapath 799d815d-4b9e-4aa4-95d3-a5c55a324f34 bound to our chassis
Dec 05 10:07:48 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:48.313 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 799d815d-4b9e-4aa4-95d3-a5c55a324f34 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:07:48 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:48.314 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9ba543-9ddb-4b54-9a4a-434a7cbbfb65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:07:48 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape1895315-0c: No such device
Dec 05 10:07:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:48Z|00133|binding|INFO|Setting lport e1895315-0c1b-48a7-8753-718d8e537e9c ovn-installed in OVS
Dec 05 10:07:48 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:48Z|00134|binding|INFO|Setting lport e1895315-0c1b-48a7-8753-718d8e537e9c up in Southbound
Dec 05 10:07:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:48.330 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:48 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape1895315-0c: No such device
Dec 05 10:07:48 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape1895315-0c: No such device
Dec 05 10:07:48 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape1895315-0c: No such device
Dec 05 10:07:48 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape1895315-0c: No such device
Dec 05 10:07:48 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape1895315-0c: No such device
Dec 05 10:07:48 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape1895315-0c: No such device
Dec 05 10:07:48 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape1895315-0c: No such device
Dec 05 10:07:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:48.369 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:48.429 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:07:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:07:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:07:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:07:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:07:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:07:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:07:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:07:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:07:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:07:49 np0005546420.localdomain podman[312568]: 2025-12-05 10:07:49.301809173 +0000 UTC m=+0.089362433 container create dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-799d815d-4b9e-4aa4-95d3-a5c55a324f34, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:07:49 np0005546420.localdomain systemd[1]: Started libpod-conmon-dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9.scope.
Dec 05 10:07:49 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:07:49 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbdb6370f48b9b72761129e8d4f8cd8bd968d2e127297fc13ba0bd44fbf45d4d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:49 np0005546420.localdomain podman[312568]: 2025-12-05 10:07:49.359040959 +0000 UTC m=+0.146594219 container init dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-799d815d-4b9e-4aa4-95d3-a5c55a324f34, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 10:07:49 np0005546420.localdomain podman[312568]: 2025-12-05 10:07:49.262573779 +0000 UTC m=+0.050127069 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:07:49 np0005546420.localdomain podman[312568]: 2025-12-05 10:07:49.366801026 +0000 UTC m=+0.154354286 container start dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-799d815d-4b9e-4aa4-95d3-a5c55a324f34, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 05 10:07:49 np0005546420.localdomain dnsmasq[312587]: started, version 2.85 cachesize 150
Dec 05 10:07:49 np0005546420.localdomain dnsmasq[312587]: DNS service limited to local subnets
Dec 05 10:07:49 np0005546420.localdomain dnsmasq[312587]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:07:49 np0005546420.localdomain dnsmasq[312587]: warning: no upstream servers configured
Dec 05 10:07:49 np0005546420.localdomain dnsmasq-dhcp[312587]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:07:49 np0005546420.localdomain dnsmasq[312587]: read /var/lib/neutron/dhcp/799d815d-4b9e-4aa4-95d3-a5c55a324f34/addn_hosts - 0 addresses
Dec 05 10:07:49 np0005546420.localdomain dnsmasq-dhcp[312587]: read /var/lib/neutron/dhcp/799d815d-4b9e-4aa4-95d3-a5c55a324f34/host
Dec 05 10:07:49 np0005546420.localdomain dnsmasq-dhcp[312587]: read /var/lib/neutron/dhcp/799d815d-4b9e-4aa4-95d3-a5c55a324f34/opts
Dec 05 10:07:49 np0005546420.localdomain sshd[312488]: Connection reset by authenticating user root 45.135.232.92 port 46906 [preauth]
Dec 05 10:07:49 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:49.548 262769 INFO neutron.agent.dhcp.agent [None req-20536975-92de-48df-8f73-9d69faa622e6 - - - - - -] DHCP configuration for ports {'2a4f9551-3840-4e53-b92f-e3867c168ad7'} is completed
Dec 05 10:07:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:49.597 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:49Z|00135|binding|INFO|Releasing lport e1895315-0c1b-48a7-8753-718d8e537e9c from this chassis (sb_readonly=0)
Dec 05 10:07:49 np0005546420.localdomain kernel: device tape1895315-0c left promiscuous mode
Dec 05 10:07:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:49Z|00136|binding|INFO|Setting lport e1895315-0c1b-48a7-8753-718d8e537e9c down in Southbound
Dec 05 10:07:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:49.608 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-799d815d-4b9e-4aa4-95d3-a5c55a324f34', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-799d815d-4b9e-4aa4-95d3-a5c55a324f34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c280aa5-7264-4c82-8276-acaf8631d06d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=e1895315-0c1b-48a7-8753-718d8e537e9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:49.609 159503 INFO neutron.agent.ovn.metadata.agent [-] Port e1895315-0c1b-48a7-8753-718d8e537e9c in datapath 799d815d-4b9e-4aa4-95d3-a5c55a324f34 unbound from our chassis
Dec 05 10:07:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:49.610 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 799d815d-4b9e-4aa4-95d3-a5c55a324f34, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:07:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:49.611 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[71d4d0f7-ac6a-418f-8b66-7c4ac239fd0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:07:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:49.626 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:49 np0005546420.localdomain sshd[312590]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:07:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:50 np0005546420.localdomain dnsmasq[312587]: read /var/lib/neutron/dhcp/799d815d-4b9e-4aa4-95d3-a5c55a324f34/addn_hosts - 0 addresses
Dec 05 10:07:50 np0005546420.localdomain dnsmasq-dhcp[312587]: read /var/lib/neutron/dhcp/799d815d-4b9e-4aa4-95d3-a5c55a324f34/host
Dec 05 10:07:50 np0005546420.localdomain dnsmasq-dhcp[312587]: read /var/lib/neutron/dhcp/799d815d-4b9e-4aa4-95d3-a5c55a324f34/opts
Dec 05 10:07:50 np0005546420.localdomain podman[312608]: 2025-12-05 10:07:50.110638896 +0000 UTC m=+0.060805677 container kill dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-799d815d-4b9e-4aa4-95d3-a5c55a324f34, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent [None req-a10bef42-c318-4ee1-b203-9a47f8cbca3b - - - - - -] Unable to reload_allocations dhcp for 799d815d-4b9e-4aa4-95d3-a5c55a324f34.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tape1895315-0c not found in namespace qdhcp-799d815d-4b9e-4aa4-95d3-a5c55a324f34.
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tape1895315-0c not found in namespace qdhcp-799d815d-4b9e-4aa4-95d3-a5c55a324f34.
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.134 262769 ERROR neutron.agent.dhcp.agent 
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.139 262769 INFO neutron.agent.dhcp.agent [None req-200d2229-3056-421d-b81d-d900430543b6 - - - - - -] Synchronizing state
Dec 05 10:07:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:50.207 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:50 np0005546420.localdomain ceph-mon[298353]: pgmap v218: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:50.278 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.291 262769 INFO neutron.agent.dhcp.agent [None req-34144f92-2acf-4a30-aa47-05dd785631df - - - - - -] All active networks have been fetched through RPC.
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.292 262769 INFO neutron.agent.dhcp.agent [-] Starting network 75b36aee-b9df-40d0-a03b-5c22192226ba dhcp configuration
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.292 262769 INFO neutron.agent.dhcp.agent [-] Finished network 75b36aee-b9df-40d0-a03b-5c22192226ba dhcp configuration
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.293 262769 INFO neutron.agent.dhcp.agent [-] Starting network 799d815d-4b9e-4aa4-95d3-a5c55a324f34 dhcp configuration
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.293 262769 INFO neutron.agent.dhcp.agent [-] Finished network 799d815d-4b9e-4aa4-95d3-a5c55a324f34 dhcp configuration
Dec 05 10:07:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:50.294 262769 INFO neutron.agent.dhcp.agent [None req-34144f92-2acf-4a30-aa47-05dd785631df - - - - - -] Synchronizing state complete
Dec 05 10:07:50 np0005546420.localdomain dnsmasq[312587]: exiting on receipt of SIGTERM
Dec 05 10:07:50 np0005546420.localdomain podman[312638]: 2025-12-05 10:07:50.632284788 +0000 UTC m=+0.064029486 container kill dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-799d815d-4b9e-4aa4-95d3-a5c55a324f34, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:50 np0005546420.localdomain systemd[1]: libpod-dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9.scope: Deactivated successfully.
Dec 05 10:07:50 np0005546420.localdomain podman[312652]: 2025-12-05 10:07:50.703498292 +0000 UTC m=+0.056985389 container died dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-799d815d-4b9e-4aa4-95d3-a5c55a324f34, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 10:07:50 np0005546420.localdomain podman[312652]: 2025-12-05 10:07:50.736066671 +0000 UTC m=+0.089553728 container cleanup dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-799d815d-4b9e-4aa4-95d3-a5c55a324f34, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:07:50 np0005546420.localdomain systemd[1]: libpod-conmon-dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9.scope: Deactivated successfully.
Dec 05 10:07:50 np0005546420.localdomain podman[312654]: 2025-12-05 10:07:50.785325643 +0000 UTC m=+0.128526594 container remove dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-799d815d-4b9e-4aa4-95d3-a5c55a324f34, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:51.197 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:51 np0005546420.localdomain sshd[312590]: Invalid user public from 45.135.232.92 port 46908
Dec 05 10:07:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-dbdb6370f48b9b72761129e8d4f8cd8bd968d2e127297fc13ba0bd44fbf45d4d-merged.mount: Deactivated successfully.
Dec 05 10:07:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc329c689a7460268ea7a356d88458e7123096e67dd9f42fd1e2ba075fc10eb9-userdata-shm.mount: Deactivated successfully.
Dec 05 10:07:51 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d799d815d\x2d4b9e\x2d4aa4\x2d95d3\x2da5c55a324f34.mount: Deactivated successfully.
Dec 05 10:07:51 np0005546420.localdomain sshd[312590]: Connection reset by invalid user public 45.135.232.92 port 46908 [preauth]
Dec 05 10:07:51 np0005546420.localdomain sshd[312681]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:07:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:07:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:07:52 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:52.305 262769 INFO neutron.agent.linux.ip_lib [None req-1f095b4e-ebd0-4813-96f2-82a8723a2d6d - - - - - -] Device tape5e0e1c3-ec cannot be used as it has no MAC address
Dec 05 10:07:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:52.376 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:52 np0005546420.localdomain systemd[1]: tmp-crun.eb0XJe.mount: Deactivated successfully.
Dec 05 10:07:52 np0005546420.localdomain podman[312685]: 2025-12-05 10:07:52.392478074 +0000 UTC m=+0.152088307 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:07:52 np0005546420.localdomain kernel: device tape5e0e1c3-ec entered promiscuous mode
Dec 05 10:07:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:52Z|00137|binding|INFO|Claiming lport e5e0e1c3-ece0-4be2-b3cb-b13d88e8f23e for this chassis.
Dec 05 10:07:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:52Z|00138|binding|INFO|e5e0e1c3-ece0-4be2-b3cb-b13d88e8f23e: Claiming unknown
Dec 05 10:07:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:52.399 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:52 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929272.4033] manager: (tape5e0e1c3-ec): new Generic device (/org/freedesktop/NetworkManager/Devices/27)
Dec 05 10:07:52 np0005546420.localdomain podman[312686]: 2025-12-05 10:07:52.409070303 +0000 UTC m=+0.168937494 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:07:52 np0005546420.localdomain systemd-udevd[312730]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:07:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:52.415 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-75b36aee-b9df-40d0-a03b-5c22192226ba', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75b36aee-b9df-40d0-a03b-5c22192226ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6225d95e0a924813958256bdb79de31f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4fbf3a80-5208-4b6c-a425-a7be197c965a, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=e5e0e1c3-ece0-4be2-b3cb-b13d88e8f23e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:52.417 159503 INFO neutron.agent.ovn.metadata.agent [-] Port e5e0e1c3-ece0-4be2-b3cb-b13d88e8f23e in datapath 75b36aee-b9df-40d0-a03b-5c22192226ba bound to our chassis
Dec 05 10:07:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:52.421 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port ab73187c-a1c3-4be4-ad84-8ee3262c1192 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:07:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:52.421 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75b36aee-b9df-40d0-a03b-5c22192226ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:07:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:52.423 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[5b185f50-ccb9-49ec-ace0-0a6f47fd1350]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:07:52 np0005546420.localdomain podman[312685]: 2025-12-05 10:07:52.427501648 +0000 UTC m=+0.187111851 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:07:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape5e0e1c3-ec: No such device
Dec 05 10:07:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape5e0e1c3-ec: No such device
Dec 05 10:07:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:52Z|00139|binding|INFO|Setting lport e5e0e1c3-ece0-4be2-b3cb-b13d88e8f23e ovn-installed in OVS
Dec 05 10:07:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:52Z|00140|binding|INFO|Setting lport e5e0e1c3-ece0-4be2-b3cb-b13d88e8f23e up in Southbound
Dec 05 10:07:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:52.444 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:52 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:07:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape5e0e1c3-ec: No such device
Dec 05 10:07:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape5e0e1c3-ec: No such device
Dec 05 10:07:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape5e0e1c3-ec: No such device
Dec 05 10:07:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape5e0e1c3-ec: No such device
Dec 05 10:07:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape5e0e1c3-ec: No such device
Dec 05 10:07:52 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape5e0e1c3-ec: No such device
Dec 05 10:07:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:52.478 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:52 np0005546420.localdomain podman[312686]: 2025-12-05 10:07:52.488567192 +0000 UTC m=+0.248434363 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 10:07:52 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:07:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:52.511 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:52 np0005546420.localdomain ceph-mon[298353]: pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:53 np0005546420.localdomain podman[312804]: 2025-12-05 10:07:53.595939532 +0000 UTC m=+0.088151736 container create 1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-75b36aee-b9df-40d0-a03b-5c22192226ba, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:07:53 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e125 e125: 6 total, 6 up, 6 in
Dec 05 10:07:53 np0005546420.localdomain ceph-mon[298353]: pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:53 np0005546420.localdomain podman[312804]: 2025-12-05 10:07:53.552941813 +0000 UTC m=+0.045154037 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:07:53 np0005546420.localdomain systemd[1]: Started libpod-conmon-1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b.scope.
Dec 05 10:07:53 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:07:53 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961db4bc10e6a9075005b5190aaaa19d5fe4e7ba14f15ddddfc2d9351acbacdb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:53 np0005546420.localdomain podman[312804]: 2025-12-05 10:07:53.68844868 +0000 UTC m=+0.180660884 container init 1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-75b36aee-b9df-40d0-a03b-5c22192226ba, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:07:53 np0005546420.localdomain podman[312804]: 2025-12-05 10:07:53.697698883 +0000 UTC m=+0.189911087 container start 1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-75b36aee-b9df-40d0-a03b-5c22192226ba, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:07:53 np0005546420.localdomain dnsmasq[312823]: started, version 2.85 cachesize 150
Dec 05 10:07:53 np0005546420.localdomain dnsmasq[312823]: DNS service limited to local subnets
Dec 05 10:07:53 np0005546420.localdomain dnsmasq[312823]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:07:53 np0005546420.localdomain dnsmasq[312823]: warning: no upstream servers configured
Dec 05 10:07:53 np0005546420.localdomain dnsmasq-dhcp[312823]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:07:53 np0005546420.localdomain dnsmasq[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/addn_hosts - 0 addresses
Dec 05 10:07:53 np0005546420.localdomain dnsmasq-dhcp[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/host
Dec 05 10:07:53 np0005546420.localdomain dnsmasq-dhcp[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/opts
Dec 05 10:07:53 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:53.856 262769 INFO neutron.agent.dhcp.agent [None req-0eeb6d7e-8d7e-4102-9b2e-75c4d34383fe - - - - - -] DHCP configuration for ports {'3d61839d-859d-4509-832d-c4d10314ac97'} is completed
Dec 05 10:07:54 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:54.057 262769 INFO neutron.agent.linux.ip_lib [None req-9dc2eba1-d1a4-4dca-8464-0d4e5a9d4df9 - - - - - -] Device tapf8e4a975-68 cannot be used as it has no MAC address
Dec 05 10:07:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:54.082 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:54 np0005546420.localdomain kernel: device tapf8e4a975-68 entered promiscuous mode
Dec 05 10:07:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:54Z|00141|binding|INFO|Claiming lport f8e4a975-68d0-4028-a21b-fdf65ffc532b for this chassis.
Dec 05 10:07:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:54Z|00142|binding|INFO|f8e4a975-68d0-4028-a21b-fdf65ffc532b: Claiming unknown
Dec 05 10:07:54 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929274.0892] manager: (tapf8e4a975-68): new Generic device (/org/freedesktop/NetworkManager/Devices/28)
Dec 05 10:07:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:54.091 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:54 np0005546420.localdomain sshd[312681]: Connection reset by authenticating user root 45.135.232.92 port 46924 [preauth]
Dec 05 10:07:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:54.103 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:54Z|00143|binding|INFO|Setting lport f8e4a975-68d0-4028-a21b-fdf65ffc532b ovn-installed in OVS
Dec 05 10:07:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:54Z|00144|binding|INFO|Setting lport f8e4a975-68d0-4028-a21b-fdf65ffc532b up in Southbound
Dec 05 10:07:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:54.109 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:54.110 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-0d6b1ff2-fb15-4efe-8688-22522d947bc8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d6b1ff2-fb15-4efe-8688-22522d947bc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b32afc0f-41ec-4de4-937f-23bec30428a6, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=f8e4a975-68d0-4028-a21b-fdf65ffc532b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:54.113 159503 INFO neutron.agent.ovn.metadata.agent [-] Port f8e4a975-68d0-4028-a21b-fdf65ffc532b in datapath 0d6b1ff2-fb15-4efe-8688-22522d947bc8 bound to our chassis
Dec 05 10:07:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:54.114 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0d6b1ff2-fb15-4efe-8688-22522d947bc8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:07:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:54.115 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[0db6ab6d-ccb5-4bba-b0ca-b712d5b543f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:07:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:54.140 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:54.179 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:54.206 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:54 np0005546420.localdomain sshd[312844]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:07:54 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:54.480 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:07:54Z, description=, device_id=da08f82a-06a1-48d1-9ddf-1ff4fed633bf, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a02aa90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a051130>], id=cebc1387-7c6f-46d6-8239-86b6c7428e11, ip_allocation=immediate, mac_address=fa:16:3e:f2:4b:e7, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:07:47Z, description=, dns_domain=, id=75b36aee-b9df-40d0-a03b-5c22192226ba, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1452372109-network, port_security_enabled=True, project_id=6225d95e0a924813958256bdb79de31f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42622, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1265, status=ACTIVE, subnets=['4411b5c1-cc65-40e6-96bf-9bfff22e8064'], tags=[], tenant_id=6225d95e0a924813958256bdb79de31f, updated_at=2025-12-05T10:07:50Z, vlan_transparent=None, network_id=75b36aee-b9df-40d0-a03b-5c22192226ba, port_security_enabled=False, project_id=6225d95e0a924813958256bdb79de31f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1301, status=DOWN, tags=[], tenant_id=6225d95e0a924813958256bdb79de31f, updated_at=2025-12-05T10:07:54Z on network 75b36aee-b9df-40d0-a03b-5c22192226ba
Dec 05 10:07:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:07:54 np0005546420.localdomain ceph-mon[298353]: osdmap e125: 6 total, 6 up, 6 in
Dec 05 10:07:54 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e126 e126: 6 total, 6 up, 6 in
Dec 05 10:07:54 np0005546420.localdomain podman[312879]: 2025-12-05 10:07:54.716178107 +0000 UTC m=+0.081712208 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 05 10:07:54 np0005546420.localdomain dnsmasq[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/addn_hosts - 1 addresses
Dec 05 10:07:54 np0005546420.localdomain dnsmasq-dhcp[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/host
Dec 05 10:07:54 np0005546420.localdomain dnsmasq-dhcp[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/opts
Dec 05 10:07:54 np0005546420.localdomain podman[312881]: 2025-12-05 10:07:54.72214311 +0000 UTC m=+0.077415726 container kill 1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-75b36aee-b9df-40d0-a03b-5c22192226ba, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:07:54 np0005546420.localdomain podman[312879]: 2025-12-05 10:07:54.735266203 +0000 UTC m=+0.100800314 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 05 10:07:54 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:07:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:55.034 262769 INFO neutron.agent.dhcp.agent [None req-fe8ac054-4561-470b-95bc-f9ff60a016af - - - - - -] DHCP configuration for ports {'cebc1387-7c6f-46d6-8239-86b6c7428e11'} is completed
Dec 05 10:07:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:07:55 np0005546420.localdomain podman[312944]: 2025-12-05 10:07:55.185501404 +0000 UTC m=+0.095847141 container create c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0d6b1ff2-fb15-4efe-8688-22522d947bc8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:07:55 np0005546420.localdomain podman[312944]: 2025-12-05 10:07:55.136648005 +0000 UTC m=+0.046993812 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:07:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:55.251 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:55 np0005546420.localdomain systemd[1]: Started libpod-conmon-c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346.scope.
Dec 05 10:07:55 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:07:55 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17ca334ce918049378393258773efff677c3b861f34765f4bf6efb28c139947/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:07:55 np0005546420.localdomain podman[312944]: 2025-12-05 10:07:55.315639536 +0000 UTC m=+0.225985283 container init c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0d6b1ff2-fb15-4efe-8688-22522d947bc8, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:55 np0005546420.localdomain podman[312944]: 2025-12-05 10:07:55.324767196 +0000 UTC m=+0.235112933 container start c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0d6b1ff2-fb15-4efe-8688-22522d947bc8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:55 np0005546420.localdomain dnsmasq[312963]: started, version 2.85 cachesize 150
Dec 05 10:07:55 np0005546420.localdomain dnsmasq[312963]: DNS service limited to local subnets
Dec 05 10:07:55 np0005546420.localdomain dnsmasq[312963]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:07:55 np0005546420.localdomain dnsmasq[312963]: warning: no upstream servers configured
Dec 05 10:07:55 np0005546420.localdomain dnsmasq-dhcp[312963]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:07:55 np0005546420.localdomain dnsmasq[312963]: read /var/lib/neutron/dhcp/0d6b1ff2-fb15-4efe-8688-22522d947bc8/addn_hosts - 0 addresses
Dec 05 10:07:55 np0005546420.localdomain dnsmasq-dhcp[312963]: read /var/lib/neutron/dhcp/0d6b1ff2-fb15-4efe-8688-22522d947bc8/host
Dec 05 10:07:55 np0005546420.localdomain dnsmasq-dhcp[312963]: read /var/lib/neutron/dhcp/0d6b1ff2-fb15-4efe-8688-22522d947bc8/opts
Dec 05 10:07:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:55.394 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:07:54Z, description=, device_id=da08f82a-06a1-48d1-9ddf-1ff4fed633bf, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a03a100>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0534c0>], id=cebc1387-7c6f-46d6-8239-86b6c7428e11, ip_allocation=immediate, mac_address=fa:16:3e:f2:4b:e7, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:07:47Z, description=, dns_domain=, id=75b36aee-b9df-40d0-a03b-5c22192226ba, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1452372109-network, port_security_enabled=True, project_id=6225d95e0a924813958256bdb79de31f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42622, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1265, status=ACTIVE, subnets=['4411b5c1-cc65-40e6-96bf-9bfff22e8064'], tags=[], tenant_id=6225d95e0a924813958256bdb79de31f, updated_at=2025-12-05T10:07:50Z, vlan_transparent=None, network_id=75b36aee-b9df-40d0-a03b-5c22192226ba, port_security_enabled=False, project_id=6225d95e0a924813958256bdb79de31f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1301, status=DOWN, tags=[], tenant_id=6225d95e0a924813958256bdb79de31f, updated_at=2025-12-05T10:07:54Z on network 75b36aee-b9df-40d0-a03b-5c22192226ba
Dec 05 10:07:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:55.485 262769 INFO neutron.agent.dhcp.agent [None req-67ecef17-4526-418d-8daa-be83a701b2f7 - - - - - -] DHCP configuration for ports {'e6758fc8-42c4-41a4-a5ed-c4ca74018058'} is completed
Dec 05 10:07:55 np0005546420.localdomain dnsmasq[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/addn_hosts - 1 addresses
Dec 05 10:07:55 np0005546420.localdomain dnsmasq-dhcp[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/host
Dec 05 10:07:55 np0005546420.localdomain dnsmasq-dhcp[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/opts
Dec 05 10:07:55 np0005546420.localdomain podman[312981]: 2025-12-05 10:07:55.607734587 +0000 UTC m=+0.059293260 container kill 1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-75b36aee-b9df-40d0-a03b-5c22192226ba, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:07:55 np0005546420.localdomain ceph-mon[298353]: osdmap e126: 6 total, 6 up, 6 in
Dec 05 10:07:55 np0005546420.localdomain ceph-mon[298353]: pgmap v223: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:07:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:55.915 262769 INFO neutron.agent.dhcp.agent [None req-9d729b42-d8a1-4052-9395-73367069c6e1 - - - - - -] DHCP configuration for ports {'cebc1387-7c6f-46d6-8239-86b6c7428e11'} is completed
Dec 05 10:07:55 np0005546420.localdomain sshd[312844]: Invalid user ubuntu from 45.135.232.92 port 22130
Dec 05 10:07:56 np0005546420.localdomain podman[313019]: 2025-12-05 10:07:56.184063147 +0000 UTC m=+0.065319746 container kill c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0d6b1ff2-fb15-4efe-8688-22522d947bc8, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:07:56 np0005546420.localdomain dnsmasq[312963]: read /var/lib/neutron/dhcp/0d6b1ff2-fb15-4efe-8688-22522d947bc8/addn_hosts - 0 addresses
Dec 05 10:07:56 np0005546420.localdomain dnsmasq-dhcp[312963]: read /var/lib/neutron/dhcp/0d6b1ff2-fb15-4efe-8688-22522d947bc8/host
Dec 05 10:07:56 np0005546420.localdomain dnsmasq-dhcp[312963]: read /var/lib/neutron/dhcp/0d6b1ff2-fb15-4efe-8688-22522d947bc8/opts
Dec 05 10:07:56 np0005546420.localdomain systemd[1]: tmp-crun.My5mKk.mount: Deactivated successfully.
Dec 05 10:07:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:56.200 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:56 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:56Z|00145|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0
Dec 05 10:07:56 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:56Z|00146|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0
Dec 05 10:07:56 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:56Z|00147|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0
Dec 05 10:07:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:56.220 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:56.238 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:56.241 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:56.244 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:56.249 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:56.265 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:56 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:56.557 262769 INFO neutron.agent.dhcp.agent [None req-5af54613-6df5-4342-bc0d-149661223ab7 - - - - - -] DHCP configuration for ports {'e6758fc8-42c4-41a4-a5ed-c4ca74018058', 'f8e4a975-68d0-4028-a21b-fdf65ffc532b'} is completed
Dec 05 10:07:56 np0005546420.localdomain sshd[312844]: Connection reset by invalid user ubuntu 45.135.232.92 port 22130 [preauth]
Dec 05 10:07:57 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:57Z|00148|binding|INFO|Removing iface tapf8e4a975-68 ovn-installed in OVS
Dec 05 10:07:57 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:57.097 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 9f26bf02-dde7-4719-96be-6de6dd8c74ce with type ""
Dec 05 10:07:57 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:07:57Z|00149|binding|INFO|Removing lport f8e4a975-68d0-4028-a21b-fdf65ffc532b ovn-installed in OVS
Dec 05 10:07:57 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:57.099 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-0d6b1ff2-fb15-4efe-8688-22522d947bc8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d6b1ff2-fb15-4efe-8688-22522d947bc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b32afc0f-41ec-4de4-937f-23bec30428a6, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=f8e4a975-68d0-4028-a21b-fdf65ffc532b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:07:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:57.100 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:57 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:57.102 159503 INFO neutron.agent.ovn.metadata.agent [-] Port f8e4a975-68d0-4028-a21b-fdf65ffc532b in datapath 0d6b1ff2-fb15-4efe-8688-22522d947bc8 unbound from our chassis
Dec 05 10:07:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:57.105 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:57 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:57.106 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d6b1ff2-fb15-4efe-8688-22522d947bc8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:07:57 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:07:57.107 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[c30784f7-8422-4df6-bf21-1dcc6b6d2805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:07:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:57.160 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:57.185 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:57 np0005546420.localdomain podman[313059]: 2025-12-05 10:07:57.244835466 +0000 UTC m=+0.071035719 container kill c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0d6b1ff2-fb15-4efe-8688-22522d947bc8, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:07:57 np0005546420.localdomain systemd[1]: tmp-crun.s2zXyG.mount: Deactivated successfully.
Dec 05 10:07:57 np0005546420.localdomain dnsmasq[312963]: exiting on receipt of SIGTERM
Dec 05 10:07:57 np0005546420.localdomain systemd[1]: libpod-c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346.scope: Deactivated successfully.
Dec 05 10:07:57 np0005546420.localdomain podman[313074]: 2025-12-05 10:07:57.322820159 +0000 UTC m=+0.060243709 container died c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0d6b1ff2-fb15-4efe-8688-22522d947bc8, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 05 10:07:57 np0005546420.localdomain podman[313074]: 2025-12-05 10:07:57.351315093 +0000 UTC m=+0.088738573 container cleanup c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0d6b1ff2-fb15-4efe-8688-22522d947bc8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:07:57 np0005546420.localdomain systemd[1]: libpod-conmon-c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346.scope: Deactivated successfully.
Dec 05 10:07:57 np0005546420.localdomain podman[313075]: 2025-12-05 10:07:57.395813898 +0000 UTC m=+0.126158271 container remove c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0d6b1ff2-fb15-4efe-8688-22522d947bc8, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:07:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:57.409 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:57 np0005546420.localdomain kernel: device tapf8e4a975-68 left promiscuous mode
Dec 05 10:07:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:57.422 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:57.445 262769 INFO neutron.agent.dhcp.agent [None req-34144f92-2acf-4a30-aa47-05dd785631df - - - - - -] Synchronizing state
Dec 05 10:07:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:57.601 262769 INFO neutron.agent.dhcp.agent [None req-91762fde-1d92-4cb8-86da-8673aff5c808 - - - - - -] All active networks have been fetched through RPC.
Dec 05 10:07:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:57.602 262769 INFO neutron.agent.dhcp.agent [-] Starting network 0d6b1ff2-fb15-4efe-8688-22522d947bc8 dhcp configuration
Dec 05 10:07:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:57.603 262769 INFO neutron.agent.dhcp.agent [-] Finished network 0d6b1ff2-fb15-4efe-8688-22522d947bc8 dhcp configuration
Dec 05 10:07:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:57.604 262769 INFO neutron.agent.dhcp.agent [None req-91762fde-1d92-4cb8-86da-8673aff5c808 - - - - - -] Synchronizing state complete
Dec 05 10:07:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:07:57.706 262769 INFO neutron.agent.dhcp.agent [None req-53dcd7a2-0eda-4b48-a047-bd675c61dee5 - - - - - -] DHCP configuration for ports {'e6758fc8-42c4-41a4-a5ed-c4ca74018058'} is completed
Dec 05 10:07:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:57.924 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:57.935 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:07:58.074 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:07:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b17ca334ce918049378393258773efff677c3b861f34765f4bf6efb28c139947-merged.mount: Deactivated successfully.
Dec 05 10:07:58 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3e6c70aac5fb113892083f76491ccd66d323539edfdb6d3d39026ffa5105346-userdata-shm.mount: Deactivated successfully.
Dec 05 10:07:58 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d0d6b1ff2\x2dfb15\x2d4efe\x2d8688\x2d22522d947bc8.mount: Deactivated successfully.
Dec 05 10:07:58 np0005546420.localdomain ceph-mon[298353]: pgmap v224: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Dec 05 10:07:59 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:07:59.144 2 INFO neutron.agent.securitygroups_rpc [None req-14d2ee81-d08b-413d-82bf-6598bbd8e1f9 aa5c05ef703b4c5c829b56913fd95190 4b28cfa3b851441a981f2fa213cf5388 - - default default] Security group member updated ['b0dca337-aa85-43fc-b2b1-0bd096c3b725']
Dec 05 10:07:59 np0005546420.localdomain sudo[313100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:07:59 np0005546420.localdomain sudo[313100]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:07:59 np0005546420.localdomain sudo[313100]: pam_unix(sudo:session): session closed for user root
Dec 05 10:07:59 np0005546420.localdomain sudo[313118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:07:59 np0005546420.localdomain sudo[313118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:08:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:00 np0005546420.localdomain sudo[313118]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:00.286 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:00 np0005546420.localdomain ceph-mon[298353]: pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Dec 05 10:08:00 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:08:00 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:08:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e127 e127: 6 total, 6 up, 6 in
Dec 05 10:08:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:01.203 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:01 np0005546420.localdomain sudo[313170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:08:01 np0005546420.localdomain sudo[313170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:08:01 np0005546420.localdomain sudo[313170]: pam_unix(sudo:session): session closed for user root
Dec 05 10:08:01 np0005546420.localdomain ceph-mon[298353]: osdmap e127: 6 total, 6 up, 6 in
Dec 05 10:08:01 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:08:01 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:08:01 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:08:01 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:08:02 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:02.280 2 INFO neutron.agent.securitygroups_rpc [None req-1102cf77-141c-44b1-a68f-102edd6b04af aa5c05ef703b4c5c829b56913fd95190 4b28cfa3b851441a981f2fa213cf5388 - - default default] Security group member updated ['b0dca337-aa85-43fc-b2b1-0bd096c3b725']
Dec 05 10:08:02 np0005546420.localdomain ceph-mon[298353]: pgmap v227: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 4.1 KiB/s wr, 52 op/s
Dec 05 10:08:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:08:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2912266881' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:08:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:08:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2912266881' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:08:03 np0005546420.localdomain ceph-mon[298353]: pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 3.6 KiB/s wr, 46 op/s
Dec 05 10:08:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2912266881' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:08:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2912266881' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:08:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:04.127 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:08:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:04.128 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:08:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:04.128 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:08:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:08:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:08:04 np0005546420.localdomain podman[313189]: 2025-12-05 10:08:04.534590262 +0000 UTC m=+0.100010399 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:08:04 np0005546420.localdomain systemd[1]: tmp-crun.EYGec3.mount: Deactivated successfully.
Dec 05 10:08:04 np0005546420.localdomain podman[313188]: 2025-12-05 10:08:04.585261736 +0000 UTC m=+0.150933451 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:08:04 np0005546420.localdomain podman[313189]: 2025-12-05 10:08:04.599126872 +0000 UTC m=+0.164547059 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:08:04 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:08:04 np0005546420.localdomain podman[313188]: 2025-12-05 10:08:04.631488114 +0000 UTC m=+0.197159899 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, name=ubi9-minimal)
Dec 05 10:08:04 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:08:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:05.289 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:06.254 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:06 np0005546420.localdomain ceph-mon[298353]: pgmap v229: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 39 op/s
Dec 05 10:08:06 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:08:08 np0005546420.localdomain ceph-mon[298353]: pgmap v230: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:08:09 np0005546420.localdomain podman[313232]: 2025-12-05 10:08:09.493619528 +0000 UTC m=+0.073348691 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 05 10:08:09 np0005546420.localdomain podman[313232]: 2025-12-05 10:08:09.541499226 +0000 UTC m=+0.121228449 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:08:09 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:08:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:10.292 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:10 np0005546420.localdomain ceph-mon[298353]: pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:10.615 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:08:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:10.616 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:10.618 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:08:10 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:10.771 2 INFO neutron.agent.securitygroups_rpc [None req-47d393f7-6bb2-4cd6-87e1-92a7bd760d87 866ff8446ba1414ca637bc2541e2b20c 1e403462f5fd4d6cbcd026f0f727dd2a - - default default] Security group member updated ['5c745050-d1d8-421b-8aa7-80574a4f3dcd']
Dec 05 10:08:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:10.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:10.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:08:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:11.283 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:11 np0005546420.localdomain ceph-mon[298353]: pgmap v232: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:12 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:12.414 2 INFO neutron.agent.securitygroups_rpc [None req-81969956-331a-4777-bdc3-13cdd435ac12 866ff8446ba1414ca637bc2541e2b20c 1e403462f5fd4d6cbcd026f0f727dd2a - - default default] Security group member updated ['5c745050-d1d8-421b-8aa7-80574a4f3dcd']
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:08:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:08:13 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:13.768 262769 INFO neutron.agent.linux.ip_lib [None req-7e5c8dc0-7f5a-491f-9f59-dd764af1ce0a - - - - - -] Device tap661ae74e-8b cannot be used as it has no MAC address
Dec 05 10:08:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:13.817 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:13 np0005546420.localdomain kernel: device tap661ae74e-8b entered promiscuous mode
Dec 05 10:08:13 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929293.8285] manager: (tap661ae74e-8b): new Generic device (/org/freedesktop/NetworkManager/Devices/29)
Dec 05 10:08:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:13Z|00150|binding|INFO|Claiming lport 661ae74e-8bdb-415a-b57d-2b7a8f070f74 for this chassis.
Dec 05 10:08:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:13.830 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:13Z|00151|binding|INFO|661ae74e-8bdb-415a-b57d-2b7a8f070f74: Claiming unknown
Dec 05 10:08:13 np0005546420.localdomain systemd-udevd[313266]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:08:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:13.845 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe42:75d0/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-1b1e26af-c55f-4efa-b472-1a071382b77b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b1e26af-c55f-4efa-b472-1a071382b77b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8da57e2736240a0ac7055e85adea6da', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e5358a6-29ac-4753-9a8b-f5335383b65d, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=661ae74e-8bdb-415a-b57d-2b7a8f070f74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:08:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:13.849 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 661ae74e-8bdb-415a-b57d-2b7a8f070f74 in datapath 1b1e26af-c55f-4efa-b472-1a071382b77b bound to our chassis
Dec 05 10:08:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:13.851 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4e9ecdf6-e3de-44d4-af01-6c1ce3fc12b7 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:08:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:13.852 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1b1e26af-c55f-4efa-b472-1a071382b77b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:08:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:13.852 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe0f950-6bf3-464d-87a1-e1e4d051fd56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:08:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap661ae74e-8b: No such device
Dec 05 10:08:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:13Z|00152|binding|INFO|Setting lport 661ae74e-8bdb-415a-b57d-2b7a8f070f74 ovn-installed in OVS
Dec 05 10:08:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:13Z|00153|binding|INFO|Setting lport 661ae74e-8bdb-415a-b57d-2b7a8f070f74 up in Southbound
Dec 05 10:08:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap661ae74e-8b: No such device
Dec 05 10:08:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:13.874 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:13.875 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap661ae74e-8b: No such device
Dec 05 10:08:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap661ae74e-8b: No such device
Dec 05 10:08:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap661ae74e-8b: No such device
Dec 05 10:08:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap661ae74e-8b: No such device
Dec 05 10:08:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap661ae74e-8b: No such device
Dec 05 10:08:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:13.906 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap661ae74e-8b: No such device
Dec 05 10:08:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:13.931 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:14 np0005546420.localdomain ceph-mon[298353]: pgmap v233: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:14 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:14.586 2 INFO neutron.agent.securitygroups_rpc [None req-d79b5d17-8f31-461a-b302-40e91cde7849 a052c73754704caaa399378c7e50192a f8da57e2736240a0ac7055e85adea6da - - default default] Security group member updated ['4a643f0b-9f81-4463-adc9-4f8f421f9506']
Dec 05 10:08:14 np0005546420.localdomain podman[313337]: 2025-12-05 10:08:14.759499137 +0000 UTC m=+0.094048387 container create 5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b1e26af-c55f-4efa-b472-1a071382b77b, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:08:14 np0005546420.localdomain systemd[1]: Started libpod-conmon-5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04.scope.
Dec 05 10:08:14 np0005546420.localdomain podman[313337]: 2025-12-05 10:08:14.712581647 +0000 UTC m=+0.047130937 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:08:14 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:08:14 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c973d092944e1a82c208515088fe33fcec16679368f69f3ce73c8c25449cf1cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:14 np0005546420.localdomain podman[313337]: 2025-12-05 10:08:14.843513544 +0000 UTC m=+0.178062804 container init 5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b1e26af-c55f-4efa-b472-1a071382b77b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:08:14 np0005546420.localdomain podman[313337]: 2025-12-05 10:08:14.852321954 +0000 UTC m=+0.186871204 container start 5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b1e26af-c55f-4efa-b472-1a071382b77b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:08:14 np0005546420.localdomain dnsmasq[313355]: started, version 2.85 cachesize 150
Dec 05 10:08:14 np0005546420.localdomain dnsmasq[313355]: DNS service limited to local subnets
Dec 05 10:08:14 np0005546420.localdomain dnsmasq[313355]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:08:14 np0005546420.localdomain dnsmasq[313355]: warning: no upstream servers configured
Dec 05 10:08:14 np0005546420.localdomain dnsmasq[313355]: read /var/lib/neutron/dhcp/1b1e26af-c55f-4efa-b472-1a071382b77b/addn_hosts - 0 addresses
Dec 05 10:08:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:14.869 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:14.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:14.870 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:08:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:14.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:08:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:14.889 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:08:14 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:14.918 262769 INFO neutron.agent.dhcp.agent [None req-7e5c8dc0-7f5a-491f-9f59-dd764af1ce0a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:08:13Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a07a460>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a07afa0>], id=ba236b3d-a834-4efa-88ff-bafc68a1461f, ip_allocation=immediate, mac_address=fa:16:3e:a7:58:24, name=tempest-NetworksIpV6TestAttrs-462304080, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:08:10Z, description=, dns_domain=, id=1b1e26af-c55f-4efa-b472-1a071382b77b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksIpV6TestAttrs-test-network-536501617, port_security_enabled=True, project_id=f8da57e2736240a0ac7055e85adea6da, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17802, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1418, status=ACTIVE, subnets=['9d084559-387e-40b2-9330-35fedbbfe96b'], tags=[], tenant_id=f8da57e2736240a0ac7055e85adea6da, updated_at=2025-12-05T10:08:12Z, vlan_transparent=None, network_id=1b1e26af-c55f-4efa-b472-1a071382b77b, port_security_enabled=True, project_id=f8da57e2736240a0ac7055e85adea6da, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['4a643f0b-9f81-4463-adc9-4f8f421f9506'], standard_attr_id=1436, status=DOWN, tags=[], tenant_id=f8da57e2736240a0ac7055e85adea6da, updated_at=2025-12-05T10:08:13Z on network 1b1e26af-c55f-4efa-b472-1a071382b77b
Dec 05 10:08:14 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:14.990 262769 INFO neutron.agent.dhcp.agent [None req-9f7f6193-b938-40a1-ad5e-5c7e21e48a1e - - - - - -] DHCP configuration for ports {'617783c9-5f93-4d8b-b79f-b1fdc05e164b'} is completed
Dec 05 10:08:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:15 np0005546420.localdomain dnsmasq[313355]: read /var/lib/neutron/dhcp/1b1e26af-c55f-4efa-b472-1a071382b77b/addn_hosts - 1 addresses
Dec 05 10:08:15 np0005546420.localdomain podman[313374]: 2025-12-05 10:08:15.112185885 +0000 UTC m=+0.057415572 container kill 5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b1e26af-c55f-4efa-b472-1a071382b77b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:08:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:15.294 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:15.381 262769 INFO neutron.agent.dhcp.agent [None req-e5b485e1-3d1f-47f9-9223-2227080fb05b - - - - - -] DHCP configuration for ports {'ba236b3d-a834-4efa-88ff-bafc68a1461f'} is completed
Dec 05 10:08:15 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:15.620 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
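
The DbSetCommand above is ovsdbapp queuing an OVSDB update (merging a key into Chassis_Private external_ids) inside a transaction. A rough sketch of the same call pattern, assuming an already-constructed ovsdbapp API object (building one requires a schema helper and connection, omitted here):

    # Rough equivalent of the logged DbSetCommand; `api` is a pre-built
    # ovsdbapp southbound API object (construction omitted).
    def bump_sb_cfg(api, chassis_record, value):
        with api.transaction(check_error=True) as txn:
            txn.add(api.db_set(
                'Chassis_Private', chassis_record,
                ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(value)})))
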
Dec 05 10:08:15 np0005546420.localdomain dnsmasq[313355]: exiting on receipt of SIGTERM
Dec 05 10:08:15 np0005546420.localdomain podman[313412]: 2025-12-05 10:08:15.726896293 +0000 UTC m=+0.062041734 container kill 5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b1e26af-c55f-4efa-b472-1a071382b77b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:08:15 np0005546420.localdomain systemd[1]: libpod-5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04.scope: Deactivated successfully.
Dec 05 10:08:15 np0005546420.localdomain podman[313426]: 2025-12-05 10:08:15.80309786 +0000 UTC m=+0.065022445 container died 5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b1e26af-c55f-4efa-b472-1a071382b77b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:08:15 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04-userdata-shm.mount: Deactivated successfully.
Dec 05 10:08:15 np0005546420.localdomain podman[313426]: 2025-12-05 10:08:15.839235389 +0000 UTC m=+0.101159974 container cleanup 5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b1e26af-c55f-4efa-b472-1a071382b77b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:08:15 np0005546420.localdomain systemd[1]: libpod-conmon-5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04.scope: Deactivated successfully.
Dec 05 10:08:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:08:15 np0005546420.localdomain podman[313433]: 2025-12-05 10:08:15.896430853 +0000 UTC m=+0.145668029 container remove 5b144cde17106237a9c3cbd8032531cce4dfc72b550b3d3f81f721728a5e7c04 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1b1e26af-c55f-4efa-b472-1a071382b77b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:08:15 np0005546420.localdomain kernel: device tap661ae74e-8b left promiscuous mode
Dec 05 10:08:15 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:15Z|00154|binding|INFO|Releasing lport 661ae74e-8bdb-415a-b57d-2b7a8f070f74 from this chassis (sb_readonly=0)
Dec 05 10:08:15 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:15Z|00155|binding|INFO|Setting lport 661ae74e-8bdb-415a-b57d-2b7a8f070f74 down in Southbound
Dec 05 10:08:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:15.914 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:15 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:15.923 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe42:75d0/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-1b1e26af-c55f-4efa-b472-1a071382b77b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b1e26af-c55f-4efa-b472-1a071382b77b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8da57e2736240a0ac7055e85adea6da', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e5358a6-29ac-4753-9a8b-f5335383b65d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=661ae74e-8bdb-415a-b57d-2b7a8f070f74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:08:15 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:15.925 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 661ae74e-8bdb-415a-b57d-2b7a8f070f74 in datapath 1b1e26af-c55f-4efa-b472-1a071382b77b unbound from our chassis
Dec 05 10:08:15 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:15.928 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1b1e26af-c55f-4efa-b472-1a071382b77b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:08:15 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:15.929 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[83990e4e-dccb-4726-b80c-3f1b599d6909]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
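
The "Matched UPDATE ... PortBindingUpdatedEvent" entry above is ovsdbapp's event matcher dispatching a Port_Binding row change to the metadata agent, which then tears the namespace down. A skeletal version of such an event class, simplified from the pattern the log implies (the run() body is illustrative):

    # Skeletal ovsdbapp row event, modelled on the matcher output above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to any Port_Binding row; no extra conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # `old` carries only the columns that changed.
            if hasattr(old, 'chassis'):
                print(f'chassis changed for lport {row.logical_port}')
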
Dec 05 10:08:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:15.931 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:15 np0005546420.localdomain podman[313453]: 2025-12-05 10:08:15.96442277 +0000 UTC m=+0.091973563 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 05 10:08:15 np0005546420.localdomain podman[313453]: 2025-12-05 10:08:15.978391638 +0000 UTC m=+0.105942431 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:08:15 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:08:16 np0005546420.localdomain ceph-mon[298353]: pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:16.319 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:16.355 262769 INFO neutron.agent.dhcp.agent [None req-c2973794-3683-4c62-bfa5-6a06ff9cf027 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:16 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c973d092944e1a82c208515088fe33fcec16679368f69f3ce73c8c25449cf1cb-merged.mount: Deactivated successfully.
Dec 05 10:08:16 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d1b1e26af\x2dc55f\x2d4efa\x2db472\x2d1a071382b77b.mount: Deactivated successfully.
Dec 05 10:08:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:17.000 262769 INFO neutron.agent.linux.ip_lib [None req-c61280b1-bc5a-4ad4-8860-26834ebb1019 - - - - - -] Device tap378e2d2a-fa cannot be used as it has no MAC address
Dec 05 10:08:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:17.024 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:17 np0005546420.localdomain kernel: device tap378e2d2a-fa entered promiscuous mode
Dec 05 10:08:17 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929297.0345] manager: (tap378e2d2a-fa): new Generic device (/org/freedesktop/NetworkManager/Devices/30)
Dec 05 10:08:17 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:17Z|00156|binding|INFO|Claiming lport 378e2d2a-fa97-4716-8485-1515051a81f0 for this chassis.
Dec 05 10:08:17 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:17Z|00157|binding|INFO|378e2d2a-fa97-4716-8485-1515051a81f0: Claiming unknown
Dec 05 10:08:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:17.037 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:17 np0005546420.localdomain systemd-udevd[313484]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:08:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:17.047 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90e0d4d8-d783-4708-8bda-a067618fd840, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=378e2d2a-fa97-4716-8485-1515051a81f0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:08:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:17.049 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 378e2d2a-fa97-4716-8485-1515051a81f0 in datapath e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5 bound to our chassis
Dec 05 10:08:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:17.051 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:08:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:17.052 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[f08a4674-00eb-4cab-90a8-c9a4111a0cf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:08:17 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap378e2d2a-fa: No such device
Dec 05 10:08:17 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap378e2d2a-fa: No such device
Dec 05 10:08:17 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:17Z|00158|binding|INFO|Setting lport 378e2d2a-fa97-4716-8485-1515051a81f0 ovn-installed in OVS
Dec 05 10:08:17 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:17Z|00159|binding|INFO|Setting lport 378e2d2a-fa97-4716-8485-1515051a81f0 up in Southbound
Dec 05 10:08:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:17.067 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:17 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap378e2d2a-fa: No such device
Dec 05 10:08:17 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap378e2d2a-fa: No such device
Dec 05 10:08:17 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap378e2d2a-fa: No such device
Dec 05 10:08:17 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap378e2d2a-fa: No such device
Dec 05 10:08:17 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap378e2d2a-fa: No such device
Dec 05 10:08:17 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap378e2d2a-fa: No such device
Dec 05 10:08:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:17.116 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:17.150 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:08:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:08:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:08:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:08:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:08:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18728 "" "Go-http-client/1.1"
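
The two podman[240363] entries are libpod REST API hits over the local socket (the Go-http-client user agent suggests a metrics scraper). The containers/json query can be replayed by hand; a sketch using the third-party requests-unixsocket package, with the default root socket path assumed:

    # Replaying the logged libpod query; the socket path is the usual default
    # and may differ per deployment. Requires the requests-unixsocket package.
    import requests_unixsocket

    session = requests_unixsocket.Session()
    sock = '%2Frun%2Fpodman%2Fpodman.sock'  # URL-encoded /run/podman/podman.sock
    resp = session.get(f'http+unix://{sock}/v4.9.3/libpod/containers/json',
                       params={'all': 'true'})
    for ctr in resp.json():
        print(ctr['Id'][:12], ctr['Names'])
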
Dec 05 10:08:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e128 e128: 6 total, 6 up, 6 in
Dec 05 10:08:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:17.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:18 np0005546420.localdomain podman[313555]: 2025-12-05 10:08:18.166163062 +0000 UTC m=+0.097485782 container create f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:08:18 np0005546420.localdomain systemd[1]: Started libpod-conmon-f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337.scope.
Dec 05 10:08:18 np0005546420.localdomain podman[313555]: 2025-12-05 10:08:18.122091549 +0000 UTC m=+0.053414299 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:08:18 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:08:18 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/441f7d9cf0179a14e9e2b89dc9b6affb54fad773b845c24ba289aa8c0f30be83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:18 np0005546420.localdomain podman[313555]: 2025-12-05 10:08:18.240471811 +0000 UTC m=+0.171794521 container init f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 10:08:18 np0005546420.localdomain podman[313555]: 2025-12-05 10:08:18.255079288 +0000 UTC m=+0.186402008 container start f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:08:18 np0005546420.localdomain dnsmasq[313573]: started, version 2.85 cachesize 150
Dec 05 10:08:18 np0005546420.localdomain dnsmasq[313573]: DNS service limited to local subnets
Dec 05 10:08:18 np0005546420.localdomain dnsmasq[313573]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:08:18 np0005546420.localdomain dnsmasq[313573]: warning: no upstream servers configured
Dec 05 10:08:18 np0005546420.localdomain dnsmasq-dhcp[313573]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:08:18 np0005546420.localdomain dnsmasq[313573]: read /var/lib/neutron/dhcp/e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5/addn_hosts - 0 addresses
Dec 05 10:08:18 np0005546420.localdomain dnsmasq-dhcp[313573]: read /var/lib/neutron/dhcp/e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5/host
Dec 05 10:08:18 np0005546420.localdomain dnsmasq-dhcp[313573]: read /var/lib/neutron/dhcp/e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5/opts
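
dnsmasq is reading the per-network files the DHCP agent renders under /var/lib/neutron/dhcp/<network_id>/ (addn_hosts, host, opts, as logged above). A small sketch to inspect those files for the network in question; it needs read access to the agent's state directory:

    # Dump the per-network dnsmasq files referenced in the entries above.
    from pathlib import Path

    net_dir = Path('/var/lib/neutron/dhcp/e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5')
    for name in ('addn_hosts', 'host', 'opts'):
        f = net_dir / name
        if f.exists():
            print(f'--- {name} ---')
            print(f.read_text())
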
Dec 05 10:08:18 np0005546420.localdomain ceph-mon[298353]: pgmap v235: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:18 np0005546420.localdomain ceph-mon[298353]: osdmap e128: 6 total, 6 up, 6 in
Dec 05 10:08:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e129 e129: 6 total, 6 up, 6 in
Dec 05 10:08:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:18.438 262769 INFO neutron.agent.dhcp.agent [None req-43475613-041b-46dc-8a40-aa36ed90d161 - - - - - -] DHCP configuration for ports {'5129b1f2-d589-4cc6-ae19-0af61ce172b9'} is completed
Dec 05 10:08:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:18.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:08:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:08:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:08:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:08:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:08:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:08:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:08:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:08:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:08:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
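
These exporter errors repeat because the collector cannot find control sockets for ovn-northd and the OVS DB server on this node (plausible on a compute host, where ovn-northd does not run). A quick hedged check for the socket files; the run directories here are common defaults and may differ per deployment:

    # Look for the control sockets the exporter complains about.
    # Run-directory paths are assumptions (common defaults).
    import glob

    for pattern in ('/var/run/ovn/ovn-northd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'not found')
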
Dec 05 10:08:19 np0005546420.localdomain ceph-mon[298353]: osdmap e129: 6 total, 6 up, 6 in
Dec 05 10:08:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3255465608' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:08:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:08:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 600.0 total, 600.0 interval
                                                           Cumulative writes: 2245 writes, 23K keys, 2245 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.07 MB/s
                                                           Cumulative WAL: 2245 writes, 2245 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 2245 writes, 23K keys, 2245 commit groups, 1.0 writes per commit group, ingest: 42.90 MB, 0.07 MB/s
                                                           Interval WAL: 2245 writes, 2245 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    142.1      0.23              0.07        10    0.023       0      0       0.0       0.0
                                                             L6      1/0   15.74 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.6    187.0    167.9      0.92              0.35         9    0.102    107K   4516       0.0       0.0
                                                            Sum      1/0   15.74 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.6    149.0    162.6      1.15              0.42        19    0.061    107K   4516       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.6    149.5    163.2      1.15              0.42        18    0.064    107K   4516       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    187.0    167.9      0.92              0.35         9    0.102    107K   4516       0.0       0.0
                                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    144.7      0.23              0.07         9    0.026       0      0       0.0       0.0
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 600.0 total, 600.0 interval
                                                           Flush(GB): cumulative 0.032, interval 0.032
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.18 GB write, 0.31 MB/s write, 0.17 GB read, 0.29 MB/s read, 1.2 seconds
                                                           Interval compaction: 0.18 GB write, 0.31 MB/s write, 0.17 GB read, 0.29 MB/s read, 1.1 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x557fb868b350#2 capacity: 308.00 MB usage: 19.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000197 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(883,18.44 MB,5.98853%) FilterBlock(19,329.80 KB,0.104567%) IndexBlock(19,429.77 KB,0.136264%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
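
The throughput figures in the DB Stats block are internally consistent and can be spot-checked: the interval ingest volume divided by the 600-second interval reproduces the reported rate.

    # Spot-check of the rocksdb stats above.
    interval_s = 600.0
    ingest_mb = 42.90          # "Interval writes ... ingest: 42.90 MB"
    print(round(ingest_mb / interval_s, 2), 'MB/s')  # -> 0.07, as logged
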
Dec 05 10:08:20 np0005546420.localdomain dnsmasq[313573]: read /var/lib/neutron/dhcp/e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5/addn_hosts - 0 addresses
Dec 05 10:08:20 np0005546420.localdomain dnsmasq-dhcp[313573]: read /var/lib/neutron/dhcp/e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5/host
Dec 05 10:08:20 np0005546420.localdomain podman[313591]: 2025-12-05 10:08:20.027002896 +0000 UTC m=+0.078560521 container kill f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:08:20 np0005546420.localdomain dnsmasq-dhcp[313573]: read /var/lib/neutron/dhcp/e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5/opts
Dec 05 10:08:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:20.342 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:20 np0005546420.localdomain ceph-mon[298353]: pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/28325255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:08:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:20.434 262769 INFO neutron.agent.dhcp.agent [None req-99977279-cefa-4bb2-ac93-3901da2f8a94 - - - - - -] DHCP configuration for ports {'378e2d2a-fa97-4716-8485-1515051a81f0', '5129b1f2-d589-4cc6-ae19-0af61ce172b9'} is completed
Dec 05 10:08:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:20.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:20.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:20.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.066 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.066 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.067 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
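
The acquire/release pairs above come from oslo.concurrency's lock machinery, which Nova wraps around resource-tracker methods. The decorator form, in a minimal sketch with an illustrative function body:

    # Minimal sketch of the "compute_resources" lock seen above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs under the same in-process lock name the tracker logs.
        pass

    clean_compute_node_cache()
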
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.067 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:08:21 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:21.071 2 INFO neutron.agent.securitygroups_rpc [None req-89312cbd-ae39-4130-8ecf-dec06673918d a052c73754704caaa399378c7e50192a f8da57e2736240a0ac7055e85adea6da - - default default] Security group member updated ['4a643f0b-9f81-4463-adc9-4f8f421f9506']
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.068 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:08:21 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:21Z|00160|binding|INFO|Removing iface tap378e2d2a-fa ovn-installed in OVS
Dec 05 10:08:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:21.173 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port cdc067a2-0bb5-44a5-8dfc-73dbf4455fc7 with type ""
Dec 05 10:08:21 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:21Z|00161|binding|INFO|Removing lport 378e2d2a-fa97-4716-8485-1515051a81f0 ovn-installed in OVS
Dec 05 10:08:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:21.174 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90e0d4d8-d783-4708-8bda-a067618fd840, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=378e2d2a-fa97-4716-8485-1515051a81f0) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:08:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:21.177 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 378e2d2a-fa97-4716-8485-1515051a81f0 in datapath e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5 unbound from our chassis
Dec 05 10:08:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:21.180 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:08:21 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:21.181 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[7fff561d-050a-477f-886a-527f64e085eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.183 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:21 np0005546420.localdomain dnsmasq[313573]: exiting on receipt of SIGTERM
Dec 05 10:08:21 np0005546420.localdomain systemd[1]: libpod-f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337.scope: Deactivated successfully.
Dec 05 10:08:21 np0005546420.localdomain podman[313649]: 2025-12-05 10:08:21.31153334 +0000 UTC m=+0.059065663 container kill f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.323 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2819140810' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:08:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2819140810' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:08:21 np0005546420.localdomain podman[313661]: 2025-12-05 10:08:21.391168414 +0000 UTC m=+0.063994875 container died f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:08:21 np0005546420.localdomain systemd[1]: tmp-crun.7qSXmv.mount: Deactivated successfully.
Dec 05 10:08:21 np0005546420.localdomain podman[313661]: 2025-12-05 10:08:21.485997233 +0000 UTC m=+0.158823654 container cleanup f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:08:21 np0005546420.localdomain systemd[1]: libpod-conmon-f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337.scope: Deactivated successfully.
Dec 05 10:08:21 np0005546420.localdomain podman[313668]: 2025-12-05 10:08:21.511793364 +0000 UTC m=+0.170251134 container remove f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.524 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:21 np0005546420.localdomain kernel: device tap378e2d2a-fa left promiscuous mode
Dec 05 10:08:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:08:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/842063857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.542 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.556 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
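
The entries at 10:08:21.068 and 10:08:21.556 bracket a ceph df subprocess that completed in 0.488 s. The oslo.concurrency call producing that pair looks roughly like this (command, client id, and conf path copied from the log; it only runs against a reachable cluster with the client.openstack keyring):

    # Rough equivalent of the logged "ceph df" round trip.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(json.loads(out)['stats']['total_avail_bytes'])
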
Dec 05 10:08:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:21.562 262769 INFO neutron.agent.dhcp.agent [None req-91762fde-1d92-4cb8-86da-8673aff5c808 - - - - - -] Synchronizing state
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.699 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.700 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11690MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.701 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.701 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:08:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:21.747 262769 INFO neutron.agent.dhcp.agent [None req-1068a171-1e25-4f6b-ad42-ab36022961d5 - - - - - -] All active networks have been fetched through RPC.
Dec 05 10:08:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:21.748 262769 INFO neutron.agent.dhcp.agent [-] Starting network 1b1e26af-c55f-4efa-b472-1a071382b77b dhcp configuration
Dec 05 10:08:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:21.750 262769 INFO neutron.agent.dhcp.agent [-] Finished network 1b1e26af-c55f-4efa-b472-1a071382b77b dhcp configuration
Dec 05 10:08:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:21.750 262769 INFO neutron.agent.dhcp.agent [-] Starting network 72f0e76a-f297-4ffa-ae65-9b4a7eec2da6 dhcp configuration
Dec 05 10:08:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:21.751 262769 INFO neutron.agent.dhcp.agent [-] Finished network 72f0e76a-f297-4ffa-ae65-9b4a7eec2da6 dhcp configuration
Dec 05 10:08:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:21.751 262769 INFO neutron.agent.dhcp.agent [-] Starting network a27b6b20-a100-421d-a4e7-3180d0af0f28 dhcp configuration
Dec 05 10:08:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:21.751 262769 INFO neutron.agent.dhcp.agent [-] Finished network a27b6b20-a100-421d-a4e7-3180d0af0f28 dhcp configuration
Dec 05 10:08:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:21.752 262769 INFO neutron.agent.dhcp.agent [-] Starting network e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5 dhcp configuration
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.779 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.780 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:08:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:21.801 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:08:22 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:08:22 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2636451331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:08:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:22.277 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:08:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:22.283 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:08:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:22.304 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
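
The inventory dict above fixes what placement will schedule on this node; effective capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

    # Effective capacity implied by the inventory entry above.
    inv = {
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'DISK_GB':   {'total': 41,    'reserved': 1,   'allocation_ratio': 1.0},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> MEMORY_MB 15226.0, VCPU 128.0, DISK_GB 40.0
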
Dec 05 10:08:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:22.307 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:08:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:22.308 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:08:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-441f7d9cf0179a14e9e2b89dc9b6affb54fad773b845c24ba289aa8c0f30be83-merged.mount: Deactivated successfully.
Dec 05 10:08:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f49bb1bf8009b9afe09f31f7490a78e90ddc2d55d2b1465406036ddd04bdd337-userdata-shm.mount: Deactivated successfully.
Dec 05 10:08:22 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2de2ec277e\x2d5ad2\x2d4ca0\x2da1eb\x2db591c41a0ef5.mount: Deactivated successfully.
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.369 262769 INFO neutron.agent.dhcp.agent [None req-391395c9-b1e3-45d9-9b82-efc75715e7e6 - - - - - -] Finished network e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5 dhcp configuration
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.370 262769 INFO neutron.agent.dhcp.agent [None req-1068a171-1e25-4f6b-ad42-ab36022961d5 - - - - - -] Synchronizing state complete
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.371 262769 INFO neutron.agent.dhcp.agent [None req-1068a171-1e25-4f6b-ad42-ab36022961d5 - - - - - -] Synchronizing state
Dec 05 10:08:22 np0005546420.localdomain ceph-mon[298353]: pgmap v239: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 767 B/s wr, 5 op/s
Dec 05 10:08:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/842063857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:08:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2636451331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.463 262769 INFO neutron.agent.dhcp.agent [None req-4cb334e6-0d30-48d7-8016-f3c2f6f9fa96 - - - - - -] DHCP configuration for ports {'5129b1f2-d589-4cc6-ae19-0af61ce172b9'} is completed
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.498 262769 INFO neutron.agent.dhcp.agent [None req-5f5be37f-f2ad-4772-b524-6f9efea16c83 - - - - - -] All active networks have been fetched through RPC.
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.499 262769 INFO neutron.agent.dhcp.agent [-] Starting network 1b1e26af-c55f-4efa-b472-1a071382b77b dhcp configuration
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.499 262769 INFO neutron.agent.dhcp.agent [-] Finished network 1b1e26af-c55f-4efa-b472-1a071382b77b dhcp configuration
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.499 262769 INFO neutron.agent.dhcp.agent [-] Starting network 72f0e76a-f297-4ffa-ae65-9b4a7eec2da6 dhcp configuration
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.500 262769 INFO neutron.agent.dhcp.agent [-] Finished network 72f0e76a-f297-4ffa-ae65-9b4a7eec2da6 dhcp configuration
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.500 262769 INFO neutron.agent.dhcp.agent [-] Starting network a27b6b20-a100-421d-a4e7-3180d0af0f28 dhcp configuration
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.500 262769 INFO neutron.agent.dhcp.agent [-] Finished network a27b6b20-a100-421d-a4e7-3180d0af0f28 dhcp configuration
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.501 262769 INFO neutron.agent.dhcp.agent [-] Starting network e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5 dhcp configuration
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.501 262769 INFO neutron.agent.dhcp.agent [-] Finished network e2ec277e-5ad2-4ca0-a1eb-b591c41a0ef5 dhcp configuration
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.501 262769 INFO neutron.agent.dhcp.agent [None req-5f5be37f-f2ad-4772-b524-6f9efea16c83 - - - - - -] Synchronizing state complete
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.576 262769 INFO neutron.agent.dhcp.agent [None req-88465355-74d7-478e-a1cf-b736b01c0fc1 - - - - - -] DHCP configuration for ports {'5129b1f2-d589-4cc6-ae19-0af61ce172b9'} is completed
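[annotation] The DHCP-agent lines above trace one full resync: "Synchronizing state", an RPC fetch of active networks, a Starting/Finished pair per network, then "Synchronizing state complete". An illustrative-only sketch of that loop follows; the class and method names are hypothetical stand-ins, not neutron's real agent code:

import logging

LOG = logging.getLogger(__name__)


class DhcpAgentSketch:
    def __init__(self, rpc_client):
        # rpc_client is assumed to expose get_active_networks().
        self.rpc = rpc_client

    def sync_state(self):
        LOG.info('Synchronizing state')
        networks = self.rpc.get_active_networks()
        LOG.info('All active networks have been fetched through RPC.')
        for net_id in networks:
            LOG.info('Starting network %s dhcp configuration', net_id)
            try:
                self.configure_dhcp_for_network(net_id)
            finally:
                LOG.info('Finished network %s dhcp configuration', net_id)
        LOG.info('Synchronizing state complete')

    def configure_dhcp_for_network(self, net_id):
        pass  # spawn or refresh the per-network dnsmasq (see later annotation)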
Dec 05 10:08:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:22.855 262769 INFO neutron.agent.dhcp.agent [None req-861a926d-aae6-49fa-a598-f6e8c4773054 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:22 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:22.914 2 INFO neutron.agent.securitygroups_rpc [None req-42abf360-ac08-4368-8817-586fcc6d5d65 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']
Dec 05 10:08:23 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:23.012 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:08:23 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1098306563' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:08:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:08:23 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1098306563' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:08:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:08:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:08:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1098306563' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:08:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1098306563' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:08:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2945183899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
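[annotation] The audit entries above show client.openstack dispatching the monitor commands {"prefix":"df"} and {"prefix":"osd pool get-quota","pool":"volumes"}. A hedged sketch of issuing the same commands through the librados Python binding, assuming python3-rados is installed and the client.openstack keyring is readable:

import json

import rados

# Rados supports the context-manager protocol: connect on enter,
# shutdown on exit.
with rados.Rados(conffile='/etc/ceph/ceph.conf',
                 name='client.openstack') as cluster:
    for cmd in ({'prefix': 'df', 'format': 'json'},
                {'prefix': 'osd pool get-quota', 'pool': 'volumes',
                 'format': 'json'}):
        # mon_command() takes the JSON command and an input buffer and
        # returns (ret, outbuf, outs).
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
        assert ret == 0, outs
        print(json.loads(outbuf))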
Dec 05 10:08:23 np0005546420.localdomain systemd[1]: tmp-crun.Ii3IQw.mount: Deactivated successfully.
Dec 05 10:08:23 np0005546420.localdomain podman[313713]: 2025-12-05 10:08:23.511866009 +0000 UTC m=+0.086172985 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:08:23 np0005546420.localdomain podman[313713]: 2025-12-05 10:08:23.520346499 +0000 UTC m=+0.094653505 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:08:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:23.524 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:23 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:08:23 np0005546420.localdomain systemd[1]: tmp-crun.BHkPxj.mount: Deactivated successfully.
Dec 05 10:08:23 np0005546420.localdomain podman[313714]: 2025-12-05 10:08:23.603605334 +0000 UTC m=+0.172796083 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:08:23 np0005546420.localdomain podman[313714]: 2025-12-05 10:08:23.638664318 +0000 UTC m=+0.207855097 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 05 10:08:23 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
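[annotation] The "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pairs above are systemd timer-driven healthchecks: podman runs the container's configured 'healthcheck.test' command and emits health_status=healthy on success. A small hypothetical wrapper showing the same CLI call (the container id is copied from the log):

import subprocess


def run_healthcheck(container_id: str) -> bool:
    # `podman healthcheck run` exits 0 when the check passes and
    # non-zero otherwise.
    proc = subprocess.run(['podman', 'healthcheck', 'run', container_id],
                          capture_output=True, text=True)
    return proc.returncode == 0


if __name__ == '__main__':
    cid = 'e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0'
    print('healthy' if run_healthcheck(cid) else 'unhealthy')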
Dec 05 10:08:24 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:24.095 2 INFO neutron.agent.securitygroups_rpc [None req-cb152c80-0262-4769-8dfc-c51753d9b263 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']
Dec 05 10:08:24 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:24.175 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:24 np0005546420.localdomain ceph-mon[298353]: pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Dec 05 10:08:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2687793982' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:08:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:25.377 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:08:25 np0005546420.localdomain systemd[1]: tmp-crun.vdXG4C.mount: Deactivated successfully.
Dec 05 10:08:25 np0005546420.localdomain podman[313754]: 2025-12-05 10:08:25.523325193 +0000 UTC m=+0.093425517 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:08:25 np0005546420.localdomain podman[313754]: 2025-12-05 10:08:25.540398566 +0000 UTC m=+0.110498880 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 05 10:08:25 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:08:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e130 e130: 6 total, 6 up, 6 in
Dec 05 10:08:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:26.326 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:26 np0005546420.localdomain ceph-mon[298353]: pgmap v241: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Dec 05 10:08:26 np0005546420.localdomain ceph-mon[298353]: osdmap e130: 6 total, 6 up, 6 in
Dec 05 10:08:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/222050939' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:08:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/222050939' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:08:28 np0005546420.localdomain ceph-mon[298353]: pgmap v243: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.6 KiB/s wr, 58 op/s
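[annotation] The recurring pgmap summaries have a fixed shape; a small parser, sketched under the assumption that the format stays exactly as logged above, pulls out the version, PG count, and capacity figures:

import re

LINE = ('pgmap v243: 177 pgs: 177 active+clean; 145 MiB data, '
        '773 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.6 KiB/s wr, 58 op/s')

m = re.search(
    r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; '
    r'(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, '
    r'(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail',
    LINE)
assert m, 'unexpected pgmap format'
print(m.groupdict())
# -> {'ver': '243', 'pgs': '177', 'data': '145 MiB', 'used': '773 MiB',
#     'avail': '41 GiB', 'total': '42 GiB'}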
Dec 05 10:08:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:08:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3748099700' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:08:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:08:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3748099700' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:08:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3748099700' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:08:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3748099700' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:08:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:30.380 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:30 np0005546420.localdomain ceph-mon[298353]: pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 51 op/s
Dec 05 10:08:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:30.803 262769 INFO neutron.agent.linux.ip_lib [None req-d78ea694-6b19-4020-a9ab-3f1d3dd37dc6 - - - - - -] Device tapf99ca6cb-96 cannot be used as it has no MAC address
Dec 05 10:08:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:30.834 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:30 np0005546420.localdomain kernel: device tapf99ca6cb-96 entered promiscuous mode
Dec 05 10:08:30 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929310.8433] manager: (tapf99ca6cb-96): new Generic device (/org/freedesktop/NetworkManager/Devices/31)
Dec 05 10:08:30 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:30Z|00162|binding|INFO|Claiming lport f99ca6cb-9619-47f3-ac99-41e1cab2e427 for this chassis.
Dec 05 10:08:30 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:30Z|00163|binding|INFO|f99ca6cb-9619-47f3-ac99-41e1cab2e427: Claiming unknown
Dec 05 10:08:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:30.846 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:30 np0005546420.localdomain systemd-udevd[313783]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:08:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:30.854 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-f71d6a33-64f7-47e6-a1b4-68f49a214977', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f71d6a33-64f7-47e6-a1b4-68f49a214977', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8da57e2736240a0ac7055e85adea6da', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=100893d0-2952-49dc-a3be-2eb1d64eecad, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=f99ca6cb-9619-47f3-ac99-41e1cab2e427) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:08:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:30.859 159503 INFO neutron.agent.ovn.metadata.agent [-] Port f99ca6cb-9619-47f3-ac99-41e1cab2e427 in datapath f71d6a33-64f7-47e6-a1b4-68f49a214977 bound to our chassis
Dec 05 10:08:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:30.860 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f71d6a33-64f7-47e6-a1b4-68f49a214977 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:08:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:30.861 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[3ceb6671-cb69-45d5-89e3-8fa9994fa4a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:08:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf99ca6cb-96: No such device
Dec 05 10:08:30 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:30Z|00164|binding|INFO|Setting lport f99ca6cb-9619-47f3-ac99-41e1cab2e427 ovn-installed in OVS
Dec 05 10:08:30 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:30Z|00165|binding|INFO|Setting lport f99ca6cb-9619-47f3-ac99-41e1cab2e427 up in Southbound
Dec 05 10:08:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf99ca6cb-96: No such device
Dec 05 10:08:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:30.883 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf99ca6cb-96: No such device
Dec 05 10:08:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf99ca6cb-96: No such device
Dec 05 10:08:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf99ca6cb-96: No such device
Dec 05 10:08:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf99ca6cb-96: No such device
Dec 05 10:08:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf99ca6cb-96: No such device
Dec 05 10:08:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf99ca6cb-96: No such device
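[annotation] The repeated "ethtool ioctl error ... No such device" lines are most likely a benign race: libvirt's virtnodedevd probes the tap device after OVN/neutron has already torn it down. A defensive probe, sketched with the interface name from the log (the helper is hypothetical), checks sysfs before invoking the ioctl-backed tool:

import os
import subprocess


def safe_ethtool_driver(ifname: str):
    if not os.path.exists(f'/sys/class/net/{ifname}'):
        return None  # device already gone; skip instead of erroring
    try:
        # `ethtool -i` reports driver info via the same ioctl family
        # that virtnodedevd is exercising above.
        return subprocess.check_output(['ethtool', '-i', ifname], text=True)
    except subprocess.CalledProcessError:
        return None  # lost another race with device teardown


print(safe_ethtool_driver('tapf99ca6cb-96'))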
Dec 05 10:08:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:30.924 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:30.950 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:31.364 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:31 np0005546420.localdomain ceph-mon[298353]: pgmap v245: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 2.7 KiB/s wr, 64 op/s
Dec 05 10:08:31 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:31Z|00166|binding|INFO|Removing iface tapf99ca6cb-96 ovn-installed in OVS
Dec 05 10:08:31 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:31Z|00167|binding|INFO|Removing lport f99ca6cb-9619-47f3-ac99-41e1cab2e427 ovn-installed in OVS
Dec 05 10:08:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:31.673 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:31 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:31.676 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 8f1ec66f-ed71-4618-b707-d89e3d94eadf with type ""
Dec 05 10:08:31 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:31.677 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-f71d6a33-64f7-47e6-a1b4-68f49a214977', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f71d6a33-64f7-47e6-a1b4-68f49a214977', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8da57e2736240a0ac7055e85adea6da', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=100893d0-2952-49dc-a3be-2eb1d64eecad, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=f99ca6cb-9619-47f3-ac99-41e1cab2e427) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:08:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:31.679 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:31 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:31.681 159503 INFO neutron.agent.ovn.metadata.agent [-] Port f99ca6cb-9619-47f3-ac99-41e1cab2e427 in datapath f71d6a33-64f7-47e6-a1b4-68f49a214977 unbound from our chassis
Dec 05 10:08:31 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:31.682 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f71d6a33-64f7-47e6-a1b4-68f49a214977 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:08:31 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:31.683 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[d6014df5-5ae5-4642-8c66-fd46eba09f4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501

Dec 05 10:08:31 np0005546420.localdomain podman[313854]: 2025-12-05 10:08:31.865284674 +0000 UTC m=+0.097956476 container create 84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f71d6a33-64f7-47e6-a1b4-68f49a214977, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:08:31 np0005546420.localdomain systemd[1]: Started libpod-conmon-84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520.scope.
Dec 05 10:08:31 np0005546420.localdomain podman[313854]: 2025-12-05 10:08:31.817815367 +0000 UTC m=+0.050487209 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:08:31 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:08:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97544e631288be9ae10224cf14df135a3cf7a7c3c64fd4e161dc4c9f993f994/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:31 np0005546420.localdomain podman[313854]: 2025-12-05 10:08:31.953442727 +0000 UTC m=+0.186114539 container init 84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f71d6a33-64f7-47e6-a1b4-68f49a214977, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:08:31 np0005546420.localdomain podman[313854]: 2025-12-05 10:08:31.964104485 +0000 UTC m=+0.196776277 container start 84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f71d6a33-64f7-47e6-a1b4-68f49a214977, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 10:08:31 np0005546420.localdomain dnsmasq[313872]: started, version 2.85 cachesize 150
Dec 05 10:08:31 np0005546420.localdomain dnsmasq[313872]: DNS service limited to local subnets
Dec 05 10:08:31 np0005546420.localdomain dnsmasq[313872]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:08:31 np0005546420.localdomain dnsmasq[313872]: warning: no upstream servers configured
Dec 05 10:08:31 np0005546420.localdomain dnsmasq-dhcp[313872]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:08:31 np0005546420.localdomain dnsmasq[313872]: read /var/lib/neutron/dhcp/f71d6a33-64f7-47e6-a1b4-68f49a214977/addn_hosts - 0 addresses
Dec 05 10:08:31 np0005546420.localdomain dnsmasq-dhcp[313872]: read /var/lib/neutron/dhcp/f71d6a33-64f7-47e6-a1b4-68f49a214977/host
Dec 05 10:08:31 np0005546420.localdomain dnsmasq-dhcp[313872]: read /var/lib/neutron/dhcp/f71d6a33-64f7-47e6-a1b4-68f49a214977/opts
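[annotation] The dnsmasq startup lines above imply an invocation with local-only DNS, DHCPv6 static leases on 2001:db8:: with a one-day lease, and per-network addn_hosts/host/opts files under /var/lib/neutron/dhcp/<network-id>/. A sketch of that command line using standard dnsmasq flags; neutron's actual argument set is longer, and the set: tag here is an assumption:

NET = 'f71d6a33-64f7-47e6-a1b4-68f49a214977'
BASE = f'/var/lib/neutron/dhcp/{NET}'

cmd = [
    'dnsmasq',
    '--no-hosts',
    '--local-service',                  # -> "DNS service limited to local subnets"
    f'--addn-hosts={BASE}/addn_hosts',  # extra DNS records (0 addresses here)
    f'--dhcp-hostsfile={BASE}/host',    # per-port static lease entries
    f'--dhcp-optsfile={BASE}/opts',     # per-port/per-subnet DHCP options
    # -> "DHCPv6, static leases only on 2001:db8::, lease time 1d"
    '--dhcp-range=set:subnet0,2001:db8::,static,64,86400s',
]
print(' '.join(cmd))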
Dec 05 10:08:32 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:32.060 262769 INFO neutron.agent.dhcp.agent [None req-cfe8db64-f74c-4827-bbf3-3a5c7444c099 - - - - - -] DHCP configuration for ports {'d09e1111-43e6-4a70-a060-6cb0c3bab62f'} is completed
Dec 05 10:08:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:32.175 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:32 np0005546420.localdomain dnsmasq[313872]: exiting on receipt of SIGTERM
Dec 05 10:08:32 np0005546420.localdomain podman[313890]: 2025-12-05 10:08:32.296926485 +0000 UTC m=+0.058875717 container kill 84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f71d6a33-64f7-47e6-a1b4-68f49a214977, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 05 10:08:32 np0005546420.localdomain systemd[1]: libpod-84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520.scope: Deactivated successfully.
Dec 05 10:08:32 np0005546420.localdomain podman[313903]: 2025-12-05 10:08:32.3740478 +0000 UTC m=+0.058263128 container died 84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f71d6a33-64f7-47e6-a1b4-68f49a214977, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:08:32 np0005546420.localdomain podman[313903]: 2025-12-05 10:08:32.406516156 +0000 UTC m=+0.090731444 container cleanup 84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f71d6a33-64f7-47e6-a1b4-68f49a214977, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 10:08:32 np0005546420.localdomain systemd[1]: libpod-conmon-84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520.scope: Deactivated successfully.
Dec 05 10:08:32 np0005546420.localdomain podman[313904]: 2025-12-05 10:08:32.454094716 +0000 UTC m=+0.132574448 container remove 84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f71d6a33-64f7-47e6-a1b4-68f49a214977, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
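[annotation] Lines above record the full teardown of the per-network dnsmasq container: kill (dnsmasq exits on SIGTERM), scope deactivation, died, cleanup, remove. The same sequence driven via the podman CLI, as a sketch; the container name is copied from the log and the two-step wrapper is hypothetical:

import subprocess

name = 'neutron-dnsmasq-qdhcp-f71d6a33-64f7-47e6-a1b4-68f49a214977'

# SIGTERM lets dnsmasq exit cleanly ("exiting on receipt of SIGTERM"),
# after which conmon/systemd tear down the scope and mounts.
subprocess.run(['podman', 'kill', '--signal', 'TERM', name], check=True)
subprocess.run(['podman', 'rm', '-f', name], check=True)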
Dec 05 10:08:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:32.511 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:32 np0005546420.localdomain kernel: device tapf99ca6cb-96 left promiscuous mode
Dec 05 10:08:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:32.527 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/643600439' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:08:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/643600439' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:08:32 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:32.682 262769 INFO neutron.agent.dhcp.agent [None req-8b0649d5-c1f7-4498-aaff-0529f1a4cafd - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:32 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:32.683 262769 INFO neutron.agent.dhcp.agent [None req-8b0649d5-c1f7-4498-aaff-0529f1a4cafd - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c97544e631288be9ae10224cf14df135a3cf7a7c3c64fd4e161dc4c9f993f994-merged.mount: Deactivated successfully.
Dec 05 10:08:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84652b8d366ff0d44f0a0173d9ddbd36f24653d6f5973528198ca663e1ce0520-userdata-shm.mount: Deactivated successfully.
Dec 05 10:08:32 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2df71d6a33\x2d64f7\x2d47e6\x2da1b4\x2d68f49a214977.mount: Deactivated successfully.
Dec 05 10:08:33 np0005546420.localdomain ceph-mon[298353]: pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 2.5 KiB/s wr, 52 op/s
Dec 05 10:08:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:35.421 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:08:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:08:35 np0005546420.localdomain podman[313932]: 2025-12-05 10:08:35.528115556 +0000 UTC m=+0.078824249 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:08:35 np0005546420.localdomain podman[313932]: 2025-12-05 10:08:35.542398414 +0000 UTC m=+0.093107087 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:08:35 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:08:35 np0005546420.localdomain podman[313931]: 2025-12-05 10:08:35.636406858 +0000 UTC m=+0.187204074 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:08:35 np0005546420.localdomain podman[313931]: 2025-12-05 10:08:35.677449578 +0000 UTC m=+0.228246804 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64)
Dec 05 10:08:35 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:08:36 np0005546420.localdomain ceph-mon[298353]: pgmap v247: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 2.5 KiB/s wr, 52 op/s
Dec 05 10:08:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:36.368 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:36 np0005546420.localdomain dnsmasq[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/addn_hosts - 0 addresses
Dec 05 10:08:36 np0005546420.localdomain dnsmasq-dhcp[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/host
Dec 05 10:08:36 np0005546420.localdomain podman[313989]: 2025-12-05 10:08:36.795213437 +0000 UTC m=+0.058440874 container kill 1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-75b36aee-b9df-40d0-a03b-5c22192226ba, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:08:36 np0005546420.localdomain dnsmasq-dhcp[312823]: read /var/lib/neutron/dhcp/75b36aee-b9df-40d0-a03b-5c22192226ba/opts
Dec 05 10:08:36 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:36Z|00168|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0
Dec 05 10:08:36 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:36Z|00169|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0
Dec 05 10:08:36 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:36Z|00170|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0
Dec 05 10:08:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:36.839 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:36.842 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:36.861 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:37.761 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:37 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:37Z|00171|binding|INFO|Releasing lport e5e0e1c3-ece0-4be2-b3cb-b13d88e8f23e from this chassis (sb_readonly=0)
Dec 05 10:08:37 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:37Z|00172|binding|INFO|Setting lport e5e0e1c3-ece0-4be2-b3cb-b13d88e8f23e down in Southbound
Dec 05 10:08:37 np0005546420.localdomain kernel: device tape5e0e1c3-ec left promiscuous mode
Dec 05 10:08:37 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:37.781 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-75b36aee-b9df-40d0-a03b-5c22192226ba', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75b36aee-b9df-40d0-a03b-5c22192226ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6225d95e0a924813958256bdb79de31f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4fbf3a80-5208-4b6c-a425-a7be197c965a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=e5e0e1c3-ece0-4be2-b3cb-b13d88e8f23e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:08:37 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:37.783 159503 INFO neutron.agent.ovn.metadata.agent [-] Port e5e0e1c3-ece0-4be2-b3cb-b13d88e8f23e in datapath 75b36aee-b9df-40d0-a03b-5c22192226ba unbound from our chassis
Dec 05 10:08:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:37.785 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:37 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:37.786 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75b36aee-b9df-40d0-a03b-5c22192226ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:08:37 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:37.787 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[f5431d30-ea02-4f51-8258-88727f0a5d3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
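[annotation] The "Matched UPDATE: PortBindingUpdatedEvent(...)" debug lines above come from ovsdbapp row events watching the OVN southbound Port_Binding table for chassis changes. A hedged, simplified stand-in for neutron's real event classes, with the constructor arguments exactly as printed in the log:

from ovsdbapp.backend.ovs_idl import event as row_event


class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self, agent):
        self.agent = agent  # assumed to expose provision_datapath()
        # events=('update',), table='Port_Binding', conditions=None,
        # matching the "Matched UPDATE: ..." line above.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # 'old' carries only the columns that changed; a chassis change
        # means the port was bound to or unbound from a chassis, which
        # is when the agent (re)provisions or tears down the namespace.
        if hasattr(old, 'chassis'):
            self.agent.provision_datapath(row)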
Dec 05 10:08:38 np0005546420.localdomain ceph-mon[298353]: pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 2.4 KiB/s wr, 59 op/s
Dec 05 10:08:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:08:39 np0005546420.localdomain podman[314013]: 2025-12-05 10:08:39.845667814 +0000 UTC m=+0.085251006 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 05 10:08:39 np0005546420.localdomain podman[314013]: 2025-12-05 10:08:39.917519289 +0000 UTC m=+0.157102461 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:08:39 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
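The "Started /usr/bin/podman healthcheck run ..." / "exec_died" / "Deactivated successfully" triple above is a transient systemd unit running one health probe and exiting. The same probe can be driven by hand; a minimal wrapper (container ID taken from the log):

    import subprocess

    def run_healthcheck(container_id):
        # Exit status 0 means healthy; podman records the outcome on the
        # container, which is what shows up as health_status=healthy above.
        res = subprocess.run(
            ['podman', 'healthcheck', 'run', container_id],
            capture_output=True, text=True,
        )
        return res.returncode == 0

    run_healthcheck('d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0')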
Dec 05 10:08:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:40 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:40.231 2 INFO neutron.agent.securitygroups_rpc [None req-d408bbe4-2cf4-4918-b5cf-de5f65a1893c 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']
Dec 05 10:08:40 np0005546420.localdomain ceph-mon[298353]: pgmap v249: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 05 10:08:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:40.467 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:41.370 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:41 np0005546420.localdomain ceph-mon[298353]: pgmap v250: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 05 10:08:43 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:43.117 2 INFO neutron.agent.securitygroups_rpc [None req-c6e6df27-e7a4-4cda-a4a7-63c4cb2afad8 a052c73754704caaa399378c7e50192a f8da57e2736240a0ac7055e85adea6da - - default default] Security group member updated ['4a643f0b-9f81-4463-adc9-4f8f421f9506']
Dec 05 10:08:44 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:44.016 2 INFO neutron.agent.securitygroups_rpc [None req-7022ece7-3195-4730-b179-1dbc71a25f59 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']
Dec 05 10:08:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:44.095 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:44 np0005546420.localdomain ceph-mon[298353]: pgmap v251: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 341 B/s wr, 14 op/s
Dec 05 10:08:44 np0005546420.localdomain dnsmasq[312823]: exiting on receipt of SIGTERM
Dec 05 10:08:44 np0005546420.localdomain podman[314055]: 2025-12-05 10:08:44.61687679 +0000 UTC m=+0.060967592 container kill 1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-75b36aee-b9df-40d0-a03b-5c22192226ba, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:08:44 np0005546420.localdomain systemd[1]: libpod-1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b.scope: Deactivated successfully.
Dec 05 10:08:44 np0005546420.localdomain podman[314069]: 2025-12-05 10:08:44.692647264 +0000 UTC m=+0.053965047 container died 1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-75b36aee-b9df-40d0-a03b-5c22192226ba, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:08:44 np0005546420.localdomain systemd[1]: tmp-crun.Hj43os.mount: Deactivated successfully.
Dec 05 10:08:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b-userdata-shm.mount: Deactivated successfully.
Dec 05 10:08:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-961db4bc10e6a9075005b5190aaaa19d5fe4e7ba14f15ddddfc2d9351acbacdb-merged.mount: Deactivated successfully.
Dec 05 10:08:44 np0005546420.localdomain podman[314069]: 2025-12-05 10:08:44.743799583 +0000 UTC m=+0.105117326 container remove 1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-75b36aee-b9df-40d0-a03b-5c22192226ba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:08:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:44.771 262769 INFO neutron.agent.dhcp.agent [None req-342babfb-ab2f-4655-b8f9-80a21ff95abb - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:44 np0005546420.localdomain systemd[1]: libpod-conmon-1e41d882ba0501c4ce55163a4ffa7752d08aa88ea0861f1a720e22ec74b8c48b.scope: Deactivated successfully.
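The run from "exiting on receipt of SIGTERM" down to the conmon scope deactivation above is one complete container teardown: signal, death, unmounts, removal. A hedged CLI-level reconstruction of the driving sequence (the DHCP agent typically goes through a kill-script wrapper rather than calling podman directly):

    import subprocess

    name = 'neutron-dnsmasq-qdhcp-75b36aee-b9df-40d0-a03b-5c22192226ba'
    subprocess.run(['podman', 'kill', '--signal', 'TERM', name], check=True)
    subprocess.run(['podman', 'wait', name], check=True)  # 'container died'
    subprocess.run(['podman', 'rm', name], check=True)    # 'container remove'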
Dec 05 10:08:45 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:45.031 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:45 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:45.035 262769 INFO neutron.agent.linux.ip_lib [None req-23db9282-c67c-471d-80e7-5a4a5103016c - - - - - -] Device tap8aa7c477-b5 cannot be used as it has no MAC address
Dec 05 10:08:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:45.060 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:45 np0005546420.localdomain kernel: device tap8aa7c477-b5 entered promiscuous mode
Dec 05 10:08:45 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929325.0712] manager: (tap8aa7c477-b5): new Generic device (/org/freedesktop/NetworkManager/Devices/32)
Dec 05 10:08:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:45.071 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:45 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:45Z|00173|binding|INFO|Claiming lport 8aa7c477-b5c3-4ffc-9418-1b88c349a897 for this chassis.
Dec 05 10:08:45 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:45Z|00174|binding|INFO|8aa7c477-b5c3-4ffc-9418-1b88c349a897: Claiming unknown
Dec 05 10:08:45 np0005546420.localdomain systemd-udevd[314102]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:08:45 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:45.084 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-8125ca1b-9a53-4731-ae3b-554463b15fa3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8125ca1b-9a53-4731-ae3b-554463b15fa3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b042ca58df6348e1a29311c5a517d4d4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b4cb835-abde-4370-85d0-e30964c864cb, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=8aa7c477-b5c3-4ffc-9418-1b88c349a897) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:08:45 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:45.086 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 8aa7c477-b5c3-4ffc-9418-1b88c349a897 in datapath 8125ca1b-9a53-4731-ae3b-554463b15fa3 bound to our chassis
Dec 05 10:08:45 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:45.087 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8125ca1b-9a53-4731-ae3b-554463b15fa3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:08:45 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:45.088 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[9e555168-100f-4659-9d3b-052da67f6677]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:08:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:45 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap8aa7c477-b5: No such device
Dec 05 10:08:45 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap8aa7c477-b5: No such device
Dec 05 10:08:45 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:45Z|00175|binding|INFO|Setting lport 8aa7c477-b5c3-4ffc-9418-1b88c349a897 ovn-installed in OVS
Dec 05 10:08:45 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:45Z|00176|binding|INFO|Setting lport 8aa7c477-b5c3-4ffc-9418-1b88c349a897 up in Southbound
Dec 05 10:08:45 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap8aa7c477-b5: No such device
Dec 05 10:08:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:45.150 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:45 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:45.150 2 INFO neutron.agent.securitygroups_rpc [None req-cae0c3b7-4067-4aed-9cef-3c85cca03ef5 a052c73754704caaa399378c7e50192a f8da57e2736240a0ac7055e85adea6da - - default default] Security group member updated ['4a643f0b-9f81-4463-adc9-4f8f421f9506']
Dec 05 10:08:45 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap8aa7c477-b5: No such device
Dec 05 10:08:45 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap8aa7c477-b5: No such device
Dec 05 10:08:45 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap8aa7c477-b5: No such device
Dec 05 10:08:45 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap8aa7c477-b5: No such device
Dec 05 10:08:45 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:45.167 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:45 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap8aa7c477-b5: No such device
Dec 05 10:08:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:45.175 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:45.202 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:45.328 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:45.468 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:45 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d75b36aee\x2db9df\x2d40d0\x2da03b\x2d5c22192226ba.mount: Deactivated successfully.
Dec 05 10:08:46 np0005546420.localdomain podman[314173]: 2025-12-05 10:08:46.062074093 +0000 UTC m=+0.098481013 container create 00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8125ca1b-9a53-4731-ae3b-554463b15fa3, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:08:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:08:46 np0005546420.localdomain systemd[1]: Started libpod-conmon-00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7.scope.
Dec 05 10:08:46 np0005546420.localdomain podman[314173]: 2025-12-05 10:08:46.016125433 +0000 UTC m=+0.052532403 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:08:46 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:46.133 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:46 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:08:46 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32fa72482d12d00b4fe2ad4178e8dbc8aa25219272b60f0447f4ca52842d98b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:46 np0005546420.localdomain podman[314173]: 2025-12-05 10:08:46.15225861 +0000 UTC m=+0.188665530 container init 00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8125ca1b-9a53-4731-ae3b-554463b15fa3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:08:46 np0005546420.localdomain podman[314173]: 2025-12-05 10:08:46.162451352 +0000 UTC m=+0.198858302 container start 00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8125ca1b-9a53-4731-ae3b-554463b15fa3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:08:46 np0005546420.localdomain dnsmasq[314204]: started, version 2.85 cachesize 150
Dec 05 10:08:46 np0005546420.localdomain dnsmasq[314204]: DNS service limited to local subnets
Dec 05 10:08:46 np0005546420.localdomain dnsmasq[314204]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:08:46 np0005546420.localdomain dnsmasq[314204]: warning: no upstream servers configured
Dec 05 10:08:46 np0005546420.localdomain dnsmasq-dhcp[314204]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:08:46 np0005546420.localdomain dnsmasq[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/addn_hosts - 0 addresses
Dec 05 10:08:46 np0005546420.localdomain dnsmasq-dhcp[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/host
Dec 05 10:08:46 np0005546420.localdomain dnsmasq-dhcp[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/opts
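As the dnsmasq startup above shows, each network's DHCP state lives in a directory named after the Neutron network UUID. A sketch for locating those files (assumes the default /var/lib/neutron state path):

    import os

    def dhcp_config_files(network_id, state_path='/var/lib/neutron'):
        base = os.path.join(state_path, 'dhcp', network_id)
        return {name: os.path.join(base, name)
                for name in ('addn_hosts', 'host', 'opts')}

    # dhcp_config_files('8125ca1b-9a53-4731-ae3b-554463b15fa3')['opts']
    # -> '/var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/opts'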
Dec 05 10:08:46 np0005546420.localdomain podman[314187]: 2025-12-05 10:08:46.239711002 +0000 UTC m=+0.135313271 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:08:46 np0005546420.localdomain podman[314187]: 2025-12-05 10:08:46.281342219 +0000 UTC m=+0.176944478 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:08:46 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:08:46 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:46.301 2 INFO neutron.agent.securitygroups_rpc [None req-237dfc96-41d1-4f6c-9c3e-9577e60244ef 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']
Dec 05 10:08:46 np0005546420.localdomain ceph-mon[298353]: pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Dec 05 10:08:46 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:46.327 262769 INFO neutron.agent.dhcp.agent [None req-dc09144b-c92a-46dc-9293-e98a14ede43b - - - - - -] DHCP configuration for ports {'7e2c58cf-6e8a-4bc7-b2b7-212ff3515b90'} is completed
Dec 05 10:08:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:46.406 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:46 np0005546420.localdomain systemd[1]: tmp-crun.JzfOSS.mount: Deactivated successfully.
Dec 05 10:08:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:08:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:08:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:08:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:08:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:08:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18723 "" "Go-http-client/1.1"
Dec 05 10:08:48 np0005546420.localdomain ceph-mon[298353]: pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Dec 05 10:08:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:08:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:08:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:08:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:08:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:08:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:08:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:08:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:08:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:08:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
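The openstack_network_exporter errors above all reduce to one cause: it cannot find the control sockets it uses to appctl into ovsdb-server and ovn-northd. A hedged probe for the conventional socket locations (the paths vary by deployment and are assumptions here):

    import glob

    def find_control_sockets():
        patterns = [
            '/var/run/openvswitch/ovsdb-server.*.ctl',
            '/var/run/ovn/ovn-northd.*.ctl',
        ]
        return [p for pat in patterns for p in glob.glob(pat)]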
Dec 05 10:08:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:50 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:08:50.105 2 INFO neutron.agent.securitygroups_rpc [None req-c03c3fa4-9507-44bb-bcaa-801354fb2705 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']
Dec 05 10:08:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:50.116 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:08:50 np0005546420.localdomain ceph-mon[298353]: pgmap v254: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:50.471 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:51.439 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:51 np0005546420.localdomain ceph-mon[298353]: pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:08:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:08:54 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:54.294 262769 INFO neutron.agent.linux.ip_lib [None req-6cf40305-c4fa-4978-a88a-569d7b712858 - - - - - -] Device tap787a3663-e2 cannot be used as it has no MAC address
Dec 05 10:08:54 np0005546420.localdomain podman[314214]: 2025-12-05 10:08:54.312601011 +0000 UTC m=+0.089906279 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:08:54 np0005546420.localdomain ceph-mon[298353]: pgmap v256: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:54 np0005546420.localdomain podman[314214]: 2025-12-05 10:08:54.324026571 +0000 UTC m=+0.101331799 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:08:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:54.325 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:54 np0005546420.localdomain kernel: device tap787a3663-e2 entered promiscuous mode
Dec 05 10:08:54 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929334.3356] manager: (tap787a3663-e2): new Generic device (/org/freedesktop/NetworkManager/Devices/33)
Dec 05 10:08:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:54.338 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:54Z|00177|binding|INFO|Claiming lport 787a3663-e2e6-4397-a8f0-2c7bb83ec455 for this chassis.
Dec 05 10:08:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:54Z|00178|binding|INFO|787a3663-e2e6-4397-a8f0-2c7bb83ec455: Claiming unknown
Dec 05 10:08:54 np0005546420.localdomain systemd-udevd[314255]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:08:54 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:08:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:54.358 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-fc1347c7-dde7-4dbb-b7ae-206826392a4e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc1347c7-dde7-4dbb-b7ae-206826392a4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b042ca58df6348e1a29311c5a517d4d4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4959d94d-4c8d-4944-8b0a-ec98fbcf7cbd, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=787a3663-e2e6-4397-a8f0-2c7bb83ec455) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:08:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:54.361 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 787a3663-e2e6-4397-a8f0-2c7bb83ec455 in datapath fc1347c7-dde7-4dbb-b7ae-206826392a4e bound to our chassis
Dec 05 10:08:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:54.363 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fc1347c7-dde7-4dbb-b7ae-206826392a4e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:08:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:08:54.364 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[8923e7f5-e433-408e-abe8-23c10afbc0c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:08:54 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap787a3663-e2: No such device
Dec 05 10:08:54 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap787a3663-e2: No such device
Dec 05 10:08:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:54Z|00179|binding|INFO|Setting lport 787a3663-e2e6-4397-a8f0-2c7bb83ec455 ovn-installed in OVS
Dec 05 10:08:54 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:08:54Z|00180|binding|INFO|Setting lport 787a3663-e2e6-4397-a8f0-2c7bb83ec455 up in Southbound
Dec 05 10:08:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:54.379 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:54 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap787a3663-e2: No such device
Dec 05 10:08:54 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap787a3663-e2: No such device
Dec 05 10:08:54 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap787a3663-e2: No such device
Dec 05 10:08:54 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap787a3663-e2: No such device
Dec 05 10:08:54 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap787a3663-e2: No such device
Dec 05 10:08:54 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap787a3663-e2: No such device
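Both the earlier "has no MAC address" check and the ethtool ioctl bursts above are races against a tap interface created moments before: early reads can see a device with no usable address, or one that has already gone away. A simple sysfs-based probe (an illustration, not neutron's ip_lib implementation):

    def device_mac(devname):
        path = '/sys/class/net/%s/address' % devname
        try:
            with open(path) as f:
                mac = f.read().strip()
        except FileNotFoundError:
            return None  # device vanished, cf. 'No such device' above
        return mac if mac and mac != '00:00:00:00:00:00' else None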
Dec 05 10:08:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:54.421 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:54 np0005546420.localdomain podman[314215]: 2025-12-05 10:08:54.421346566 +0000 UTC m=+0.197450428 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:08:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:54.447 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:54 np0005546420.localdomain podman[314215]: 2025-12-05 10:08:54.457412952 +0000 UTC m=+0.233516844 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:08:54 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:08:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:08:55 np0005546420.localdomain podman[314334]: 2025-12-05 10:08:55.304091346 +0000 UTC m=+0.090039764 container create d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc1347c7-dde7-4dbb-b7ae-206826392a4e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:08:55 np0005546420.localdomain systemd[1]: Started libpod-conmon-d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912.scope.
Dec 05 10:08:55 np0005546420.localdomain podman[314334]: 2025-12-05 10:08:55.262804449 +0000 UTC m=+0.048752887 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:08:55 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:08:55 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2da5d9927f14e15ac49d57642ea0bad4504b49ac9bcaa6bdfd8a484c9fdbb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:08:55 np0005546420.localdomain podman[314334]: 2025-12-05 10:08:55.386353259 +0000 UTC m=+0.172301677 container init d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc1347c7-dde7-4dbb-b7ae-206826392a4e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:08:55 np0005546420.localdomain podman[314334]: 2025-12-05 10:08:55.394692415 +0000 UTC m=+0.180640843 container start d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc1347c7-dde7-4dbb-b7ae-206826392a4e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:08:55 np0005546420.localdomain dnsmasq[314351]: started, version 2.85 cachesize 150
Dec 05 10:08:55 np0005546420.localdomain dnsmasq[314351]: DNS service limited to local subnets
Dec 05 10:08:55 np0005546420.localdomain dnsmasq[314351]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:08:55 np0005546420.localdomain dnsmasq[314351]: warning: no upstream servers configured
Dec 05 10:08:55 np0005546420.localdomain dnsmasq-dhcp[314351]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:08:55 np0005546420.localdomain dnsmasq[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/addn_hosts - 0 addresses
Dec 05 10:08:55 np0005546420.localdomain dnsmasq-dhcp[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/host
Dec 05 10:08:55 np0005546420.localdomain dnsmasq-dhcp[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/opts
Dec 05 10:08:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:55.517 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:08:56 np0005546420.localdomain podman[314352]: 2025-12-05 10:08:56.25252112 +0000 UTC m=+0.079960513 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:08:56 np0005546420.localdomain podman[314352]: 2025-12-05 10:08:56.266480209 +0000 UTC m=+0.093919582 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3)
Dec 05 10:08:56 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:08:56 np0005546420.localdomain ceph-mon[298353]: pgmap v257: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:08:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:08:56.443 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:08:56 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:08:56.874 262769 INFO neutron.agent.dhcp.agent [None req-5ed16c66-f57b-494a-95b4-cf4e412bdc08 - - - - - -] DHCP configuration for ports {'3b133013-245c-4142-8964-61ad326f417d'} is completed
Dec 05 10:08:58 np0005546420.localdomain ceph-mon[298353]: pgmap v258: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:09:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:00 np0005546420.localdomain ceph-mon[298353]: pgmap v259: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:09:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:00.561 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:00 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:00.606 2 INFO neutron.agent.securitygroups_rpc [None req-bbd7d412-ea7d-41f8-b523-09fdeefd5b0e 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:01.445 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:01 np0005546420.localdomain sudo[314371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:09:01 np0005546420.localdomain sudo[314371]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:09:01 np0005546420.localdomain sudo[314371]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:01 np0005546420.localdomain sudo[314389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:09:01 np0005546420.localdomain sudo[314389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:09:01 np0005546420.localdomain ceph-mon[298353]: pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:09:01 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:01.737 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:00Z, description=, device_id=375f8419-370d-40e2-b39d-46b812d89e49, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a040cd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a040610>], id=aad97551-d5ab-42f3-bf8f-c4050210d9b1, ip_allocation=immediate, mac_address=fa:16:3e:67:52:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:08:46Z, description=, dns_domain=, id=fc1347c7-dde7-4dbb-b7ae-206826392a4e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--575689753, port_security_enabled=True, project_id=b042ca58df6348e1a29311c5a517d4d4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51564, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1585, status=ACTIVE, subnets=['18ecbdce-b870-4804-ac75-1070a8f3b665'], tags=[], tenant_id=b042ca58df6348e1a29311c5a517d4d4, updated_at=2025-12-05T10:08:52Z, vlan_transparent=None, network_id=fc1347c7-dde7-4dbb-b7ae-206826392a4e, port_security_enabled=False, project_id=b042ca58df6348e1a29311c5a517d4d4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1652, status=DOWN, tags=[], tenant_id=b042ca58df6348e1a29311c5a517d4d4, updated_at=2025-12-05T10:09:00Z on network fc1347c7-dde7-4dbb-b7ae-206826392a4e
Dec 05 10:09:01 np0005546420.localdomain dnsmasq[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/addn_hosts - 1 addresses
Dec 05 10:09:01 np0005546420.localdomain dnsmasq-dhcp[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/host
Dec 05 10:09:01 np0005546420.localdomain dnsmasq-dhcp[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/opts
Dec 05 10:09:01 np0005546420.localdomain podman[314433]: 2025-12-05 10:09:01.950313859 +0000 UTC m=+0.047459647 container kill d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc1347c7-dde7-4dbb-b7ae-206826392a4e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:09:02 np0005546420.localdomain sudo[314389]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:02 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:02.209 262769 INFO neutron.agent.dhcp.agent [None req-9f77cbe6-96bd-458e-b634-d9bf2d05330b - - - - - -] DHCP configuration for ports {'aad97551-d5ab-42f3-bf8f-c4050210d9b1'} is completed
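The reload above takes the cheap path: the agent rewrites the network's addn_hosts/host/opts files and nudges the running dnsmasq rather than recreating the container (the "container kill" entry is a signal, not a teardown; the same dnsmasq PID 314351 keeps serving afterwards). A hedged sketch of that nudge:

    import subprocess

    def reload_allocations(network_id):
        container = 'neutron-dnsmasq-qdhcp-%s' % network_id
        # ... regenerate addn_hosts/host/opts for the network here ...
        # HUP makes dnsmasq re-read its host/opts files without restarting.
        subprocess.run(['podman', 'kill', '--signal', 'HUP', container],
                       check=True)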
Dec 05 10:09:02 np0005546420.localdomain sudo[314478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:09:02 np0005546420.localdomain sudo[314478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:09:02 np0005546420.localdomain sudo[314478]: pam_unix(sudo:session): session closed for user root
Dec 05 10:09:02 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:09:02 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:09:02 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:09:02 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
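The mgr-originated mon commands above map onto ordinary ceph CLI calls; for example, the destroyed-OSD query can be reproduced as follows (illustrative; assumes admin credentials are available on the host):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'],
        capture_output=True, text=True, check=True,
    ).stdout
    tree = json.loads(out)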
Dec 05 10:09:03 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:03.273 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:00Z, description=, device_id=375f8419-370d-40e2-b39d-46b812d89e49, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a053b50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a053cd0>], id=aad97551-d5ab-42f3-bf8f-c4050210d9b1, ip_allocation=immediate, mac_address=fa:16:3e:67:52:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:08:46Z, description=, dns_domain=, id=fc1347c7-dde7-4dbb-b7ae-206826392a4e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--575689753, port_security_enabled=True, project_id=b042ca58df6348e1a29311c5a517d4d4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51564, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1585, status=ACTIVE, subnets=['18ecbdce-b870-4804-ac75-1070a8f3b665'], tags=[], tenant_id=b042ca58df6348e1a29311c5a517d4d4, updated_at=2025-12-05T10:08:52Z, vlan_transparent=None, network_id=fc1347c7-dde7-4dbb-b7ae-206826392a4e, port_security_enabled=False, project_id=b042ca58df6348e1a29311c5a517d4d4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1652, status=DOWN, tags=[], tenant_id=b042ca58df6348e1a29311c5a517d4d4, updated_at=2025-12-05T10:09:00Z on network fc1347c7-dde7-4dbb-b7ae-206826392a4e
Dec 05 10:09:03 np0005546420.localdomain dnsmasq[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/addn_hosts - 1 addresses
Dec 05 10:09:03 np0005546420.localdomain dnsmasq-dhcp[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/host
Dec 05 10:09:03 np0005546420.localdomain podman[314512]: 2025-12-05 10:09:03.509685316 +0000 UTC m=+0.059247188 container kill d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc1347c7-dde7-4dbb-b7ae-206826392a4e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:09:03 np0005546420.localdomain dnsmasq-dhcp[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/opts
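The kill/reread sequence above is how the DHCP agent reloads allocations without restarting dnsmasq: the podman container kill event delivers a signal to the per-network dnsmasq container, which then re-reads its addn_hosts, host and opts files. A sketch of that step, under the assumption that the signal used is SIGHUP; the container name is copied from the log:

import subprocess

network_id = 'fc1347c7-dde7-4dbb-b7ae-206826392a4e'
# Deliver SIGHUP to the per-network dnsmasq container so it re-reads
# its hosts/opts files (assumption: HUP is the reload signal here).
subprocess.run(
    ['podman', 'kill', '--signal', 'HUP',
     f'neutron-dnsmasq-qdhcp-{network_id}'],
    check=True,
)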
Dec 05 10:09:03 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:03.699 2 INFO neutron.agent.securitygroups_rpc [None req-ffa88cd4-0de4-4dfc-9628-3fa8bcfe8ab2 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:03 np0005546420.localdomain ceph-mon[298353]: pgmap v261: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:09:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/581164779' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:09:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/581164779' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:09:03 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:03.760 262769 INFO neutron.agent.dhcp.agent [None req-7cd0b934-9c00-48c6-ba0b-99a504527c89 - - - - - -] DHCP configuration for ports {'aad97551-d5ab-42f3-bf8f-c4050210d9b1'} is completed
Dec 05 10:09:04 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:04.082 2 INFO neutron.agent.securitygroups_rpc [None req-5cd34cc1-c456-444f-81be-b85c66f6c284 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:04.128 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:09:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:04.128 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:09:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:04.129 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
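The Acquiring/acquired/released trio above is oslo.concurrency's standard lock logging. A minimal sketch of the pattern that produces it:

from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def _check_child_processes():
    # Runs with the named in-process lock held; lockutils emits the
    # "Acquiring"/"acquired" (with wait time) and "released" (with held
    # time) DEBUG lines seen in this journal.
    pass

_check_child_processes()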
Dec 05 10:09:04 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:04.689 2 INFO neutron.agent.securitygroups_rpc [None req-282fa99c-63a9-4315-814e-64b88518f264 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:04 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:04.733 2 INFO neutron.agent.securitygroups_rpc [None req-5cd34cc1-c456-444f-81be-b85c66f6c284 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:05.611 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
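The recurring "[POLLIN] on fd 26 __log_wakeup" DEBUG lines come from the OVS Python IDL blocking on its OVSDB socket and waking when it becomes readable. A sketch of the underlying ovs.poller API, using a socketpair in place of a real OVSDB connection:

import select
import socket

import ovs.poller

a, b = socket.socketpair()
b.send(b'wake')  # make `a` readable so block() returns immediately

p = ovs.poller.Poller()
p.fd_wait(a.fileno(), select.POLLIN)  # register interest in readability
p.block()  # sleeps until a registered fd is ready; the IDL logs the wakeup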
Dec 05 10:09:05 np0005546420.localdomain dnsmasq[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/addn_hosts - 0 addresses
Dec 05 10:09:05 np0005546420.localdomain systemd[1]: tmp-crun.jdNWuU.mount: Deactivated successfully.
Dec 05 10:09:05 np0005546420.localdomain dnsmasq-dhcp[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/host
Dec 05 10:09:05 np0005546420.localdomain dnsmasq-dhcp[314351]: read /var/lib/neutron/dhcp/fc1347c7-dde7-4dbb-b7ae-206826392a4e/opts
Dec 05 10:09:05 np0005546420.localdomain podman[314550]: 2025-12-05 10:09:05.643335389 +0000 UTC m=+0.114936257 container kill d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc1347c7-dde7-4dbb-b7ae-206826392a4e, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:09:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:09:05 np0005546420.localdomain podman[314565]: 2025-12-05 10:09:05.779704933 +0000 UTC m=+0.101068712 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:09:05 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:05.787 2 INFO neutron.agent.securitygroups_rpc [None req-0567dbfd-40f6-475b-b2e7-71873c340d0d 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:09:05 np0005546420.localdomain podman[314565]: 2025-12-05 10:09:05.796393435 +0000 UTC m=+0.117757194 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:09:05 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:09:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:05.847 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:05 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:05Z|00181|binding|INFO|Releasing lport 787a3663-e2e6-4397-a8f0-2c7bb83ec455 from this chassis (sb_readonly=0)
Dec 05 10:09:05 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:05Z|00182|binding|INFO|Setting lport 787a3663-e2e6-4397-a8f0-2c7bb83ec455 down in Southbound
Dec 05 10:09:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:05.856 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:05 np0005546420.localdomain kernel: device tap787a3663-e2 left promiscuous mode
Dec 05 10:09:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:05.860 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-fc1347c7-dde7-4dbb-b7ae-206826392a4e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc1347c7-dde7-4dbb-b7ae-206826392a4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b042ca58df6348e1a29311c5a517d4d4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4959d94d-4c8d-4944-8b0a-ec98fbcf7cbd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=787a3663-e2e6-4397-a8f0-2c7bb83ec455) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:09:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:05.863 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 787a3663-e2e6-4397-a8f0-2c7bb83ec455 in datapath fc1347c7-dde7-4dbb-b7ae-206826392a4e unbound from our chassis
Dec 05 10:09:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:05.869 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc1347c7-dde7-4dbb-b7ae-206826392a4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:09:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:05.871 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[09b47be6-f543-4373-8ad4-20f42e7280db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
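The "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's event framework testing a Port_Binding row update against registered events, then running the handler that logged "unbound from our chassis". A hedged sketch of how such an event is declared; the class name mirrors the log, while the match condition and handler body are illustrative rather than neutron's exact logic:

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self, agent):
        self.agent = agent
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Only fire when the chassis column actually changed; `old`
        # carries just the previous values of changed columns.
        return hasattr(old, 'chassis')

    def run(self, event, row, old):
        # ovsdbapp logs the "Matched UPDATE: ..." line before calling this.
        self.agent.handle_chassis_change(row.logical_port)  # hypothetical helper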
Dec 05 10:09:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:05.872 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:05 np0005546420.localdomain podman[314594]: 2025-12-05 10:09:05.891517393 +0000 UTC m=+0.091514349 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter)
Dec 05 10:09:05 np0005546420.localdomain podman[314594]: 2025-12-05 10:09:05.908426182 +0000 UTC m=+0.108423148 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:09:05 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:09:06 np0005546420.localdomain ceph-mon[298353]: pgmap v262: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:09:06 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:09:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:06.448 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:07 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:07.790 2 INFO neutron.agent.securitygroups_rpc [None req-0b1ad4f5-35a0-4b3a-82f6-351d119f03fa 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:07 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:07.854 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:08 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:08.172 2 INFO neutron.agent.securitygroups_rpc [None req-602f23e4-2e9f-45fa-b722-3e4dfab19d2f 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:08 np0005546420.localdomain ceph-mon[298353]: pgmap v263: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:09:08 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e131 e131: 6 total, 6 up, 6 in
Dec 05 10:09:08 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:08.448 2 INFO neutron.agent.securitygroups_rpc [None req-2d23d8bb-26bf-4cf0-b76a-8223dd2d3e9e b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['4a9e1e3d-9deb-4500-8cd3-c46454c40952']
Dec 05 10:09:09 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:09.190 2 INFO neutron.agent.securitygroups_rpc [None req-b4988939-0efe-4cdb-87b1-b975b62ce81e b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['4a9e1e3d-9deb-4500-8cd3-c46454c40952']
Dec 05 10:09:09 np0005546420.localdomain ceph-mon[298353]: osdmap e131: 6 total, 6 up, 6 in
Dec 05 10:09:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e132 e132: 6 total, 6 up, 6 in
Dec 05 10:09:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:09:10 np0005546420.localdomain ceph-mon[298353]: pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 05 10:09:10 np0005546420.localdomain podman[314615]: 2025-12-05 10:09:10.525102745 +0000 UTC m=+0.097880513 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:09:10 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:10.565 2 INFO neutron.agent.securitygroups_rpc [None req-f720baac-31b8-404b-9e10-284a5cc83368 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:10 np0005546420.localdomain podman[314615]: 2025-12-05 10:09:10.593413881 +0000 UTC m=+0.166191639 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 10:09:10 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:09:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:10.613 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:11 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:11.398 2 INFO neutron.agent.securitygroups_rpc [None req-07455aeb-b972-44df-83ba-a97d019f246c b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']
Dec 05 10:09:11 np0005546420.localdomain ceph-mon[298353]: osdmap e132: 6 total, 6 up, 6 in
Dec 05 10:09:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:11.483 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:11 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:11.744 2 INFO neutron.agent.securitygroups_rpc [None req-70979621-f888-4de2-82c4-1edb603fa26e b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']
Dec 05 10:09:12 np0005546420.localdomain ceph-mon[298353]: pgmap v267: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 2.6 KiB/s wr, 43 op/s
Dec 05 10:09:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e133 e133: 6 total, 6 up, 6 in
Dec 05 10:09:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:13.308 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:13.309 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
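The "skipping" line above is a config gate: the periodic task returns early when reclaim_instance_interval is not positive. A small oslo.config sketch of that gate; the option name and message match the log, the default is illustrative:

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

def _reclaim_queued_deletes():
    if CONF.reclaim_instance_interval <= 0:
        print('CONF.reclaim_instance_interval <= 0, skipping...')
        return
    # ... reclaim soft-deleted instances here ...

_reclaim_queued_deletes()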
Dec 05 10:09:13 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:13.403 2 INFO neutron.agent.securitygroups_rpc [None req-c4f876c6-2e34-4bd7-ac97-65e12853c2c6 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']
Dec 05 10:09:13 np0005546420.localdomain ceph-mon[298353]: osdmap e133: 6 total, 6 up, 6 in
Dec 05 10:09:13 np0005546420.localdomain ceph-mon[298353]: pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 5.5 KiB/s wr, 90 op/s
Dec 05 10:09:13 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:13.809 2 INFO neutron.agent.securitygroups_rpc [None req-5baf0550-700d-4270-8f26-698d0c93f8dd 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']
Dec 05 10:09:14 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:14.128 2 INFO neutron.agent.securitygroups_rpc [None req-dcb4f2a3-25e0-41ca-809b-3ef76d8b07ac b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']
Dec 05 10:09:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e134 e134: 6 total, 6 up, 6 in
Dec 05 10:09:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:14.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:14.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:14.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:15 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:15.052 2 INFO neutron.agent.securitygroups_rpc [None req-e013afb9-bd7a-41a0-8c1d-6822c775054e 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']
Dec 05 10:09:15 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:15.086 2 INFO neutron.agent.securitygroups_rpc [None req-b2db1434-9482-4b8c-b5b8-dd323cecd7fe 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:15 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:15.158 2 INFO neutron.agent.securitygroups_rpc [None req-2f7535df-8614-4bd1-9847-9f7ca31afb50 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']
Dec 05 10:09:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:15.616 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:15 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:15.633 2 INFO neutron.agent.securitygroups_rpc [None req-77179fa3-d33d-4e1c-8e3f-5a8b2508ceb3 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']
Dec 05 10:09:15 np0005546420.localdomain ceph-mon[298353]: osdmap e134: 6 total, 6 up, 6 in
Dec 05 10:09:15 np0005546420.localdomain ceph-mon[298353]: pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 5.5 KiB/s wr, 90 op/s
Dec 05 10:09:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:15.886 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:15.887 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:09:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:15.888 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:09:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:15.908 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:09:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:15.909 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:15.909 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:09:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:15.936 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
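The run of "Running periodic task ComputeManager._..." lines is oslo.service iterating the manager's decorated periodic tasks. A minimal sketch of that mechanism; the method name and spacing are illustrative:

from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # The runner logs "Running periodic task ..." before each call.
        pass

# The service then drives the tasks roughly as:
#   manager.run_periodic_tasks(context, raise_on_error=False)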
Dec 05 10:09:16 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:16.332 2 INFO neutron.agent.securitygroups_rpc [None req-6f1bd688-2005-49d6-a3a1-1fee2f80d98c 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:16.379 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:15Z, description=, device_id=375f8419-370d-40e2-b39d-46b812d89e49, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0a4ac0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0d87c0>], id=583f71be-1e80-4540-98d4-4f34ddb36cc9, ip_allocation=immediate, mac_address=fa:16:3e:27:8b:71, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:08:41Z, description=, dns_domain=, id=8125ca1b-9a53-4731-ae3b-554463b15fa3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-1240413012, port_security_enabled=True, project_id=b042ca58df6348e1a29311c5a517d4d4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60242, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1570, status=ACTIVE, subnets=['bdbfaa0c-0f66-4135-a9ae-3e805db285d7'], tags=[], tenant_id=b042ca58df6348e1a29311c5a517d4d4, updated_at=2025-12-05T10:08:44Z, vlan_transparent=None, network_id=8125ca1b-9a53-4731-ae3b-554463b15fa3, port_security_enabled=False, project_id=b042ca58df6348e1a29311c5a517d4d4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1722, status=DOWN, tags=[], tenant_id=b042ca58df6348e1a29311c5a517d4d4, updated_at=2025-12-05T10:09:15Z on network 8125ca1b-9a53-4731-ae3b-554463b15fa3
Dec 05 10:09:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:09:16 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:16.505 2 INFO neutron.agent.securitygroups_rpc [None req-ead10424-0bad-48ec-bd99-83fac4a5ddf3 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']
Dec 05 10:09:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:16.523 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:16 np0005546420.localdomain podman[314638]: 2025-12-05 10:09:16.537105113 +0000 UTC m=+0.110420128 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:09:16 np0005546420.localdomain podman[314638]: 2025-12-05 10:09:16.57644822 +0000 UTC m=+0.149763295 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:09:16 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:09:16 np0005546420.localdomain systemd[1]: tmp-crun.fZAgR4.mount: Deactivated successfully.
Dec 05 10:09:16 np0005546420.localdomain podman[314674]: 2025-12-05 10:09:16.61488308 +0000 UTC m=+0.060745625 container kill 00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8125ca1b-9a53-4731-ae3b-554463b15fa3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 05 10:09:16 np0005546420.localdomain dnsmasq[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/addn_hosts - 1 addresses
Dec 05 10:09:16 np0005546420.localdomain dnsmasq-dhcp[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/host
Dec 05 10:09:16 np0005546420.localdomain dnsmasq-dhcp[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/opts
Dec 05 10:09:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:16.871 262769 INFO neutron.agent.dhcp.agent [None req-6377054d-b2e3-458e-b5cd-fe6a10ffd956 - - - - - -] DHCP configuration for ports {'583f71be-1e80-4540-98d4-4f34ddb36cc9'} is completed
Dec 05 10:09:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1030265780' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:09:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:09:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:09:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:09:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156739 "" "Go-http-client/1.1"
Dec 05 10:09:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:09:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19213 "" "Go-http-client/1.1"
Dec 05 10:09:17 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:17.552 2 INFO neutron.agent.securitygroups_rpc [None req-80d3fbb2-b4b3-4e6d-9a94-ccedba8b6568 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']
Dec 05 10:09:18 np0005546420.localdomain ceph-mon[298353]: pgmap v272: 177 pgs: 177 active+clean; 152 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 3.1 MiB/s rd, 300 KiB/s wr, 185 op/s
Dec 05 10:09:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:18.347 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:15Z, description=, device_id=375f8419-370d-40e2-b39d-46b812d89e49, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a1cb490>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a1cbd00>], id=583f71be-1e80-4540-98d4-4f34ddb36cc9, ip_allocation=immediate, mac_address=fa:16:3e:27:8b:71, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:08:41Z, description=, dns_domain=, id=8125ca1b-9a53-4731-ae3b-554463b15fa3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-1240413012, port_security_enabled=True, project_id=b042ca58df6348e1a29311c5a517d4d4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60242, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1570, status=ACTIVE, subnets=['bdbfaa0c-0f66-4135-a9ae-3e805db285d7'], tags=[], tenant_id=b042ca58df6348e1a29311c5a517d4d4, updated_at=2025-12-05T10:08:44Z, vlan_transparent=None, network_id=8125ca1b-9a53-4731-ae3b-554463b15fa3, port_security_enabled=False, project_id=b042ca58df6348e1a29311c5a517d4d4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1722, status=DOWN, tags=[], tenant_id=b042ca58df6348e1a29311c5a517d4d4, updated_at=2025-12-05T10:09:15Z on network 8125ca1b-9a53-4731-ae3b-554463b15fa3
Dec 05 10:09:18 np0005546420.localdomain dnsmasq[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/addn_hosts - 1 addresses
Dec 05 10:09:18 np0005546420.localdomain dnsmasq-dhcp[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/host
Dec 05 10:09:18 np0005546420.localdomain podman[314711]: 2025-12-05 10:09:18.581709425 +0000 UTC m=+0.064474189 container kill 00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8125ca1b-9a53-4731-ae3b-554463b15fa3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 05 10:09:18 np0005546420.localdomain dnsmasq-dhcp[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/opts
Dec 05 10:09:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:18.820 262769 INFO neutron.agent.dhcp.agent [None req-af43a0af-75d5-4c87-af61-4407038fde1e - - - - - -] DHCP configuration for ports {'583f71be-1e80-4540-98d4-4f34ddb36cc9'} is completed
Dec 05 10:09:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:09:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:09:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:09:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:09:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:09:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:09:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:09:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:09:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:09:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
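The exporter errors above are failed appctl calls: each OVS/OVN daemon exposes a control socket named <daemon>.<pid>.ctl under its run directory, and the calls fail when no such socket is found (ovn-northd does not run on this compute node). A sketch of the equivalent call via the CLI; targeting ovs-vswitchd for dpif-netdev commands is the conventional choice, not something read from this host:

import subprocess

# Ask ovs-vswitchd for PMD performance stats, the same command the
# exporter attempts; this fails the same way if the control socket is
# absent or no userspace datapath exists.
result = subprocess.run(
    ['ovs-appctl', '-t', 'ovs-vswitchd', 'dpif-netdev/pmd-perf-show'],
    capture_output=True, text=True,
)
print(result.returncode, result.stdout or result.stderr)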
Dec 05 10:09:18 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:18.973 2 INFO neutron.agent.securitygroups_rpc [None req-9a9508f4-ec27-42b4-bc60-575687ed92dd b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']
Dec 05 10:09:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/4231959628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:09:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:09:19 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3068400174' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:09:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:09:19 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3068400174' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:09:19 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:19.472 2 INFO neutron.agent.securitygroups_rpc [None req-13594d5e-1730-4625-a1f2-65b4d178b28d 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:19.899 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:19 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:19.922 2 INFO neutron.agent.securitygroups_rpc [None req-6dab5cbb-8c73-47ff-8076-4d4b7501b55c b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']
Dec 05 10:09:20 np0005546420.localdomain ceph-mon[298353]: pgmap v273: 177 pgs: 177 active+clean; 152 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 651 KiB/s rd, 254 KiB/s wr, 115 op/s
Dec 05 10:09:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3068400174' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:09:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3068400174' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:09:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:20 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:20.354 2 INFO neutron.agent.securitygroups_rpc [None req-29f332b2-6fbc-4386-b09c-47bdef692327 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:20 np0005546420.localdomain dnsmasq[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/addn_hosts - 0 addresses
Dec 05 10:09:20 np0005546420.localdomain dnsmasq-dhcp[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/host
Dec 05 10:09:20 np0005546420.localdomain podman[314747]: 2025-12-05 10:09:20.36469449 +0000 UTC m=+0.058294749 container kill 00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8125ca1b-9a53-4731-ae3b-554463b15fa3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:09:20 np0005546420.localdomain dnsmasq-dhcp[314204]: read /var/lib/neutron/dhcp/8125ca1b-9a53-4731-ae3b-554463b15fa3/opts
Dec 05 10:09:20 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:20Z|00183|binding|INFO|Releasing lport 8aa7c477-b5c3-4ffc-9418-1b88c349a897 from this chassis (sb_readonly=0)
Dec 05 10:09:20 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:20Z|00184|binding|INFO|Setting lport 8aa7c477-b5c3-4ffc-9418-1b88c349a897 down in Southbound
Dec 05 10:09:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:20.533 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:20 np0005546420.localdomain kernel: device tap8aa7c477-b5 left promiscuous mode
Dec 05 10:09:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:20.545 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-8125ca1b-9a53-4731-ae3b-554463b15fa3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8125ca1b-9a53-4731-ae3b-554463b15fa3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b042ca58df6348e1a29311c5a517d4d4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b4cb835-abde-4370-85d0-e30964c864cb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=8aa7c477-b5c3-4ffc-9418-1b88c349a897) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:09:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:20.547 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 8aa7c477-b5c3-4ffc-9418-1b88c349a897 in datapath 8125ca1b-9a53-4731-ae3b-554463b15fa3 unbound from our chassis
Dec 05 10:09:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:20.549 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8125ca1b-9a53-4731-ae3b-554463b15fa3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:09:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:20.551 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ec47a9-d237-4904-b893-4bd67adb00e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:09:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:20.556 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:20.557 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:20.618 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e135 e135: 6 total, 6 up, 6 in
Dec 05 10:09:20 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:20.789 2 INFO neutron.agent.securitygroups_rpc [None req-12e581bc-a85c-4e19-8176-998990ad9ff0 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:20.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:20.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:21.560 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:21 np0005546420.localdomain ceph-mon[298353]: osdmap e135: 6 total, 6 up, 6 in
Dec 05 10:09:21 np0005546420.localdomain ceph-mon[298353]: pgmap v275: 177 pgs: 177 active+clean; 150 MiB data, 799 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 2.6 MiB/s wr, 117 op/s
Dec 05 10:09:21 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:21.804 2 INFO neutron.agent.securitygroups_rpc [None req-7b7cfca1-161f-4e8b-b5b6-927b633a76ad b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['1b34fa2e-94a2-4613-9c80-5b5c7ff4a67a']
Dec 05 10:09:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:21.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
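The _poll_* entries are nova's oslo.service periodic task loop: each decorated method on the compute manager is invoked by run_periodic_tasks on its own timer. A minimal, self-contained sketch of the mechanism; the task body and spacing are illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Tasks(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_unconfirmed_resizes(self, context):
            # nova's real task confirms resizes older than
            # CONF.resize_confirm_window; body elided here.
            pass

    tasks = Tasks()
    tasks.run_periodic_tasks(context=None)  # what the service loop calls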
Dec 05 10:09:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:22.835 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:09:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:22.836 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:09:22 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:22.850 2 INFO neutron.agent.securitygroups_rpc [None req-07a1f17e-674a-4fbf-8a05-24dee40ce8c0 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:22.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:22.874 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:22.890 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:09:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:22.890 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:09:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:22.890 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
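The acquire/release pair above is oslo.concurrency's lock decorator at work; "inner" in each entry is the generated wrapper that emits the waited/held timings. The same pattern reduced to its core, with an illustrative function body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Held for 0.000s in the log: the cache walk is trivial when no
        # stale compute node entries exist.
        pass

    clean_compute_node_cache()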
Dec 05 10:09:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:22.891 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:09:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:22.891 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:09:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:09:23 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1876584302' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:09:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:23.362 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
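The resource tracker shells out to ceph df to size the RBD-backed disk pool; the mon-side handle_command/audit entries above are that same request arriving as a mon command. Reproducing the call and reading the cluster totals out of the JSON:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide totals; per-pool figures live under stats['pools'].
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])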
Dec 05 10:09:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:23.526 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:09:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:23.527 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11682MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:09:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:23.528 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:09:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:23.528 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:09:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:23.778 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:09:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:23.778 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:09:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:23.870 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:09:24 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:24.098 2 INFO neutron.agent.securitygroups_rpc [None req-5a43dc0f-43ca-475b-a61a-0ce576c54084 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']
Dec 05 10:09:24 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:24.134 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:09:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/335592851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:09:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:24.323 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:09:24 np0005546420.localdomain ceph-mon[298353]: pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 799 MiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 2.5 MiB/s wr, 129 op/s
Dec 05 10:09:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1876584302' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:09:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/335592851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:09:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:24.333 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:09:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:24.353 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
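Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, so this node offers the scheduler 128 VCPU, 15226 MB of RAM and 40 GB of disk:

    # Effective capacity from the inventory reported above.
    inv = {
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 16.0},
        'DISK_GB': {'total': 41, 'reserved': 1, 'allocation_ratio': 1.0},
    }
    for rc, fields in inv.items():
        print(rc, (fields['total'] - fields['reserved']) * fields['allocation_ratio'])
    # MEMORY_MB 15226.0, VCPU 128.0, DISK_GB 40.0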
Dec 05 10:09:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:24.356 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:09:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:24.356 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:09:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:24.358 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:24.358 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:09:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:09:24 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:24.420 2 INFO neutron.agent.securitygroups_rpc [None req-777dc251-951f-44b9-a95f-17b8e4f65f88 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['b457efc9-4d11-469a-b4d4-526496f62886']
Dec 05 10:09:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:09:24 np0005546420.localdomain podman[314814]: 2025-12-05 10:09:24.525761996 +0000 UTC m=+0.097092870 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:09:24 np0005546420.localdomain podman[314814]: 2025-12-05 10:09:24.559031317 +0000 UTC m=+0.130362201 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:09:24 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:09:24 np0005546420.localdomain podman[314830]: 2025-12-05 10:09:24.621206634 +0000 UTC m=+0.086299559 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:09:24 np0005546420.localdomain podman[314830]: 2025-12-05 10:09:24.651454512 +0000 UTC m=+0.116547417 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 10:09:24 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
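Each Started/health_status/exec_died/Deactivated quartet above is one transient systemd unit running a podman healthcheck; an exit status of 0 maps to health_status=healthy. The equivalent invocation, using the node_exporter container ID from the log:

    import subprocess

    cid = 'cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a'
    rc = subprocess.run(['podman', 'healthcheck', 'run', cid]).returncode
    print('healthy' if rc == 0 else 'unhealthy')  # 0 healthy, 1 unhealthy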
Dec 05 10:09:24 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:24.794 2 INFO neutron.agent.securitygroups_rpc [None req-dba7a388-5a0b-4bb8-97c5-15372240e74f 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:25 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:25.069 2 INFO neutron.agent.securitygroups_rpc [None req-6c302ac3-77ee-4afa-9762-b4a5f48fbd2f 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']
Dec 05 10:09:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:25.114 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:25 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:25.125 2 INFO neutron.agent.securitygroups_rpc [None req-3c0d38e2-3a8d-4cf5-a3f0-f0f035f094ce b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['b457efc9-4d11-469a-b4d4-526496f62886']
Dec 05 10:09:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:25.618 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:25 np0005546420.localdomain podman[314871]: 2025-12-05 10:09:25.621846551 +0000 UTC m=+0.073017791 container kill d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc1347c7-dde7-4dbb-b7ae-206826392a4e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:09:25 np0005546420.localdomain dnsmasq[314351]: exiting on receipt of SIGTERM
Dec 05 10:09:25 np0005546420.localdomain systemd[1]: libpod-d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912.scope: Deactivated successfully.
Dec 05 10:09:25 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:25.660 2 INFO neutron.agent.securitygroups_rpc [None req-c2343547-ec5f-4fdd-903e-6d652b2fffe4 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:25 np0005546420.localdomain podman[314885]: 2025-12-05 10:09:25.69874223 +0000 UTC m=+0.063630473 container died d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc1347c7-dde7-4dbb-b7ae-206826392a4e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:09:25 np0005546420.localdomain systemd[1]: tmp-crun.8RqWGm.mount: Deactivated successfully.
Dec 05 10:09:25 np0005546420.localdomain podman[314885]: 2025-12-05 10:09:25.742468271 +0000 UTC m=+0.107356474 container cleanup d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc1347c7-dde7-4dbb-b7ae-206826392a4e, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:09:25 np0005546420.localdomain systemd[1]: libpod-conmon-d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912.scope: Deactivated successfully.
Dec 05 10:09:25 np0005546420.localdomain podman[314887]: 2025-12-05 10:09:25.784990506 +0000 UTC m=+0.140324126 container remove d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc1347c7-dde7-4dbb-b7ae-206826392a4e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:09:26 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:26.017 262769 INFO neutron.agent.dhcp.agent [None req-268c8100-2d1a-455d-885c-20a7829eceea - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:26 np0005546420.localdomain ceph-mon[298353]: pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 799 MiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 05 10:09:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2689589833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:09:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1282669236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:09:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1282669236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:09:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:26.371 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:09:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:09:26 np0005546420.localdomain podman[314917]: 2025-12-05 10:09:26.5052248 +0000 UTC m=+0.082933254 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 10:09:26 np0005546420.localdomain podman[314917]: 2025-12-05 10:09:26.546469396 +0000 UTC m=+0.124177810 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:09:26 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:09:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:26.603 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-da2da5d9927f14e15ac49d57642ea0bad4504b49ac9bcaa6bdfd8a484c9fdbb6-merged.mount: Deactivated successfully.
Dec 05 10:09:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d2fd8502d2bbcc62e40b949104baa77042f0237537aea1115032b3a60a31a912-userdata-shm.mount: Deactivated successfully.
Dec 05 10:09:26 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2dfc1347c7\x2ddde7\x2d4dbb\x2db7ae\x2d206826392a4e.mount: Deactivated successfully.
Dec 05 10:09:26 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:26.628 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:26 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:26.867 262769 INFO neutron.agent.linux.ip_lib [None req-0694c510-4af8-4344-8906-06191abcbaa1 - - - - - -] Device tap5b69a2a3-42 cannot be used as it has no MAC address
Dec 05 10:09:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:26.890 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:26 np0005546420.localdomain kernel: device tap5b69a2a3-42 entered promiscuous mode
Dec 05 10:09:26 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929366.9008] manager: (tap5b69a2a3-42): new Generic device (/org/freedesktop/NetworkManager/Devices/34)
Dec 05 10:09:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:26.903 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:26Z|00185|binding|INFO|Claiming lport 5b69a2a3-4249-467c-a2e2-1581d347ef3a for this chassis.
Dec 05 10:09:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:26Z|00186|binding|INFO|5b69a2a3-4249-467c-a2e2-1581d347ef3a: Claiming unknown
Dec 05 10:09:26 np0005546420.localdomain systemd-udevd[314945]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:09:26 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:26.912 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:26.918 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=450c4425-387d-4017-af0b-c45c767acc7c, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5b69a2a3-4249-467c-a2e2-1581d347ef3a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:09:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:26.920 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5b69a2a3-4249-467c-a2e2-1581d347ef3a in datapath c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde bound to our chassis
Dec 05 10:09:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:26.922 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 512b3204-43dd-4eb3-9caa-3cf144c4013b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:09:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:26.922 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:09:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:26.923 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c935ca-7023-4f36-84e3-c039c6dfef87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:09:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:26.950 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:26Z|00187|binding|INFO|Setting lport 5b69a2a3-4249-467c-a2e2-1581d347ef3a ovn-installed in OVS
Dec 05 10:09:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:26Z|00188|binding|INFO|Setting lport 5b69a2a3-4249-467c-a2e2-1581d347ef3a up in Southbound
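The claim sequence ends with ovn-controller marking the OVS interface ovn-installed and setting up=true on the southbound Port_Binding row. One way to verify the binding landed, querying the southbound DB for the lport named in the log:

    import subprocess

    out = subprocess.check_output(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=5b69a2a3-4249-467c-a2e2-1581d347ef3a'])
    print(out.decode())  # the chassis column should name this chassis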
Dec 05 10:09:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:26.954 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:26.988 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:27.019 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:27 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:27.197 2 INFO neutron.agent.securitygroups_rpc [None req-693f3af2-582b-4d0d-bc83-33366519a3f6 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['d823dd66-f2af-41ad-b25d-23924a6bc812']
Dec 05 10:09:27 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:27.237 2 INFO neutron.agent.securitygroups_rpc [None req-03ae0f1a-18c3-4217-838d-d302f124e4eb 5eec71af41824815ba824bf807f8179b 70f3c241260c4833846cef3d99a05e88 - - default default] Security group member updated ['f7939a6f-eb31-4565-924a-cc0204206297']
Dec 05 10:09:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3993192341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:09:27 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:27.871 2 INFO neutron.agent.securitygroups_rpc [None req-bce7fb8d-315f-4229-8c78-5151cefbd755 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['d823dd66-f2af-41ad-b25d-23924a6bc812']
Dec 05 10:09:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:27.888 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}

Dec 05 10:09:27 np0005546420.localdomain podman[315001]: 2025-12-05 10:09:27.916746471 +0000 UTC m=+0.093810759 container create efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:09:27 np0005546420.localdomain systemd[1]: Started libpod-conmon-efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa.scope.
Dec 05 10:09:27 np0005546420.localdomain podman[315001]: 2025-12-05 10:09:27.871027899 +0000 UTC m=+0.048092237 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:09:27 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:09:27 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3acc3c446b9cd222dc5e41d899830f339855cc7f96b50e8027543e2723109a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:27 np0005546420.localdomain podman[315001]: 2025-12-05 10:09:27.98843605 +0000 UTC m=+0.165500428 container init efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:09:27 np0005546420.localdomain podman[315001]: 2025-12-05 10:09:27.99787684 +0000 UTC m=+0.174941138 container start efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:09:28 np0005546420.localdomain dnsmasq[315020]: started, version 2.85 cachesize 150
Dec 05 10:09:28 np0005546420.localdomain dnsmasq[315020]: DNS service limited to local subnets
Dec 05 10:09:28 np0005546420.localdomain dnsmasq[315020]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:09:28 np0005546420.localdomain dnsmasq[315020]: warning: no upstream servers configured
Dec 05 10:09:28 np0005546420.localdomain dnsmasq-dhcp[315020]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:09:28 np0005546420.localdomain dnsmasq[315020]: read /var/lib/neutron/dhcp/c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde/addn_hosts - 0 addresses
Dec 05 10:09:28 np0005546420.localdomain dnsmasq-dhcp[315020]: read /var/lib/neutron/dhcp/c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde/host
Dec 05 10:09:28 np0005546420.localdomain dnsmasq-dhcp[315020]: read /var/lib/neutron/dhcp/c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde/opts
Dec 05 10:09:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:28.069 262769 INFO neutron.agent.dhcp.agent [None req-fe535417-35c6-4d66-8096-91343506e29e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:26Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f633d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f63370>], id=74acc778-378f-43a8-95cf-200e3a44a8d1, ip_allocation=immediate, mac_address=fa:16:3e:0e:8b:d5, name=tempest-PortsTestJSON-1916740175, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:23Z, description=, dns_domain=, id=c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1629814323, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58911, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1754, status=ACTIVE, subnets=['bbc28f96-47d9-4daa-ad0f-fc46041e627d'], tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:09:24Z, vlan_transparent=None, network_id=c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1770, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:09:26Z on network c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde
Dec 05 10:09:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:28.163 262769 INFO neutron.agent.dhcp.agent [None req-6d25c2d2-5a56-4e13-a61f-c8080ed1bf9c - - - - - -] DHCP configuration for ports {'e7cb19d4-8e5d-41df-8e44-68c39c7afbcf'} is completed
Dec 05 10:09:28 np0005546420.localdomain podman[315037]: 2025-12-05 10:09:28.305350642 +0000 UTC m=+0.047951192 container kill efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:09:28 np0005546420.localdomain dnsmasq[315020]: read /var/lib/neutron/dhcp/c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde/addn_hosts - 1 addresses
Dec 05 10:09:28 np0005546420.localdomain dnsmasq-dhcp[315020]: read /var/lib/neutron/dhcp/c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde/host
Dec 05 10:09:28 np0005546420.localdomain dnsmasq-dhcp[315020]: read /var/lib/neutron/dhcp/c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde/opts
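The podman "container kill" entries bracketing each re-read are the DHCP agent delivering SIGHUP, which makes dnsmasq re-read its addn_hosts/host/opts files in place rather than restarting. A plausible reproduction, assuming podman CLI delivery and using the container name from the log:

    import subprocess

    subprocess.run(['podman', 'kill', '--signal', 'HUP',
                    'neutron-dnsmasq-qdhcp-'
                    'c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde'])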
Dec 05 10:09:28 np0005546420.localdomain ceph-mon[298353]: pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Dec 05 10:09:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:28.484 262769 INFO neutron.agent.dhcp.agent [None req-9efb2626-92ad-4aba-8fda-a89719b63f6a - - - - - -] DHCP configuration for ports {'74acc778-378f-43a8-95cf-200e3a44a8d1'} is completed
Dec 05 10:09:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:28.571 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:28 np0005546420.localdomain dnsmasq[315020]: read /var/lib/neutron/dhcp/c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde/addn_hosts - 0 addresses
Dec 05 10:09:28 np0005546420.localdomain dnsmasq-dhcp[315020]: read /var/lib/neutron/dhcp/c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde/host
Dec 05 10:09:28 np0005546420.localdomain dnsmasq-dhcp[315020]: read /var/lib/neutron/dhcp/c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde/opts
Dec 05 10:09:28 np0005546420.localdomain podman[315074]: 2025-12-05 10:09:28.652885593 +0000 UTC m=+0.063736006 container kill efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:09:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:28.838 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
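The transaction above is ovsdbapp's DbSetCommand recording the last-processed nb_cfg in Chassis_Private.external_ids, six seconds after the SB_Global update was matched (the delay the agent logged at 10:09:22). The same write through the generic ovsdbapp API, with sb_api an assumed connected southbound handle:

    def bump_sb_cfg(sb_api, chassis_private_uuid, nb_cfg):
        # Mirrors the DbSetCommand in the log; if_exists=True there means
        # the write is skipped if the record has gone away.
        sb_api.db_set(
            'Chassis_Private', chassis_private_uuid,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)})
        ).execute(check_error=True)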
Dec 05 10:09:29 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:29.441 2 INFO neutron.agent.securitygroups_rpc [None req-eae8b647-71bf-47cb-9ee2-d55d01c759ad 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:29 np0005546420.localdomain dnsmasq[315020]: exiting on receipt of SIGTERM
Dec 05 10:09:29 np0005546420.localdomain podman[315112]: 2025-12-05 10:09:29.709243598 +0000 UTC m=+0.068330766 container kill efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 05 10:09:29 np0005546420.localdomain systemd[1]: libpod-efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa.scope: Deactivated successfully.
Dec 05 10:09:29 np0005546420.localdomain podman[315126]: 2025-12-05 10:09:29.780070911 +0000 UTC m=+0.054978197 container died efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 10:09:29 np0005546420.localdomain podman[315126]: 2025-12-05 10:09:29.812450225 +0000 UTC m=+0.087357461 container cleanup efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:09:29 np0005546420.localdomain systemd[1]: libpod-conmon-efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa.scope: Deactivated successfully.
Dec 05 10:09:29 np0005546420.localdomain podman[315128]: 2025-12-05 10:09:29.858646931 +0000 UTC m=+0.126294555 container remove efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:09:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:29.872 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:29 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:29Z|00189|binding|INFO|Releasing lport 5b69a2a3-4249-467c-a2e2-1581d347ef3a from this chassis (sb_readonly=0)
Dec 05 10:09:29 np0005546420.localdomain kernel: device tap5b69a2a3-42 left promiscuous mode
Dec 05 10:09:29 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:29Z|00190|binding|INFO|Setting lport 5b69a2a3-4249-467c-a2e2-1581d347ef3a down in Southbound
Dec 05 10:09:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:29.881 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=450c4425-387d-4017-af0b-c45c767acc7c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5b69a2a3-4249-467c-a2e2-1581d347ef3a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:09:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:29.884 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5b69a2a3-4249-467c-a2e2-1581d347ef3a in datapath c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde unbound from our chassis
Dec 05 10:09:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:29.886 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c3c8cacb-d983-47ae-80d6-c8f3bb1fdfde, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:09:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:29.887 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ac813e-9e77-45a2-a691-c2d630652000]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:09:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:29.895 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d3acc3c446b9cd222dc5e41d899830f339855cc7f96b50e8027543e2723109a0-merged.mount: Deactivated successfully.
Dec 05 10:09:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-efc997556307bd64057426dab5e2bb0fe42fbeaa02d35a1ce7a601b8354916aa-userdata-shm.mount: Deactivated successfully.
Dec 05 10:09:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:30.171 262769 INFO neutron.agent.dhcp.agent [None req-fd399baf-ad11-4223-ac57-46ff709414b3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:30 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2dc3c8cacb\x2dd983\x2d47ae\x2d80d6\x2dc8f3bb1fdfde.mount: Deactivated successfully.
Dec 05 10:09:30 np0005546420.localdomain ceph-mon[298353]: pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Dec 05 10:09:30 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:30.454 2 INFO neutron.agent.securitygroups_rpc [None req-05412d32-f512-457f-b3b7-2412701ff157 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']
Dec 05 10:09:30 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:30.460 2 INFO neutron.agent.securitygroups_rpc [None req-3f0eca8f-f70c-4289-bab2-b0e4a184fb52 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:30 np0005546420.localdomain podman[315173]: 2025-12-05 10:09:30.529583154 +0000 UTC m=+0.060419335 container kill 00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8125ca1b-9a53-4731-ae3b-554463b15fa3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 10:09:30 np0005546420.localdomain dnsmasq[314204]: exiting on receipt of SIGTERM
Dec 05 10:09:30 np0005546420.localdomain systemd[1]: libpod-00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7.scope: Deactivated successfully.
Dec 05 10:09:30 np0005546420.localdomain podman[315186]: 2025-12-05 10:09:30.601446539 +0000 UTC m=+0.058408353 container died 00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8125ca1b-9a53-4731-ae3b-554463b15fa3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:09:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:30.663 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:30 np0005546420.localdomain podman[315186]: 2025-12-05 10:09:30.677191422 +0000 UTC m=+0.134153186 container cleanup 00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8125ca1b-9a53-4731-ae3b-554463b15fa3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:09:30 np0005546420.localdomain systemd[1]: libpod-conmon-00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7.scope: Deactivated successfully.
Dec 05 10:09:30 np0005546420.localdomain podman[315188]: 2025-12-05 10:09:30.702142707 +0000 UTC m=+0.148701463 container remove 00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8125ca1b-9a53-4731-ae3b-554463b15fa3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:09:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:30.715 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:30 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:30.910 2 INFO neutron.agent.securitygroups_rpc [None req-a7e7c5af-f839-4053-b22d-d720a4983b2b 5eec71af41824815ba824bf807f8179b 70f3c241260c4833846cef3d99a05e88 - - default default] Security group member updated ['f7939a6f-eb31-4565-924a-cc0204206297']
Dec 05 10:09:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:30.910 262769 INFO neutron.agent.dhcp.agent [None req-e03c9b10-f297-4250-9daa-e881cff325c7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:30.911 262769 INFO neutron.agent.dhcp.agent [None req-e03c9b10-f297-4250-9daa-e881cff325c7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:30 np0005546420.localdomain systemd[1]: tmp-crun.q1vFZ2.mount: Deactivated successfully.
Dec 05 10:09:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-32fa72482d12d00b4fe2ad4178e8dbc8aa25219272b60f0447f4ca52842d98b2-merged.mount: Deactivated successfully.
Dec 05 10:09:30 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00069fb676b72d54a2a5497881c168d7c72e180d7718b6ca62557de998719ab7-userdata-shm.mount: Deactivated successfully.
Dec 05 10:09:30 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d8125ca1b\x2d9a53\x2d4731\x2dae3b\x2d554463b15fa3.mount: Deactivated successfully.
Dec 05 10:09:31 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:31.014 2 INFO neutron.agent.securitygroups_rpc [None req-f4184f38-8487-436e-bbf0-7c582f628481 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']
Dec 05 10:09:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:31.029 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:31 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:31.138 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:31 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:31.200 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:31 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:31.403 2 INFO neutron.agent.securitygroups_rpc [None req-304c582d-a7b0-44d0-b355-f13cb8cd14b5 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']
Dec 05 10:09:31 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:31.531 2 INFO neutron.agent.securitygroups_rpc [None req-4fc067df-7f69-4cc4-bbb3-12ab0516a717 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:31.605 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e136 e136: 6 total, 6 up, 6 in
Dec 05 10:09:31 np0005546420.localdomain ceph-mon[298353]: pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 05 10:09:31 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:31.917 2 INFO neutron.agent.securitygroups_rpc [None req-6b25967d-8b19-48ec-be6b-a391f88e56ec b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']
Dec 05 10:09:32 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:32.190 2 INFO neutron.agent.securitygroups_rpc [None req-d7e347b7-2902-4a8d-8b05-84e3cf6b8a04 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:32 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:32.255 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:32 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:32.355 2 INFO neutron.agent.securitygroups_rpc [None req-bc302c88-25bf-415a-9ece-bc84897b3bcd b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']
Dec 05 10:09:32 np0005546420.localdomain ceph-mon[298353]: osdmap e136: 6 total, 6 up, 6 in
Dec 05 10:09:32 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:32.739 2 INFO neutron.agent.securitygroups_rpc [None req-ce798883-5bd6-4fdd-b0b2-80ccf684cae7 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']
Dec 05 10:09:32 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:32.934 2 INFO neutron.agent.securitygroups_rpc [None req-78849ced-a6e2-4c6b-9614-71334bc7a8a0 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:33 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:33.049 2 INFO neutron.agent.securitygroups_rpc [None req-7235ade3-2e79-4200-8541-afc31859d671 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:33 np0005546420.localdomain ceph-mon[298353]: pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 05 10:09:33 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e137 e137: 6 total, 6 up, 6 in
Dec 05 10:09:33 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:33.837 2 INFO neutron.agent.securitygroups_rpc [None req-3cf9815e-5a3f-4b4e-977e-5bb4bc2def96 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:33 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:33.872 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:33 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:33.893 2 INFO neutron.agent.securitygroups_rpc [None req-ba1a05d7-37ec-4f0e-9155-41d8f8a97604 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['356b635b-6a2c-4448-9058-ed20bc39caf9']
Dec 05 10:09:34 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:34.243 2 INFO neutron.agent.securitygroups_rpc [None req-f40f6030-be3c-4d04-a30b-4ef8ffdee0d1 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:34 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:34.325 2 INFO neutron.agent.securitygroups_rpc [None req-785d1cea-a7c3-4660-ad42-92893080a7f7 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:09:34 np0005546420.localdomain ceph-mon[298353]: osdmap e137: 6 total, 6 up, 6 in
Dec 05 10:09:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e138 e138: 6 total, 6 up, 6 in
Dec 05 10:09:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:35 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:35.348 2 INFO neutron.agent.securitygroups_rpc [None req-e470bad7-133a-4174-b852-1f35387ae7ea 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:35 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:35.537 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:09:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:35.694 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:35 np0005546420.localdomain ceph-mon[298353]: osdmap e138: 6 total, 6 up, 6 in
Dec 05 10:09:35 np0005546420.localdomain ceph-mon[298353]: pgmap v285: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.5 KiB/s wr, 19 op/s
Dec 05 10:09:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:09:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:09:36 np0005546420.localdomain podman[315216]: 2025-12-05 10:09:36.517643967 +0000 UTC m=+0.086615019 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:09:36 np0005546420.localdomain podman[315215]: 2025-12-05 10:09:36.571180489 +0000 UTC m=+0.139925303 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 10:09:36 np0005546420.localdomain podman[315216]: 2025-12-05 10:09:36.581487505 +0000 UTC m=+0.150458547 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:09:36 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:09:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:36.609 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:36 np0005546420.localdomain podman[315215]: 2025-12-05 10:09:36.64227514 +0000 UTC m=+0.211019954 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9)
Dec 05 10:09:36 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:09:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e139 e139: 6 total, 6 up, 6 in
Dec 05 10:09:37 np0005546420.localdomain ceph-mon[298353]: osdmap e139: 6 total, 6 up, 6 in
Dec 05 10:09:37 np0005546420.localdomain ceph-mon[298353]: pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 6.3 KiB/s wr, 80 op/s
Dec 05 10:09:38 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:38.693 262769 INFO neutron.agent.linux.ip_lib [None req-8d1d1efb-8159-45c9-9845-9f86ebe27b11 - - - - - -] Device tap33f4e9e6-e2 cannot be used as it has no MAC address
Dec 05 10:09:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:38.717 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:38 np0005546420.localdomain kernel: device tap33f4e9e6-e2 entered promiscuous mode
Dec 05 10:09:38 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929378.7292] manager: (tap33f4e9e6-e2): new Generic device (/org/freedesktop/NetworkManager/Devices/35)
Dec 05 10:09:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:38.731 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:38 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:38Z|00191|binding|INFO|Claiming lport 33f4e9e6-e266-4ca4-8d43-59348a40730f for this chassis.
Dec 05 10:09:38 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:38Z|00192|binding|INFO|33f4e9e6-e266-4ca4-8d43-59348a40730f: Claiming unknown
Dec 05 10:09:38 np0005546420.localdomain systemd-udevd[315267]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:09:38 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:38.743 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-845f9b11-7635-438e-8940-471dc1d869ab', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-845f9b11-7635-438e-8940-471dc1d869ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74a17deba84b470ebe240fab8c99b64c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70152e9f-e80f-4dd0-9444-6db4274556d0, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=33f4e9e6-e266-4ca4-8d43-59348a40730f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:09:38 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:38.746 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 33f4e9e6-e266-4ca4-8d43-59348a40730f in datapath 845f9b11-7635-438e-8940-471dc1d869ab bound to our chassis
Dec 05 10:09:38 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:38.748 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 845f9b11-7635-438e-8940-471dc1d869ab or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:09:38 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:38.748 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[6375676c-c671-4504-a5b1-9a4183f167b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:09:38 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:38Z|00193|binding|INFO|Setting lport 33f4e9e6-e266-4ca4-8d43-59348a40730f ovn-installed in OVS
Dec 05 10:09:38 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:38Z|00194|binding|INFO|Setting lport 33f4e9e6-e266-4ca4-8d43-59348a40730f up in Southbound
Dec 05 10:09:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:38.763 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:38 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap33f4e9e6-e2: No such device
Dec 05 10:09:38 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap33f4e9e6-e2: No such device
Dec 05 10:09:38 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap33f4e9e6-e2: No such device
Dec 05 10:09:38 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap33f4e9e6-e2: No such device
Dec 05 10:09:38 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap33f4e9e6-e2: No such device
Dec 05 10:09:38 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap33f4e9e6-e2: No such device
Dec 05 10:09:38 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap33f4e9e6-e2: No such device
Dec 05 10:09:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:38.800 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:38 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap33f4e9e6-e2: No such device
Dec 05 10:09:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:38.833 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:38 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:09:38 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/865059015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:09:38 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:09:38 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/865059015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:09:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/865059015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:09:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/865059015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:09:39 np0005546420.localdomain podman[315338]: 2025-12-05 10:09:39.716845468 +0000 UTC m=+0.088353471 container create fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:09:39 np0005546420.localdomain systemd[1]: Started libpod-conmon-fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f.scope.
Dec 05 10:09:39 np0005546420.localdomain podman[315338]: 2025-12-05 10:09:39.671444955 +0000 UTC m=+0.042952988 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:09:39 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:09:39 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01548a1cacb78e767711e8c7798fff45cbe6d1e108bad0fb2594f2dad075b00d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:39 np0005546420.localdomain podman[315338]: 2025-12-05 10:09:39.817519056 +0000 UTC m=+0.189027069 container init fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:09:39 np0005546420.localdomain podman[315338]: 2025-12-05 10:09:39.828011528 +0000 UTC m=+0.199519531 container start fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:09:39 np0005546420.localdomain dnsmasq[315356]: started, version 2.85 cachesize 150
Dec 05 10:09:39 np0005546420.localdomain dnsmasq[315356]: DNS service limited to local subnets
Dec 05 10:09:39 np0005546420.localdomain dnsmasq[315356]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:09:39 np0005546420.localdomain dnsmasq[315356]: warning: no upstream servers configured
Dec 05 10:09:39 np0005546420.localdomain dnsmasq-dhcp[315356]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:09:39 np0005546420.localdomain dnsmasq[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/addn_hosts - 0 addresses
Dec 05 10:09:39 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/host
Dec 05 10:09:39 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/opts
Dec 05 10:09:39 np0005546420.localdomain ceph-mon[298353]: pgmap v288: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 5.8 KiB/s wr, 74 op/s
Dec 05 10:09:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e140 e140: 6 total, 6 up, 6 in
Dec 05 10:09:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:40.730 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:09:41 np0005546420.localdomain podman[315357]: 2025-12-05 10:09:41.525225223 +0000 UTC m=+0.097527263 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 05 10:09:41 np0005546420.localdomain podman[315357]: 2025-12-05 10:09:41.596589082 +0000 UTC m=+0.168891132 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:09:41 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:09:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:41.610 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:41 np0005546420.localdomain ceph-mon[298353]: osdmap e140: 6 total, 6 up, 6 in
Dec 05 10:09:41 np0005546420.localdomain ceph-mon[298353]: pgmap v290: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 8.0 KiB/s wr, 113 op/s
Dec 05 10:09:41 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:41.929 262769 INFO neutron.agent.dhcp.agent [None req-bdc4f03b-a0cb-41c8-9705-a9c855d90341 - - - - - -] DHCP configuration for ports {'3f501436-e178-4c9d-87d8-87f749edcf64'} is completed
Dec 05 10:09:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:42.867 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:0b:c8 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a18433-2c66-4fbf-a647-567b3b059428, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=83bc4f64-fd82-46c7-b04c-f3f7fd3b951d) old=Port_Binding(mac=['fa:16:3e:2a:0b:c8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:09:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:42.869 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 83bc4f64-fd82-46c7-b04c-f3f7fd3b951d in datapath 202b91b7-8951-4676-ac11-d844ae9d9927 updated
Dec 05 10:09:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:42.871 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 202b91b7-8951-4676-ac11-d844ae9d9927, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:09:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:42.872 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[c20ee056-01ed-4052-b7c6-29ef6dbf9f20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:09:43 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:43.610 2 INFO neutron.agent.securitygroups_rpc [None req-9167f274-b528-4220-b67e-800e25b79970 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:09:43 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:43.699 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:42Z, description=, device_id=8fc4e302-9729-47ac-a171-f4a1207e0aaa, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a030d00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a030c40>], id=278ac9bf-882f-4450-ae33-f61c71ded21e, ip_allocation=immediate, mac_address=fa:16:3e:c0:cc:c2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:35Z, description=, dns_domain=, id=845f9b11-7635-438e-8940-471dc1d869ab, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1588803316, port_security_enabled=True, project_id=74a17deba84b470ebe240fab8c99b64c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38970, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1833, status=ACTIVE, subnets=['a3c726a0-6a99-4101-98eb-345e0fe5fbae'], tags=[], tenant_id=74a17deba84b470ebe240fab8c99b64c, updated_at=2025-12-05T10:09:37Z, vlan_transparent=None, network_id=845f9b11-7635-438e-8940-471dc1d869ab, port_security_enabled=False, project_id=74a17deba84b470ebe240fab8c99b64c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1888, status=DOWN, tags=[], tenant_id=74a17deba84b470ebe240fab8c99b64c, updated_at=2025-12-05T10:09:43Z on network 845f9b11-7635-438e-8940-471dc1d869ab
Dec 05 10:09:43 np0005546420.localdomain dnsmasq[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/addn_hosts - 1 addresses
Dec 05 10:09:43 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/host
Dec 05 10:09:43 np0005546420.localdomain podman[315397]: 2025-12-05 10:09:43.917407587 +0000 UTC m=+0.062900301 container kill fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:09:43 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/opts
Dec 05 10:09:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:44.303 262769 INFO neutron.agent.dhcp.agent [None req-22c76a31-1e3b-46fe-b91f-31bc0abefd03 - - - - - -] DHCP configuration for ports {'278ac9bf-882f-4450-ae33-f61c71ded21e'} is completed
Dec 05 10:09:44 np0005546420.localdomain ceph-mon[298353]: pgmap v291: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 6.5 KiB/s wr, 107 op/s
Dec 05 10:09:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:44.592 262769 INFO neutron.agent.linux.ip_lib [None req-8f63a96e-63ac-4400-aef8-2520123933f7 - - - - - -] Device tapb5e9fd71-55 cannot be used as it has no MAC address
Dec 05 10:09:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:44.616 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:44 np0005546420.localdomain kernel: device tapb5e9fd71-55 entered promiscuous mode
Dec 05 10:09:44 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929384.6266] manager: (tapb5e9fd71-55): new Generic device (/org/freedesktop/NetworkManager/Devices/36)
Dec 05 10:09:44 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:44Z|00195|binding|INFO|Claiming lport b5e9fd71-558f-40a5-adb7-f40dbd99bcd9 for this chassis.
Dec 05 10:09:44 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:44Z|00196|binding|INFO|b5e9fd71-558f-40a5-adb7-f40dbd99bcd9: Claiming unknown
Dec 05 10:09:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:44.626 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:44 np0005546420.localdomain systemd-udevd[315429]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:09:44 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:44.639 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-e0418100-5fa8-4c07-a37e-c2897c03948c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0418100-5fa8-4c07-a37e-c2897c03948c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6db8cbac53645ef9430332056699027', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa582ea3-98cf-49be-95bb-f29666f1056e, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=b5e9fd71-558f-40a5-adb7-f40dbd99bcd9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:09:44 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:44.641 159503 INFO neutron.agent.ovn.metadata.agent [-] Port b5e9fd71-558f-40a5-adb7-f40dbd99bcd9 in datapath e0418100-5fa8-4c07-a37e-c2897c03948c bound to our chassis
Dec 05 10:09:44 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:44.643 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 72533c3b-5924-45f0-be91-cf4a13365cff IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:09:44 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:44.643 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e0418100-5fa8-4c07-a37e-c2897c03948c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:09:44 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:09:44.644 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[46b5bebc-77a8-4d79-b545-e8577bdba7cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:09:44 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb5e9fd71-55: No such device
Dec 05 10:09:44 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb5e9fd71-55: No such device
Dec 05 10:09:44 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:44Z|00197|binding|INFO|Setting lport b5e9fd71-558f-40a5-adb7-f40dbd99bcd9 ovn-installed in OVS
Dec 05 10:09:44 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:09:44Z|00198|binding|INFO|Setting lport b5e9fd71-558f-40a5-adb7-f40dbd99bcd9 up in Southbound
Dec 05 10:09:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:44.669 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:44.671 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:44 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb5e9fd71-55: No such device
Dec 05 10:09:44 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb5e9fd71-55: No such device
Dec 05 10:09:44 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb5e9fd71-55: No such device
Dec 05 10:09:44 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb5e9fd71-55: No such device
Dec 05 10:09:44 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb5e9fd71-55: No such device
Dec 05 10:09:44 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapb5e9fd71-55: No such device
Dec 05 10:09:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:44.710 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:44.740 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:44 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:44.923 2 INFO neutron.agent.securitygroups_rpc [None req-5113bc8c-3b86-4908-a95b-dea9f14ab9cb 71a1dc47c8964743be7069896a3eb55e f6db8cbac53645ef9430332056699027 - - default default] Security group member updated ['007023a4-4e47-4b10-af03-71f50f200b06']
Dec 05 10:09:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:45 np0005546420.localdomain podman[315500]: 2025-12-05 10:09:45.567682072 +0000 UTC m=+0.084884695 container create 409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0418100-5fa8-4c07-a37e-c2897c03948c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:09:45 np0005546420.localdomain systemd[1]: Started libpod-conmon-409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410.scope.
Dec 05 10:09:45 np0005546420.localdomain systemd[1]: tmp-crun.NmpzAD.mount: Deactivated successfully.
Dec 05 10:09:45 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:09:45 np0005546420.localdomain podman[315500]: 2025-12-05 10:09:45.529991505 +0000 UTC m=+0.047194208 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:09:45 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:45.629 2 INFO neutron.agent.securitygroups_rpc [None req-6135443b-d502-4e12-9a62-2eade4e6d3ea 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:09:45 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00fd4a05f0957247d9fd379c1c8df76d88d437c953f23ba9bc0e4fffd5bd9e82/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:09:45 np0005546420.localdomain podman[315500]: 2025-12-05 10:09:45.641255989 +0000 UTC m=+0.158458632 container init 409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0418100-5fa8-4c07-a37e-c2897c03948c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:09:45 np0005546420.localdomain podman[315500]: 2025-12-05 10:09:45.65073434 +0000 UTC m=+0.167936983 container start 409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0418100-5fa8-4c07-a37e-c2897c03948c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 10:09:45 np0005546420.localdomain dnsmasq[315517]: started, version 2.85 cachesize 150
Dec 05 10:09:45 np0005546420.localdomain dnsmasq[315517]: DNS service limited to local subnets
Dec 05 10:09:45 np0005546420.localdomain dnsmasq[315517]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:09:45 np0005546420.localdomain dnsmasq[315517]: warning: no upstream servers configured
Dec 05 10:09:45 np0005546420.localdomain dnsmasq-dhcp[315517]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:09:45 np0005546420.localdomain dnsmasq[315517]: read /var/lib/neutron/dhcp/e0418100-5fa8-4c07-a37e-c2897c03948c/addn_hosts - 0 addresses
Dec 05 10:09:45 np0005546420.localdomain dnsmasq-dhcp[315517]: read /var/lib/neutron/dhcp/e0418100-5fa8-4c07-a37e-c2897c03948c/host
Dec 05 10:09:45 np0005546420.localdomain dnsmasq-dhcp[315517]: read /var/lib/neutron/dhcp/e0418100-5fa8-4c07-a37e-c2897c03948c/opts
Dec 05 10:09:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e141 e141: 6 total, 6 up, 6 in
Dec 05 10:09:45 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:45.713 262769 INFO neutron.agent.dhcp.agent [None req-5d63a748-23ce-4a09-a660-52c8c60fe5b0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:44Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a002850>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0d5cd0>], id=103b16e3-dd8c-4a06-bd26-11df5a0374f2, ip_allocation=immediate, mac_address=fa:16:3e:6c:d9:28, name=tempest-TagsExtTest-1675039667, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:39Z, description=, dns_domain=, id=e0418100-5fa8-4c07-a37e-c2897c03948c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TagsExtTest-test-network-1362827690, port_security_enabled=True, project_id=f6db8cbac53645ef9430332056699027, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=164, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1867, status=ACTIVE, subnets=['5cf3a27b-a0fd-4c95-83d5-f872ed42fc7c'], tags=[], tenant_id=f6db8cbac53645ef9430332056699027, updated_at=2025-12-05T10:09:39Z, vlan_transparent=None, network_id=e0418100-5fa8-4c07-a37e-c2897c03948c, port_security_enabled=True, project_id=f6db8cbac53645ef9430332056699027, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['007023a4-4e47-4b10-af03-71f50f200b06'], standard_attr_id=1891, status=DOWN, tags=[], tenant_id=f6db8cbac53645ef9430332056699027, updated_at=2025-12-05T10:09:44Z on network e0418100-5fa8-4c07-a37e-c2897c03948c
Dec 05 10:09:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:45.764 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:45 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:45.779 262769 INFO neutron.agent.dhcp.agent [None req-919bcb53-e39c-42bc-be88-270c27ff9fcf - - - - - -] DHCP configuration for ports {'a63da9a9-4675-49be-a11b-242a6646014e'} is completed
Dec 05 10:09:45 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:45.831 2 INFO neutron.agent.securitygroups_rpc [None req-fe53869a-662d-4fc8-99be-d69dbe8fa9ee 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:45 np0005546420.localdomain dnsmasq[315517]: read /var/lib/neutron/dhcp/e0418100-5fa8-4c07-a37e-c2897c03948c/addn_hosts - 1 addresses
Dec 05 10:09:45 np0005546420.localdomain dnsmasq-dhcp[315517]: read /var/lib/neutron/dhcp/e0418100-5fa8-4c07-a37e-c2897c03948c/host
Dec 05 10:09:45 np0005546420.localdomain dnsmasq-dhcp[315517]: read /var/lib/neutron/dhcp/e0418100-5fa8-4c07-a37e-c2897c03948c/opts
Dec 05 10:09:45 np0005546420.localdomain podman[315536]: 2025-12-05 10:09:45.929702057 +0000 UTC m=+0.047594211 container kill 409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0418100-5fa8-4c07-a37e-c2897c03948c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:09:46 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:46.138 262769 INFO neutron.agent.dhcp.agent [None req-73a2e4f0-9528-4239-b70a-8ee8a22cd801 - - - - - -] DHCP configuration for ports {'103b16e3-dd8c-4a06-bd26-11df5a0374f2'} is completed
Dec 05 10:09:46 np0005546420.localdomain ceph-mon[298353]: pgmap v292: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 2.0 KiB/s wr, 48 op/s
Dec 05 10:09:46 np0005546420.localdomain ceph-mon[298353]: osdmap e141: 6 total, 6 up, 6 in
Dec 05 10:09:46 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:46.532 2 INFO neutron.agent.securitygroups_rpc [None req-d97a5b35-e515-4748-b589-0a6808590190 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:09:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:46.611 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:46 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:46.693 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:42Z, description=, device_id=8fc4e302-9729-47ac-a171-f4a1207e0aaa, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f0ba90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f0b8b0>], id=278ac9bf-882f-4450-ae33-f61c71ded21e, ip_allocation=immediate, mac_address=fa:16:3e:c0:cc:c2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:35Z, description=, dns_domain=, id=845f9b11-7635-438e-8940-471dc1d869ab, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1588803316, port_security_enabled=True, project_id=74a17deba84b470ebe240fab8c99b64c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38970, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1833, status=ACTIVE, subnets=['a3c726a0-6a99-4101-98eb-345e0fe5fbae'], tags=[], tenant_id=74a17deba84b470ebe240fab8c99b64c, updated_at=2025-12-05T10:09:37Z, vlan_transparent=None, network_id=845f9b11-7635-438e-8940-471dc1d869ab, port_security_enabled=False, project_id=74a17deba84b470ebe240fab8c99b64c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1888, status=DOWN, tags=[], tenant_id=74a17deba84b470ebe240fab8c99b64c, updated_at=2025-12-05T10:09:43Z on network 845f9b11-7635-438e-8940-471dc1d869ab
Dec 05 10:09:46 np0005546420.localdomain dnsmasq[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/addn_hosts - 1 addresses
Dec 05 10:09:46 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/host
Dec 05 10:09:46 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/opts
Dec 05 10:09:46 np0005546420.localdomain podman[315574]: 2025-12-05 10:09:46.91947538 +0000 UTC m=+0.057021990 container kill fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:09:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:09:47 np0005546420.localdomain podman[315587]: 2025-12-05 10:09:47.048716825 +0000 UTC m=+0.096188162 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 10:09:47 np0005546420.localdomain podman[315587]: 2025-12-05 10:09:47.066433109 +0000 UTC m=+0.113904426 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:09:47 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
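The Started / health_status / exec_died / Deactivated quartet above is one cycle of a systemd-driven podman healthcheck: a transient unit runs `podman healthcheck run <container-id>`, which executes the check from the container's healthcheck config ('/openstack/healthcheck compute' here) and exits non-zero on failure. A sketch of invoking the same check by hand, with the container ID from the log:

    import subprocess

    def container_healthy(cid):
        """Run the container's configured healthcheck once.

        `podman healthcheck run` exits 0 on success and non-zero (printing
        "unhealthy") when the configured test fails.
        """
        res = subprocess.run(["podman", "healthcheck", "run", cid],
                             capture_output=True, text=True)
        return res.returncode == 0

    cid = "94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110"
    print("healthy" if container_healthy(cid) else "unhealthy")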
Dec 05 10:09:47 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:47.144 2 INFO neutron.agent.securitygroups_rpc [None req-473cdf56-0e01-4e1c-a4f3-92d7eeef2d53 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:09:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:09:47 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:47.212 262769 INFO neutron.agent.dhcp.agent [None req-1c1d71e7-ab0b-4c45-b2dd-cd69940558cd - - - - - -] DHCP configuration for ports {'278ac9bf-882f-4450-ae33-f61c71ded21e'} is completed
Dec 05 10:09:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:09:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156739 "" "Go-http-client/1.1"
Dec 05 10:09:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:09:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19204 "" "Go-http-client/1.1"
Dec 05 10:09:47 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:47.899 2 INFO neutron.agent.securitygroups_rpc [None req-9d4ec5a5-1538-4ac4-baa6-e4ad3b85127c 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:09:48 np0005546420.localdomain ceph-mon[298353]: pgmap v294: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 2.1 KiB/s wr, 50 op/s
Dec 05 10:09:48 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:09:48 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:48.711 2 INFO neutron.agent.securitygroups_rpc [None req-27722011-69f1-466e-9f91-5c5a0d5b82ea b1808a8aad0b4360880c390bd8362a00 74a17deba84b470ebe240fab8c99b64c - - default default] Security group member updated ['f197cddb-ae8c-4586-ad83-8ebb4f64e04c']
Dec 05 10:09:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:48.799 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:47Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f0ba60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f0bbb0>], id=a284f771-eb37-4dbb-bfb4-51ad9b695818, ip_allocation=immediate, mac_address=fa:16:3e:fb:32:26, name=tempest-FloatingIPNegativeTestJSON-506296083, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:35Z, description=, dns_domain=, id=845f9b11-7635-438e-8940-471dc1d869ab, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1588803316, port_security_enabled=True, project_id=74a17deba84b470ebe240fab8c99b64c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38970, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1833, status=ACTIVE, subnets=['a3c726a0-6a99-4101-98eb-345e0fe5fbae'], tags=[], tenant_id=74a17deba84b470ebe240fab8c99b64c, updated_at=2025-12-05T10:09:37Z, vlan_transparent=None, network_id=845f9b11-7635-438e-8940-471dc1d869ab, port_security_enabled=True, project_id=74a17deba84b470ebe240fab8c99b64c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f197cddb-ae8c-4586-ad83-8ebb4f64e04c'], standard_attr_id=1911, status=DOWN, tags=[], tenant_id=74a17deba84b470ebe240fab8c99b64c, updated_at=2025-12-05T10:09:48Z on network 845f9b11-7635-438e-8940-471dc1d869ab
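In the reload_allocations dumps above, dns_assignment and fixed_ips render as bare `<neutron.agent.linux.dhcp.DictModel object at 0x...>` because the wrapper's default repr is used. Illustrative only (Neutron's real DictModel lives in neutron.agent.linux.dhcp and differs in detail), a dict wrapper with attribute-style access and a readable repr, using values that appear elsewhere in this log:

    class DictModel(dict):
        """Dict exposing keys as attributes, with a readable repr."""

        def __getattr__(self, name):
            try:
                return self[name]
            except KeyError:
                raise AttributeError(name)

        def __repr__(self):
            return "DictModel(%s)" % dict.__repr__(self)

    # The CIDR and subnet id below come from other lines in this log.
    ip = DictModel(ip_address="10.100.0.3",
                   subnet_id="5cf3a27b-a0fd-4c95-83d5-f872ed42fc7c")
    print(ip.ip_address)  # attribute-style access
    print(ip)             # readable instead of a bare object repr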
Dec 05 10:09:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:09:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:09:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:09:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:09:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:09:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:09:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:09:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:09:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:09:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
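The openstack_network_exporter errors above are socket-discovery failures rather than datapath faults: ovs-appctl-style calls need the target daemon's control socket (conventionally <rundir>/<daemon>.<pid>.ctl), and ovn-northd does not run on this compute node at all. A quick diagnostic sketch, using the stock OVS/OVN rundirs rather than paths taken from this log:

    import glob
    import os

    # ovs-appctl locates a daemon's control socket as <rundir>/<name>.<pid>.ctl.
    for rundir in ("/var/run/openvswitch", "/var/run/ovn"):
        ctls = sorted(glob.glob(os.path.join(rundir, "*.ctl")))
        if ctls:
            for ctl in ctls:
                print("control socket:", ctl)
        else:
            print("no control sockets under", rundir)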
Dec 05 10:09:49 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:49.004 2 INFO neutron.agent.securitygroups_rpc [None req-957875af-a66a-4345-aa18-8b11fd11a0a9 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:49 np0005546420.localdomain dnsmasq[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/addn_hosts - 2 addresses
Dec 05 10:09:49 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/host
Dec 05 10:09:49 np0005546420.localdomain podman[315630]: 2025-12-05 10:09:49.024482745 +0000 UTC m=+0.064169639 container kill fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:09:49 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/opts
Dec 05 10:09:49 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:09:49.296 262769 INFO neutron.agent.dhcp.agent [None req-76b5734e-72f6-4408-bfb0-4bb475b57c4f - - - - - -] DHCP configuration for ports {'a284f771-eb37-4dbb-bfb4-51ad9b695818'} is completed
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.439930) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389440042, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2503, "num_deletes": 256, "total_data_size": 3815327, "memory_usage": 3862304, "flush_reason": "Manual Compaction"}
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "format": "json"}]: dispatch
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389457223, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 2476976, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22726, "largest_seqno": 25224, "table_properties": {"data_size": 2467689, "index_size": 5792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20212, "raw_average_key_size": 21, "raw_value_size": 2448692, "raw_average_value_size": 2577, "num_data_blocks": 249, "num_entries": 950, "num_filter_entries": 950, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929217, "oldest_key_time": 1764929217, "file_creation_time": 1764929389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 17348 microseconds, and 6965 cpu microseconds.
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.457281) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 2476976 bytes OK
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.457310) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.459517) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.459538) EVENT_LOG_v1 {"time_micros": 1764929389459532, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.459561) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 3804078, prev total WAL file size 3804078, number of live WAL files 2.
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.460628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end)
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(2418KB)], [39(15MB)]
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389460671, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 18984750, "oldest_snapshot_seqno": -1}
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 12493 keys, 17029062 bytes, temperature: kUnknown
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389553473, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 17029062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16957378, "index_size": 39327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31301, "raw_key_size": 333975, "raw_average_key_size": 26, "raw_value_size": 16744208, "raw_average_value_size": 1340, "num_data_blocks": 1498, "num_entries": 12493, "num_filter_entries": 12493, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.554150) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 17029062 bytes
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.556263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.2 rd, 183.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 15.7 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(14.5) write-amplify(6.9) OK, records in: 13030, records dropped: 537 output_compression: NoCompression
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.556301) EVENT_LOG_v1 {"time_micros": 1764929389556285, "job": 22, "event": "compaction_finished", "compaction_time_micros": 92988, "compaction_time_cpu_micros": 44912, "output_level": 6, "num_output_files": 1, "total_output_size": 17029062, "num_input_records": 13030, "num_output_records": 12493, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389557092, "job": 22, "event": "table_file_deletion", "file_number": 41}
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389560267, "job": 22, "event": "table_file_deletion", "file_number": 39}
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.460521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.560304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.560310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.560313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.560317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:09:49 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:09:49.560319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
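Jobs 21 and 22 above are one manual-compaction round in the mon's embedded RocksDB: flush the memtable to L0 table #41, compact 1@0 + 1@6 into L6 table #42, then delete the inputs and the old WAL. RocksDB tags the machine-readable parts with EVENT_LOG_v1 followed by a JSON payload, which makes them easy to mine from the journal; a sketch:

    import json
    import re
    import sys

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    # Feed journal text on stdin, e.g.  journalctl | python3 rocksdb_events.py
    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "flush_finished":
            print("flush job", ev["job"], "lsm_state", ev["lsm_state"])
        elif ev.get("event") == "compaction_finished":
            print("compaction job", ev["job"],
                  ev["num_input_records"], "->", ev["num_output_records"],
                  "records,", ev["total_output_size"], "bytes out")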
Dec 05 10:09:49 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:49.845 2 INFO neutron.agent.securitygroups_rpc [None req-cce2b7c3-75f1-4ea1-897e-6d61b0525126 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:09:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:50 np0005546420.localdomain ceph-mon[298353]: pgmap v295: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 47 op/s
Dec 05 10:09:50 np0005546420.localdomain ceph-mon[298353]: mgrmap e49: np0005546419.zhsnqq(active, since 8m), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 05 10:09:50 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:50.644 2 INFO neutron.agent.securitygroups_rpc [None req-029f581c-71e2-4ce7-b914-2c137b7b6c2a 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:09:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:50.818 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:51 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:51.411 2 INFO neutron.agent.securitygroups_rpc [None req-af3b553c-8339-4731-9e0d-445f98eb7889 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:09:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:51.616 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:51 np0005546420.localdomain ceph-mon[298353]: pgmap v296: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 2.5 KiB/s wr, 13 op/s
Dec 05 10:09:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "64fc08e4-0b7c-4366-893e-01887e7442f6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:09:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "64fc08e4-0b7c-4366-893e-01887e7442f6", "format": "json"}]: dispatch
Dec 05 10:09:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:09:53 np0005546420.localdomain ceph-mon[298353]: pgmap v297: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 2.5 KiB/s wr, 0 op/s
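The recurring pgmap lines are periodic PG and usage summaries relayed by the mon. If you want to trend them from the journal, the format is regular enough for a throwaway parser; a sketch against the v297 line above:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v297: 177 pgs: 177 active+clean; 145 MiB data, "
            "786 MiB used, 41 GiB / 42 GiB avail; 2.5 KiB/s wr, 0 op/s")
    m = PGMAP.search(line)
    assert m is not None
    print(m.group("pgs"), "PGs,", m.group("states").strip(),
          "| used", m.group("used"), "of", m.group("total"))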
Dec 05 10:09:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:09:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:09:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:09:55 np0005546420.localdomain podman[315651]: 2025-12-05 10:09:55.517444988 +0000 UTC m=+0.088642781 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:09:55 np0005546420.localdomain podman[315651]: 2025-12-05 10:09:55.534336596 +0000 UTC m=+0.105534379 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:09:55 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
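node_exporter's config_data above maps port 9100 and enables the systemd collector with a unit-include filter for edpm_*/ovs*/virt*/rsyslog units while disabling many default collectors. A scrape sketch against that port:

    import urllib.request

    # Port 9100 comes from 'ports': ['9100:9100'] in the config_data above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
        for raw in r:
            line = raw.decode()
            # The systemd collector is enabled with a unit-include filter,
            # so node_systemd_* series should be present.
            if line.startswith("node_systemd_unit_state"):
                print(line.rstrip())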
Dec 05 10:09:55 np0005546420.localdomain podman[315652]: 2025-12-05 10:09:55.619158878 +0000 UTC m=+0.184582944 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 10:09:55 np0005546420.localdomain podman[315652]: 2025-12-05 10:09:55.652565583 +0000 UTC m=+0.217989679 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:09:55 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:09:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:55.856 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:56 np0005546420.localdomain ceph-mon[298353]: pgmap v298: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 2.5 KiB/s wr, 0 op/s
Dec 05 10:09:56 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:56.435 2 INFO neutron.agent.securitygroups_rpc [None req-399fedc7-2e19-4b21-a4fb-b5420b7d54a3 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:09:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:09:56.616 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:09:57 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "64fc08e4-0b7c-4366-893e-01887e7442f6", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 05 10:09:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:09:57 np0005546420.localdomain podman[315692]: 2025-12-05 10:09:57.510074815 +0000 UTC m=+0.078809578 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:09:57 np0005546420.localdomain podman[315692]: 2025-12-05 10:09:57.546318077 +0000 UTC m=+0.115052890 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:09:57 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:57.552 2 INFO neutron.agent.securitygroups_rpc [None req-53ce10df-a7a5-49c1-9080-2e2f4825bd95 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:09:57 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:09:58 np0005546420.localdomain ceph-mon[298353]: pgmap v299: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 4.4 KiB/s wr, 1 op/s
Dec 05 10:09:58 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "64fc08e4-0b7c-4366-893e-01887e7442f6", "format": "json"}]: dispatch
Dec 05 10:09:58 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "64fc08e4-0b7c-4366-893e-01887e7442f6", "force": true, "format": "json"}]: dispatch
Dec 05 10:09:59 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:59.281 2 INFO neutron.agent.securitygroups_rpc [None req-1271ce57-15dc-48ec-9676-311c69f911c2 b1808a8aad0b4360880c390bd8362a00 74a17deba84b470ebe240fab8c99b64c - - default default] Security group member updated ['f197cddb-ae8c-4586-ad83-8ebb4f64e04c']
Dec 05 10:09:59 np0005546420.localdomain dnsmasq[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/addn_hosts - 1 addresses
Dec 05 10:09:59 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/host
Dec 05 10:09:59 np0005546420.localdomain podman[315728]: 2025-12-05 10:09:59.553989226 +0000 UTC m=+0.065420479 container kill fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:09:59 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/opts
Dec 05 10:09:59 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:59.650 2 INFO neutron.agent.securitygroups_rpc [None req-c144ec7e-ee0e-4dfd-b220-2497e51dbdcc 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:09:59 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:09:59.664 2 INFO neutron.agent.securitygroups_rpc [None req-aea036ab-5bba-4602-a174-e43a3c6f9829 71a1dc47c8964743be7069896a3eb55e f6db8cbac53645ef9430332056699027 - - default default] Security group member updated ['007023a4-4e47-4b10-af03-71f50f200b06']
Dec 05 10:10:00 np0005546420.localdomain dnsmasq[315517]: read /var/lib/neutron/dhcp/e0418100-5fa8-4c07-a37e-c2897c03948c/addn_hosts - 0 addresses
Dec 05 10:10:00 np0005546420.localdomain dnsmasq-dhcp[315517]: read /var/lib/neutron/dhcp/e0418100-5fa8-4c07-a37e-c2897c03948c/host
Dec 05 10:10:00 np0005546420.localdomain podman[315766]: 2025-12-05 10:10:00.005675552 +0000 UTC m=+0.062772318 container kill 409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0418100-5fa8-4c07-a37e-c2897c03948c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:10:00 np0005546420.localdomain dnsmasq-dhcp[315517]: read /var/lib/neutron/dhcp/e0418100-5fa8-4c07-a37e-c2897c03948c/opts
Dec 05 10:10:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:00 np0005546420.localdomain ceph-mon[298353]: pgmap v300: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 4.2 KiB/s wr, 1 op/s
Dec 05 10:10:00 np0005546420.localdomain ceph-mon[298353]: mgrmap e50: np0005546419.zhsnqq(active, since 8m), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 05 10:10:00 np0005546420.localdomain ceph-mon[298353]: overall HEALTH_OK
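"overall HEALTH_OK" is the mon's periodic health summary. Polling the same information programmatically, a sketch via the ceph CLI's JSON output:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(status["health"]["status"])         # e.g. HEALTH_OK, as above
    print(status["pgmap"]["num_pgs"], "PGs")  # 177 in this cluster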
Dec 05 10:10:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:00.896 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1162ea5e-f099-4d47-a2b8-fe2def6b3849", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:10:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1162ea5e-f099-4d47-a2b8-fe2def6b3849", "format": "json"}]: dispatch
Dec 05 10:10:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:10:01 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:01Z|00199|binding|INFO|Removing iface tapb5e9fd71-55 ovn-installed in OVS
Dec 05 10:10:01 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:01Z|00200|binding|INFO|Removing lport b5e9fd71-558f-40a5-adb7-f40dbd99bcd9 ovn-installed in OVS
Dec 05 10:10:01 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:01.546 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 72533c3b-5924-45f0-be91-cf4a13365cff with type ""
Dec 05 10:10:01 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:01.548 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-e0418100-5fa8-4c07-a37e-c2897c03948c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0418100-5fa8-4c07-a37e-c2897c03948c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6db8cbac53645ef9430332056699027', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa582ea3-98cf-49be-95bb-f29666f1056e, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=b5e9fd71-558f-40a5-adb7-f40dbd99bcd9) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:01.547 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:01 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:01.550 159503 INFO neutron.agent.ovn.metadata.agent [-] Port b5e9fd71-558f-40a5-adb7-f40dbd99bcd9 in datapath e0418100-5fa8-4c07-a37e-c2897c03948c unbound from our chassis
Dec 05 10:10:01 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:01.553 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e0418100-5fa8-4c07-a37e-c2897c03948c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:01.553 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:01 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:01.554 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4b0ff5-e2e5-400c-9d54-f1bc8cffb05d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
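The ovn_metadata_agent burst above is its teardown path: a Port_Binding delete in the OVN southbound DB matches PortBindingDeletedEvent, the agent logs the DHCP port as unbound from this chassis and, finding no remaining VIF ports on datapath e0418100-5fa8-4c07-a37e-c2897c03948c, tears the metadata namespace down. The matching layer is ovsdbapp's row-event API, whose constructor shape the log itself prints (events=('delete',), table='Port_Binding', conditions=None); a minimal subscriber sketch (class name and wiring are illustrative, not Neutron's actual code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingDeleted(row_event.RowEvent):
        """Fire on Port_Binding row deletion, like the DELETE matched above."""

        def __init__(self):
            # Same shape the log prints: events=('delete',),
            # table='Port_Binding', conditions=None.
            super().__init__((self.ROW_DELETE,), "Port_Binding", None)

        def run(self, event, row, old):
            print("port binding deleted:", row.logical_port)

    # Wiring is deployment-specific; roughly, given an ovsdbapp IDL connected
    # to the OVN southbound DB:
    #   idl.notify_handler.watch_event(PortBindingDeleted())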
Dec 05 10:10:01 np0005546420.localdomain dnsmasq[315517]: exiting on receipt of SIGTERM
Dec 05 10:10:01 np0005546420.localdomain podman[315803]: 2025-12-05 10:10:01.56220849 +0000 UTC m=+0.067709048 container kill 409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0418100-5fa8-4c07-a37e-c2897c03948c, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:10:01 np0005546420.localdomain systemd[1]: libpod-409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410.scope: Deactivated successfully.
Dec 05 10:10:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:01.618 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:01 np0005546420.localdomain podman[315815]: 2025-12-05 10:10:01.638071847 +0000 UTC m=+0.059484736 container died 409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0418100-5fa8-4c07-a37e-c2897c03948c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:01 np0005546420.localdomain systemd[1]: tmp-crun.MoDK0E.mount: Deactivated successfully.
Dec 05 10:10:01 np0005546420.localdomain podman[315815]: 2025-12-05 10:10:01.675919778 +0000 UTC m=+0.097332607 container cleanup 409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0418100-5fa8-4c07-a37e-c2897c03948c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:10:01 np0005546420.localdomain systemd[1]: libpod-conmon-409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410.scope: Deactivated successfully.
Dec 05 10:10:01 np0005546420.localdomain podman[315817]: 2025-12-05 10:10:01.717244656 +0000 UTC m=+0.132354861 container remove 409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e0418100-5fa8-4c07-a37e-c2897c03948c, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
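The kill, died, cleanup, remove sequence above is the normal retirement of a per-network dnsmasq container once its network is gone: SIGTERM makes dnsmasq exit cleanly ("exiting on receipt of SIGTERM"), after which the agent removes the container. Doing the same by hand, a sketch using the container name from the log:

    import subprocess

    name = "neutron-dnsmasq-qdhcp-e0418100-5fa8-4c07-a37e-c2897c03948c"
    # SIGTERM: dnsmasq exits cleanly on it, as the journal line above shows.
    subprocess.run(["podman", "kill", "--signal", "SIGTERM", name], check=True)
    # Remove the stopped container; podman emits died/cleanup/remove events.
    subprocess.run(["podman", "rm", name], check=True)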
Dec 05 10:10:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:01.728 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:01 np0005546420.localdomain kernel: device tapb5e9fd71-55 left promiscuous mode
Dec 05 10:10:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:01.746 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:01 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:01.760 262769 INFO neutron.agent.dhcp.agent [None req-6ec4a429-4946-4eb9-a98e-d2100b2df774 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:01 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:01.761 262769 INFO neutron.agent.dhcp.agent [None req-6ec4a429-4946-4eb9-a98e-d2100b2df774 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:02 np0005546420.localdomain dnsmasq[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/addn_hosts - 0 addresses
Dec 05 10:10:02 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/host
Dec 05 10:10:02 np0005546420.localdomain podman[315861]: 2025-12-05 10:10:02.223098003 +0000 UTC m=+0.062459856 container kill fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:10:02 np0005546420.localdomain dnsmasq-dhcp[315356]: read /var/lib/neutron/dhcp/845f9b11-7635-438e-8940-471dc1d869ab/opts
Dec 05 10:10:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:02.226 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:02 np0005546420.localdomain ceph-mon[298353]: pgmap v301: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 10 KiB/s wr, 3 op/s
Dec 05 10:10:02 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:02.432 2 INFO neutron.agent.securitygroups_rpc [None req-f945fec7-1421-4326-98f0-4d78b816214c 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:10:02 np0005546420.localdomain kernel: device tap33f4e9e6-e2 left promiscuous mode
Dec 05 10:10:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:02.560 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-00fd4a05f0957247d9fd379c1c8df76d88d437c953f23ba9bc0e4fffd5bd9e82-merged.mount: Deactivated successfully.
Dec 05 10:10:02 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-409ee524d0968878698f7cc8972931d5f94eb6174a993b47bf217a54bcaba410-userdata-shm.mount: Deactivated successfully.
Dec 05 10:10:02 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2de0418100\x2d5fa8\x2d4c07\x2da37e\x2dc2897c03948c.mount: Deactivated successfully.
Dec 05 10:10:02 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:02Z|00201|binding|INFO|Releasing lport 33f4e9e6-e266-4ca4-8d43-59348a40730f from this chassis (sb_readonly=0)
Dec 05 10:10:02 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:02Z|00202|binding|INFO|Setting lport 33f4e9e6-e266-4ca4-8d43-59348a40730f down in Southbound
Dec 05 10:10:02 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:02.576 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-845f9b11-7635-438e-8940-471dc1d869ab', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-845f9b11-7635-438e-8940-471dc1d869ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74a17deba84b470ebe240fab8c99b64c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70152e9f-e80f-4dd0-9444-6db4274556d0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=33f4e9e6-e266-4ca4-8d43-59348a40730f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:02 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:02.578 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 33f4e9e6-e266-4ca4-8d43-59348a40730f in datapath 845f9b11-7635-438e-8940-471dc1d869ab unbound from our chassis
Dec 05 10:10:02 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:02.580 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 845f9b11-7635-438e-8940-471dc1d869ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:02 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:02.581 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4887aa-1ccf-4952-899e-3f954b1eb5a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
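The "Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, ...)" line above is ovsdbapp dispatching an OVN southbound row change to the metadata agent, which then logs the port as unbound and tears the namespace down. A minimal sketch of the same pattern with ovsdbapp's RowEvent; the constructor arguments mirror what the log prints, while the handler body is a placeholder, not neutron's actual logic:

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    """Fires on Port_Binding updates, as matched in the log above."""

    def __init__(self):
        # events=('update',), table='Port_Binding', conditions=None,
        # exactly the tuple the agent's DEBUG line reports.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # Placeholder: the real agent provisions or tears down the
        # per-datapath metadata namespace from here. Typically this
        # event is registered on the SB IDL's notify_handler.
        print('Port %s changed (up=%s)' % (row.logical_port, row.up))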
Dec 05 10:10:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:02.582 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:02.728 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:02 np0005546420.localdomain sudo[315882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:10:02 np0005546420.localdomain sudo[315882]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:10:02 np0005546420.localdomain sudo[315882]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:02 np0005546420.localdomain sudo[315900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:10:02 np0005546420.localdomain sudo[315900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:10:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:10:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4132078698' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:10:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4132078698' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:10:03 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:03.406 2 INFO neutron.agent.securitygroups_rpc [None req-cdcb0b2b-f06b-4051-a380-4fb87451cfb7 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:10:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4132078698' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4132078698' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
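The df and "osd pool get-quota" dispatches above are the client.openstack identity polling pool capacity through this peon monitor; both are read-only and show up only on the audit channel at DBG level. A minimal sketch of issuing the same mon commands from Python with librados; the conffile path and client name are assumptions for illustration:

import json
import rados

# Assumed paths/identity; substitute the cluster's real conf and keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()

for cmd in ({"prefix": "df", "format": "json"},
            {"prefix": "osd pool get-quota", "pool": "volumes",
             "format": "json"}):
    # mon_command takes the JSON command plus an (empty) input buffer and
    # returns (return code, output buffer, status string).
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    print(cmd["prefix"], '->', ret, json.loads(outbuf or b'{}'))

cluster.shutdown()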
Dec 05 10:10:03 np0005546420.localdomain sudo[315900]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:03 np0005546420.localdomain sudo[315951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:10:03 np0005546420.localdomain sudo[315951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:10:03 np0005546420.localdomain sudo[315951]: pam_unix(sudo:session): session closed for user root
Dec 05 10:10:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:04.128 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:10:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:04.129 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:10:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:04.129 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
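The Acquiring/acquired/released triple around "_check_child_processes" is the standard DEBUG tracing oslo.concurrency emits for a synchronized method, including the waited/held timings. A minimal sketch of the decorator that produces those three lines; the method body is a stub:

from oslo_concurrency import lockutils

class ProcessMonitor(object):
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes(self):
        # lockutils logs "Acquiring lock", "Lock ... acquired ... waited"
        # and "Lock ... released ... held" around this body.
        pass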
Dec 05 10:10:04 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:04.378 2 INFO neutron.agent.securitygroups_rpc [None req-79e664b4-556e-437e-b1e4-1ebb8a566d0d 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 05 10:10:04 np0005546420.localdomain ceph-mon[298353]: pgmap v302: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 8.4 KiB/s wr, 3 op/s
Dec 05 10:10:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:10:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:10:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:10:04 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:10:04 np0005546420.localdomain dnsmasq[315356]: exiting on receipt of SIGTERM
Dec 05 10:10:04 np0005546420.localdomain podman[315985]: 2025-12-05 10:10:04.532569851 +0000 UTC m=+0.068106061 container kill fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:10:04 np0005546420.localdomain systemd[1]: libpod-fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f.scope: Deactivated successfully.
Dec 05 10:10:04 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:04.561 262769 INFO neutron.agent.linux.ip_lib [None req-a2cac35f-28f2-4d69-a0ae-571fe75bfa5c - - - - - -] Device taped52ce20-56 cannot be used as it has no MAC address
Dec 05 10:10:04 np0005546420.localdomain podman[316005]: 2025-12-05 10:10:04.610821032 +0000 UTC m=+0.060133536 container died fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:04.615 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:04 np0005546420.localdomain kernel: device taped52ce20-56 entered promiscuous mode
Dec 05 10:10:04 np0005546420.localdomain systemd[1]: tmp-crun.fFzx0K.mount: Deactivated successfully.
Dec 05 10:10:04 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:04Z|00203|binding|INFO|Claiming lport ed52ce20-56e8-46ee-8853-eb3d75d6d734 for this chassis.
Dec 05 10:10:04 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:04Z|00204|binding|INFO|ed52ce20-56e8-46ee-8853-eb3d75d6d734: Claiming unknown
Dec 05 10:10:04 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929404.6320] manager: (taped52ce20-56): new Generic device (/org/freedesktop/NetworkManager/Devices/37)
Dec 05 10:10:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:04.631 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:04 np0005546420.localdomain systemd-udevd[316037]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:10:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:04.641 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0ebdf5f32df4c2586aaaef64b02a0b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfdaf546-7b94-482d-a2ec-8ccf9bd890db, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=ed52ce20-56e8-46ee-8853-eb3d75d6d734) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:04.643 159503 INFO neutron.agent.ovn.metadata.agent [-] Port ed52ce20-56e8-46ee-8853-eb3d75d6d734 in datapath bdc81de5-fea1-4b62-bab3-e90dc44a5fd9 bound to our chassis
Dec 05 10:10:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:04.646 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bdc81de5-fea1-4b62-bab3-e90dc44a5fd9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:10:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:04.646 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad8d564-c93f-4271-b03a-946a678c340f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:04 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on taped52ce20-56: No such device
Dec 05 10:10:04 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on taped52ce20-56: No such device
Dec 05 10:10:04 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:04Z|00205|binding|INFO|Setting lport ed52ce20-56e8-46ee-8853-eb3d75d6d734 ovn-installed in OVS
Dec 05 10:10:04 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:04Z|00206|binding|INFO|Setting lport ed52ce20-56e8-46ee-8853-eb3d75d6d734 up in Southbound
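The Claiming, "Setting lport ... ovn-installed in OVS", and "up in Southbound" sequence is ovn-controller binding the new DHCP tap (taped52ce20-56) to this chassis. A sketch that checks the same ovn-installed marker on the OVS interface; the interface name is taken from the log, and external_ids:ovn-installed is the key ovn-controller sets once installation completes:

import subprocess

def ovn_installed(iface='taped52ce20-56'):
    """True once ovn-controller has marked the port as installed in OVS."""
    out = subprocess.run(
        ['ovs-vsctl', 'get', 'Interface', iface, 'external_ids:ovn-installed'],
        capture_output=True, text=True)
    return out.returncode == 0 and out.stdout.strip() == '"true"'

print(ovn_installed())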
Dec 05 10:10:04 np0005546420.localdomain podman[316005]: 2025-12-05 10:10:04.671819802 +0000 UTC m=+0.121132266 container cleanup fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:10:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:04.671 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:04 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on taped52ce20-56: No such device
Dec 05 10:10:04 np0005546420.localdomain systemd[1]: libpod-conmon-fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f.scope: Deactivated successfully.
Dec 05 10:10:04 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on taped52ce20-56: No such device
Dec 05 10:10:04 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on taped52ce20-56: No such device
Dec 05 10:10:04 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on taped52ce20-56: No such device
Dec 05 10:10:04 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on taped52ce20-56: No such device
Dec 05 10:10:04 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on taped52ce20-56: No such device
Dec 05 10:10:04 np0005546420.localdomain podman[316007]: 2025-12-05 10:10:04.699345467 +0000 UTC m=+0.141935755 container remove fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-845f9b11-7635-438e-8940-471dc1d869ab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:10:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:04.707 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:04.727 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:04 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:04.779 262769 INFO neutron.agent.dhcp.agent [None req-ed5b888e-4d60-4ee1-8cc6-bebf9024add9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:04 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:04.992 2 INFO neutron.agent.securitygroups_rpc [None req-c4286b11-826c-4b9b-b924-7f63ea6cc9ba 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:10:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-01548a1cacb78e767711e8c7798fff45cbe6d1e108bad0fb2594f2dad075b00d-merged.mount: Deactivated successfully.
Dec 05 10:10:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa7b158f630fb15220668dd7e3f191e325c5219145c75a5b6846697b9da41e6f-userdata-shm.mount: Deactivated successfully.
Dec 05 10:10:05 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d845f9b11\x2d7635\x2d438e\x2d8940\x2d471dc1d869ab.mount: Deactivated successfully.
Dec 05 10:10:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "1162ea5e-f099-4d47-a2b8-fe2def6b3849", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 05 10:10:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:10:05 np0005546420.localdomain podman[316111]: 2025-12-05 10:10:05.596252151 +0000 UTC m=+0.088804445 container create a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:10:05 np0005546420.localdomain systemd[1]: Started libpod-conmon-a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05.scope.
Dec 05 10:10:05 np0005546420.localdomain systemd[1]: tmp-crun.PGKf4D.mount: Deactivated successfully.
Dec 05 10:10:05 np0005546420.localdomain podman[316111]: 2025-12-05 10:10:05.554013035 +0000 UTC m=+0.046565349 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:10:05 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:10:05 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf8e334a334371c8edf4d4acf9732a6ef3c75c27d316f7167a05d2a9f5af4a6c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:05 np0005546420.localdomain podman[316111]: 2025-12-05 10:10:05.673289274 +0000 UTC m=+0.165841558 container init a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:10:05 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:05.678 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:05 np0005546420.localdomain podman[316111]: 2025-12-05 10:10:05.683752866 +0000 UTC m=+0.176305150 container start a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:10:05 np0005546420.localdomain dnsmasq[316129]: started, version 2.85 cachesize 150
Dec 05 10:10:05 np0005546420.localdomain dnsmasq[316129]: DNS service limited to local subnets
Dec 05 10:10:05 np0005546420.localdomain dnsmasq[316129]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:10:05 np0005546420.localdomain dnsmasq[316129]: warning: no upstream servers configured
Dec 05 10:10:05 np0005546420.localdomain dnsmasq-dhcp[316129]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:10:05 np0005546420.localdomain dnsmasq[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/addn_hosts - 0 addresses
Dec 05 10:10:05 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/host
Dec 05 10:10:05 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/opts
Dec 05 10:10:05 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:05.848 262769 INFO neutron.agent.dhcp.agent [None req-36647a9f-1be7-44b8-8ce4-036c9a9c8d3c - - - - - -] DHCP configuration for ports {'b04ffdcb-2e49-4f4d-abe1-de0c051f683f'} is completed
Dec 05 10:10:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:05.940 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:06.310 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:06 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:06.317 2 INFO neutron.agent.securitygroups_rpc [None req-aed78872-6dc7-4af4-91c3-35a3e4865f15 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:10:06 np0005546420.localdomain ceph-mon[298353]: pgmap v303: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 8.4 KiB/s wr, 3 op/s
Dec 05 10:10:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:06.620 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:10:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:10:07 np0005546420.localdomain podman[316131]: 2025-12-05 10:10:07.523672507 +0000 UTC m=+0.093394186 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:10:07 np0005546420.localdomain podman[316131]: 2025-12-05 10:10:07.531561929 +0000 UTC m=+0.101283648 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:10:07 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:10:07 np0005546420.localdomain podman[316130]: 2025-12-05 10:10:07.620814247 +0000 UTC m=+0.189710031 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Dec 05 10:10:07 np0005546420.localdomain podman[316130]: 2025-12-05 10:10:07.633840717 +0000 UTC m=+0.202736501 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Dec 05 10:10:07 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
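The "Started /usr/bin/podman healthcheck run <id>" units above are the systemd timers podman creates for containers with a healthcheck; each run executes the test command from the container's healthcheck config, and exit status 0 is recorded as health_status=healthy. A minimal sketch that runs and reads a container's health the same way, using the podman_exporter container name from the log:

import subprocess

def health(container='podman_exporter'):
    """Run the configured healthcheck once and return (rc, recorded status)."""
    rc = subprocess.run(['podman', 'healthcheck', 'run', container]).returncode
    # rc 0 == healthy; the result is also stored in the container state
    # (.State.Healthcheck.Status on older podman releases).
    status = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}', container],
        capture_output=True, text=True).stdout.strip()
    return rc, status

print(health())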
Dec 05 10:10:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:07.861 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:07.863 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 05 10:10:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:07.865 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:07.866 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[5f46b8e3-085d-45a6-9f64-a2ee32d6db63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:08 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:08.568 2 INFO neutron.agent.securitygroups_rpc [None req-258a12e6-efff-47e2-a44d-bc0a6db9eb2b 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:10:08 np0005546420.localdomain ceph-mon[298353]: pgmap v304: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 9.4 KiB/s wr, 3 op/s
Dec 05 10:10:08 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:08.810 2 INFO neutron.agent.securitygroups_rpc [None req-cb726af7-74e7-48fe-b89d-484b3da12128 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:10:09 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:09.286 2 INFO neutron.agent.securitygroups_rpc [None req-07bf7634-d21a-495b-aa67-caa041a79950 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:10:09 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:09.339 2 INFO neutron.agent.securitygroups_rpc [None req-d5cbd288-2520-4e7e-92e0-afbba81cd445 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:10:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1162ea5e-f099-4d47-a2b8-fe2def6b3849", "format": "json"}]: dispatch
Dec 05 10:10:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1162ea5e-f099-4d47-a2b8-fe2def6b3849", "force": true, "format": "json"}]: dispatch
Dec 05 10:10:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:10 np0005546420.localdomain ceph-mon[298353]: pgmap v305: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 7.2 KiB/s wr, 2 op/s
Dec 05 10:10:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:10.986 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:11.714 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d4ffb057-c627-44bd-a232-379e570c56f8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:10:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d4ffb057-c627-44bd-a232-379e570c56f8", "format": "json"}]: dispatch
Dec 05 10:10:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:10:11 np0005546420.localdomain ceph-mon[298353]: pgmap v306: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 13 KiB/s wr, 5 op/s
Dec 05 10:10:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:10:12 np0005546420.localdomain systemd[1]: tmp-crun.vsMERH.mount: Deactivated successfully.
Dec 05 10:10:12 np0005546420.localdomain podman[316173]: 2025-12-05 10:10:12.52983526 +0000 UTC m=+0.092158157 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:10:12 np0005546420.localdomain podman[316173]: 2025-12-05 10:10:12.601444647 +0000 UTC m=+0.163767504 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:12 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:10:12 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:12.617 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:10:11Z, description=, device_id=88b20a08-c0cc-4c80-9ba7-0822c11a052d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ead280>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ead400>], id=1968dfe6-33e8-423d-bd50-c206f4f02299, ip_allocation=immediate, mac_address=fa:16:3e:9e:1c:07, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:59Z, description=, dns_domain=, id=bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-421118809, port_security_enabled=True, project_id=b0ebdf5f32df4c2586aaaef64b02a0b5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50009, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1942, status=ACTIVE, subnets=['34022c23-3643-472e-a43f-8a06fcedcf55'], tags=[], tenant_id=b0ebdf5f32df4c2586aaaef64b02a0b5, updated_at=2025-12-05T10:10:02Z, vlan_transparent=None, network_id=bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, port_security_enabled=False, project_id=b0ebdf5f32df4c2586aaaef64b02a0b5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1983, status=DOWN, tags=[], tenant_id=b0ebdf5f32df4c2586aaaef64b02a0b5, updated_at=2025-12-05T10:10:12Z on network bdc81de5-fea1-4b62-bab3-e90dc44a5fd9
Dec 05 10:10:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e142 e142: 6 total, 6 up, 6 in
Dec 05 10:10:12 np0005546420.localdomain podman[316213]: 2025-12-05 10:10:12.861032131 +0000 UTC m=+0.068796502 container kill a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:10:12 np0005546420.localdomain dnsmasq[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/addn_hosts - 1 addresses
Dec 05 10:10:12 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/host
Dec 05 10:10:12 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/opts
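Note that the podman kill event just above did not stop dnsmasq: PID 316129 stays up and immediately re-reads addn_hosts/host/opts (now with 1 address), unlike the SIGTERM exits earlier in this window. The DHCP agent rewrites the per-network files and delivers a signal that dnsmasq treats as a reload. A sketch of that step, assuming the reload signal is delivered as SIGHUP via podman kill against the per-network container, which is consistent with these events but not confirmed by the log itself:

import subprocess

NETWORK = 'bdc81de5-fea1-4b62-bab3-e90dc44a5fd9'
CONTAINER = 'neutron-dnsmasq-qdhcp-%s' % NETWORK
HOSTS = '/var/lib/neutron/dhcp/%s/addn_hosts' % NETWORK

def reload_dnsmasq(entries):
    """Rewrite the additional-hosts file, then signal dnsmasq to reload."""
    # addn-hosts files use /etc/hosts format: "IP<tab>name" per line.
    with open(HOSTS, 'w') as f:
        f.writelines('%s\t%s\n' % (ip, name) for ip, name in entries)
    # dnsmasq re-reads its hosts/opts files on SIGHUP instead of restarting,
    # matching the "read ... addn_hosts - 1 addresses" line above.
    subprocess.run(['podman', 'kill', '--signal', 'HUP', CONTAINER], check=True)

reload_dnsmasq([('10.100.0.5', 'host-10-100-0-5.openstacklocal')])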
Dec 05 10:10:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:12.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:12.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
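The pair of nova_compute lines above shows oslo.service's periodic-task runner invoking ComputeManager._reclaim_queued_deletes, which returns immediately because CONF.reclaim_instance_interval <= 0. A minimal sketch of that decorator pattern; the option registration and spacing value are illustrative, not nova's configuration:

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

class ComputeManager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _reclaim_queued_deletes(self, context):
        # Mirrors the guard in the log: a non-positive interval
        # makes the task a no-op every cycle.
        if CONF.reclaim_instance_interval <= 0:
            return

mgr = ComputeManager(CONF)
mgr.run_periodic_tasks(context=None)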
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:10:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:10:13 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:13.125 262769 INFO neutron.agent.dhcp.agent [None req-d9de3538-1f37-44be-b139-b76efaae1791 - - - - - -] DHCP configuration for ports {'1968dfe6-33e8-423d-bd50-c206f4f02299'} is completed
Dec 05 10:10:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:13.549 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:13.551 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 05 10:10:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:13.553 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:13.555 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[72c539b7-a87a-40a0-b732-be8b7d61ee34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:13 np0005546420.localdomain ceph-mon[298353]: osdmap e142: 6 total, 6 up, 6 in
Dec 05 10:10:13 np0005546420.localdomain ceph-mon[298353]: pgmap v308: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 8.0 KiB/s wr, 4 op/s
Dec 05 10:10:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d4ffb057-c627-44bd-a232-379e570c56f8", "format": "json"}]: dispatch
Dec 05 10:10:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d4ffb057-c627-44bd-a232-379e570c56f8", "force": true, "format": "json"}]: dispatch
Dec 05 10:10:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e143 e143: 6 total, 6 up, 6 in
Dec 05 10:10:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:14.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:15.397 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:15.504 262769 INFO neutron.agent.linux.ip_lib [None req-98558c05-c9f7-410a-83fd-88cc57f186ca - - - - - -] Device tapfa974c5a-64 cannot be used as it has no MAC address
Dec 05 10:10:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:15.536 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:15 np0005546420.localdomain kernel: device tapfa974c5a-64 entered promiscuous mode
Dec 05 10:10:15 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929415.5463] manager: (tapfa974c5a-64): new Generic device (/org/freedesktop/NetworkManager/Devices/38)
Dec 05 10:10:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:15.550 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:10:11Z, description=, device_id=88b20a08-c0cc-4c80-9ba7-0822c11a052d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a8d4f40>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a8d4ca0>], id=1968dfe6-33e8-423d-bd50-c206f4f02299, ip_allocation=immediate, mac_address=fa:16:3e:9e:1c:07, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:59Z, description=, dns_domain=, id=bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-421118809, port_security_enabled=True, project_id=b0ebdf5f32df4c2586aaaef64b02a0b5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50009, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1942, status=ACTIVE, subnets=['34022c23-3643-472e-a43f-8a06fcedcf55'], tags=[], tenant_id=b0ebdf5f32df4c2586aaaef64b02a0b5, updated_at=2025-12-05T10:10:02Z, vlan_transparent=None, network_id=bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, port_security_enabled=False, project_id=b0ebdf5f32df4c2586aaaef64b02a0b5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1983, status=DOWN, tags=[], tenant_id=b0ebdf5f32df4c2586aaaef64b02a0b5, updated_at=2025-12-05T10:10:12Z on network bdc81de5-fea1-4b62-bab3-e90dc44a5fd9
Dec 05 10:10:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:15.550 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:15 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:15Z|00207|binding|INFO|Claiming lport fa974c5a-64bf-4671-a776-ad34f0f9de1f for this chassis.
Dec 05 10:10:15 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:15Z|00208|binding|INFO|fa974c5a-64bf-4671-a776-ad34f0f9de1f: Claiming unknown
Dec 05 10:10:15 np0005546420.localdomain systemd-udevd[316243]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:10:15 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:15.564 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-8adbc708-ce3a-4885-a714-2e1429dac54a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8adbc708-ce3a-4885-a714-2e1429dac54a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21ac3546-112a-41e6-a0c1-08447db40291, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=fa974c5a-64bf-4671-a776-ad34f0f9de1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:15 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:15.566 159503 INFO neutron.agent.ovn.metadata.agent [-] Port fa974c5a-64bf-4671-a776-ad34f0f9de1f in datapath 8adbc708-ce3a-4885-a714-2e1429dac54a bound to our chassis
Dec 05 10:10:15 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:15.568 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8adbc708-ce3a-4885-a714-2e1429dac54a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:10:15 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:15.569 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[659adf59-456f-4607-b72d-1faffb9ebcf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:15 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:15Z|00209|binding|INFO|Setting lport fa974c5a-64bf-4671-a776-ad34f0f9de1f ovn-installed in OVS
Dec 05 10:10:15 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:15Z|00210|binding|INFO|Setting lport fa974c5a-64bf-4671-a776-ad34f0f9de1f up in Southbound
Dec 05 10:10:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:15.579 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:15.601 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:15.649 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:15.680 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:15 np0005546420.localdomain podman[316272]: 2025-12-05 10:10:15.797314196 +0000 UTC m=+0.054495014 container kill a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:10:15 np0005546420.localdomain dnsmasq[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/addn_hosts - 1 addresses
Dec 05 10:10:15 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/host
Dec 05 10:10:15 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/opts
Dec 05 10:10:15 np0005546420.localdomain systemd[1]: tmp-crun.x3Q7io.mount: Deactivated successfully.
Dec 05 10:10:15 np0005546420.localdomain ceph-mon[298353]: osdmap e143: 6 total, 6 up, 6 in
Dec 05 10:10:15 np0005546420.localdomain ceph-mon[298353]: pgmap v310: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 8.5 KiB/s wr, 4 op/s
Dec 05 10:10:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:15.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:15.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:10:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:15.874 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:10:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:16.044 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:16.069 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:10:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:16.235 262769 INFO neutron.agent.dhcp.agent [None req-563766d4-5d6e-4543-9b7c-c89cfe694e75 - - - - - -] DHCP configuration for ports {'1968dfe6-33e8-423d-bd50-c206f4f02299'} is completed
Dec 05 10:10:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:16.510 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:16.512 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 05 10:10:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:16.516 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:16.517 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[55d2d51a-de7d-4cd9-8626-7da7e252c480]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:16 np0005546420.localdomain podman[316335]: 2025-12-05 10:10:16.628894206 +0000 UTC m=+0.094960594 container create eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8adbc708-ce3a-4885-a714-2e1429dac54a, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:10:16 np0005546420.localdomain podman[316335]: 2025-12-05 10:10:16.585769922 +0000 UTC m=+0.051836330 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:10:16 np0005546420.localdomain systemd[1]: Started libpod-conmon-eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0.scope.
Dec 05 10:10:16 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:10:16 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8589040cd66bb18a73c0b82d0bfb1a82d5ebf6c2d57b5f8f4dfcc3db952819/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:16.717 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:16 np0005546420.localdomain podman[316335]: 2025-12-05 10:10:16.725289803 +0000 UTC m=+0.191356191 container init eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8adbc708-ce3a-4885-a714-2e1429dac54a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 05 10:10:16 np0005546420.localdomain podman[316335]: 2025-12-05 10:10:16.738797367 +0000 UTC m=+0.204863755 container start eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8adbc708-ce3a-4885-a714-2e1429dac54a, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 05 10:10:16 np0005546420.localdomain dnsmasq[316352]: started, version 2.85 cachesize 150
Dec 05 10:10:16 np0005546420.localdomain dnsmasq[316352]: DNS service limited to local subnets
Dec 05 10:10:16 np0005546420.localdomain dnsmasq[316352]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:10:16 np0005546420.localdomain dnsmasq[316352]: warning: no upstream servers configured
Dec 05 10:10:16 np0005546420.localdomain dnsmasq-dhcp[316352]: DHCPv6, static leases only on 2001:db8:0:ffff::, lease time 1d
Dec 05 10:10:16 np0005546420.localdomain dnsmasq[316352]: read /var/lib/neutron/dhcp/8adbc708-ce3a-4885-a714-2e1429dac54a/addn_hosts - 0 addresses
Dec 05 10:10:16 np0005546420.localdomain dnsmasq-dhcp[316352]: read /var/lib/neutron/dhcp/8adbc708-ce3a-4885-a714-2e1429dac54a/host
Dec 05 10:10:16 np0005546420.localdomain dnsmasq-dhcp[316352]: read /var/lib/neutron/dhcp/8adbc708-ce3a-4885-a714-2e1429dac54a/opts
Dec 05 10:10:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/753455293' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/753455293' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:10:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:16.910 262769 INFO neutron.agent.dhcp.agent [None req-6cec20cb-d491-4b47-a6b1-52b6d7a9c55c - - - - - -] DHCP configuration for ports {'05cd1907-0431-4118-a38e-f87e94a50ea9'} is completed
Dec 05 10:10:16 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:16.985 2 INFO neutron.agent.securitygroups_rpc [None req-8748f31b-df2d-474f-9884-ef12f2b800a6 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:10:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:17.065 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:10:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:10:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:10:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156739 "" "Go-http-client/1.1"
Dec 05 10:10:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:10:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19198 "" "Go-http-client/1.1"
Dec 05 10:10:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:10:17 np0005546420.localdomain podman[316353]: 2025-12-05 10:10:17.509757417 +0000 UTC m=+0.086664509 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 10:10:17 np0005546420.localdomain podman[316353]: 2025-12-05 10:10:17.546143253 +0000 UTC m=+0.123050325 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:17 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:10:17 np0005546420.localdomain systemd[1]: tmp-crun.hqWPY9.mount: Deactivated successfully.
Dec 05 10:10:17 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:17.781 2 INFO neutron.agent.securitygroups_rpc [None req-b54cea94-c31f-4e99-a73a-744908bef462 9ad212ec28c94cd483f6945e5dc23284 b0ebdf5f32df4c2586aaaef64b02a0b5 - - default default] Security group member updated ['44ae700d-5ab8-470d-aa76-cd2d76fd251c']
Dec 05 10:10:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:17.829 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:10:17Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f1bb20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f156d0>], id=9aac186c-00b0-4722-9f7b-cdec7b628816, ip_allocation=immediate, mac_address=fa:16:3e:d1:00:ce, name=tempest-FloatingIPAdminTestJSON-203309483, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:59Z, description=, dns_domain=, id=bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-421118809, port_security_enabled=True, project_id=b0ebdf5f32df4c2586aaaef64b02a0b5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50009, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1942, status=ACTIVE, subnets=['34022c23-3643-472e-a43f-8a06fcedcf55'], tags=[], tenant_id=b0ebdf5f32df4c2586aaaef64b02a0b5, updated_at=2025-12-05T10:10:02Z, vlan_transparent=None, network_id=bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, port_security_enabled=True, project_id=b0ebdf5f32df4c2586aaaef64b02a0b5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['44ae700d-5ab8-470d-aa76-cd2d76fd251c'], standard_attr_id=2006, status=DOWN, tags=[], tenant_id=b0ebdf5f32df4c2586aaaef64b02a0b5, updated_at=2025-12-05T10:10:17Z on network bdc81de5-fea1-4b62-bab3-e90dc44a5fd9
Dec 05 10:10:17 np0005546420.localdomain ceph-mon[298353]: pgmap v311: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 30 op/s
Dec 05 10:10:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:10:17 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:17.935 2 INFO neutron.agent.securitygroups_rpc [None req-5e3bb1c5-2806-4588-aff7-12e0b9f5ff87 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:10:18 np0005546420.localdomain podman[316389]: 2025-12-05 10:10:18.075215894 +0000 UTC m=+0.069850754 container kill a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:10:18 np0005546420.localdomain dnsmasq[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/addn_hosts - 2 addresses
Dec 05 10:10:18 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/host
Dec 05 10:10:18 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/opts
Dec 05 10:10:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:18.527 262769 INFO neutron.agent.dhcp.agent [None req-45b8f267-aa9b-4948-b248-c7305b8855a9 - - - - - -] DHCP configuration for ports {'9aac186c-00b0-4722-9f7b-cdec7b628816'} is completed
Dec 05 10:10:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:18.677 262769 INFO neutron.agent.linux.ip_lib [None req-4575bad5-6f2e-4e00-9b04-d1d5a0daac19 - - - - - -] Device tap204e5718-4e cannot be used as it has no MAC address
Dec 05 10:10:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:18.701 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:18 np0005546420.localdomain kernel: device tap204e5718-4e entered promiscuous mode
Dec 05 10:10:18 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929418.7098] manager: (tap204e5718-4e): new Generic device (/org/freedesktop/NetworkManager/Devices/39)
Dec 05 10:10:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:18.709 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:18 np0005546420.localdomain systemd-udevd[316421]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:10:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:18.720 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:18 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap204e5718-4e: No such device
Dec 05 10:10:18 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap204e5718-4e: No such device
Dec 05 10:10:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:18.750 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:18 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap204e5718-4e: No such device
Dec 05 10:10:18 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap204e5718-4e: No such device
Dec 05 10:10:18 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap204e5718-4e: No such device
Dec 05 10:10:18 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap204e5718-4e: No such device
Dec 05 10:10:18 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap204e5718-4e: No such device
Dec 05 10:10:18 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap204e5718-4e: No such device
Dec 05 10:10:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:18.784 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:18.816 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:10:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:10:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:10:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:10:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:10:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:10:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:10:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:10:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:10:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:10:18 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8943cd11-7849-44cf-bf5f-d6cc1442fa22", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:10:18 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8943cd11-7849-44cf-bf5f-d6cc1442fa22", "format": "json"}]: dispatch
Dec 05 10:10:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2413831405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:10:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2887821790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2887821790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:10:19 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:19.214 2 INFO neutron.agent.securitygroups_rpc [None req-e87faffe-ea7d-4797-b288-333e1167da33 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:10:19 np0005546420.localdomain podman[316493]: 2025-12-05 10:10:19.624716207 +0000 UTC m=+0.087223197 container create b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8cfb9f43-5e24-4629-b24c-11601df0ba07, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:10:19 np0005546420.localdomain systemd[1]: Started libpod-conmon-b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2.scope.
Dec 05 10:10:19 np0005546420.localdomain podman[316493]: 2025-12-05 10:10:19.582614975 +0000 UTC m=+0.045121985 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:10:19 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:10:19 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64103d0071b36d488727395552320b3ed0361e3d74ee430759abe1c425a2fbcc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:19 np0005546420.localdomain podman[316493]: 2025-12-05 10:10:19.696459568 +0000 UTC m=+0.158966548 container init b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8cfb9f43-5e24-4629-b24c-11601df0ba07, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:10:19 np0005546420.localdomain podman[316493]: 2025-12-05 10:10:19.705262938 +0000 UTC m=+0.167769928 container start b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8cfb9f43-5e24-4629-b24c-11601df0ba07, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 10:10:19 np0005546420.localdomain dnsmasq[316511]: started, version 2.85 cachesize 150
Dec 05 10:10:19 np0005546420.localdomain dnsmasq[316511]: DNS service limited to local subnets
Dec 05 10:10:19 np0005546420.localdomain dnsmasq[316511]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:10:19 np0005546420.localdomain dnsmasq[316511]: warning: no upstream servers configured
Dec 05 10:10:19 np0005546420.localdomain dnsmasq-dhcp[316511]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:10:19 np0005546420.localdomain dnsmasq[316511]: read /var/lib/neutron/dhcp/8cfb9f43-5e24-4629-b24c-11601df0ba07/addn_hosts - 0 addresses
Dec 05 10:10:19 np0005546420.localdomain dnsmasq-dhcp[316511]: read /var/lib/neutron/dhcp/8cfb9f43-5e24-4629-b24c-11601df0ba07/host
Dec 05 10:10:19 np0005546420.localdomain dnsmasq-dhcp[316511]: read /var/lib/neutron/dhcp/8cfb9f43-5e24-4629-b24c-11601df0ba07/opts
Dec 05 10:10:19 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:19.853 262769 INFO neutron.agent.dhcp.agent [None req-0c909e13-7fcb-49f5-815f-0b144d5ee5a3 - - - - - -] DHCP configuration for ports {'20ed8d83-f54e-4209-a3dd-860ee6446910'} is completed
Dec 05 10:10:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3886367906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:10:19 np0005546420.localdomain ceph-mon[298353]: pgmap v312: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 5.2 KiB/s wr, 26 op/s
Dec 05 10:10:20 np0005546420.localdomain dnsmasq[316511]: exiting on receipt of SIGTERM
Dec 05 10:10:20 np0005546420.localdomain podman[316527]: 2025-12-05 10:10:20.029568017 +0000 UTC m=+0.064085367 container kill b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8cfb9f43-5e24-4629-b24c-11601df0ba07, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 10:10:20 np0005546420.localdomain systemd[1]: libpod-b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2.scope: Deactivated successfully.
Dec 05 10:10:20 np0005546420.localdomain podman[316542]: 2025-12-05 10:10:20.101351768 +0000 UTC m=+0.056057480 container died b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8cfb9f43-5e24-4629-b24c-11601df0ba07, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 10:10:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:20 np0005546420.localdomain podman[316542]: 2025-12-05 10:10:20.141211091 +0000 UTC m=+0.095916763 container cleanup b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8cfb9f43-5e24-4629-b24c-11601df0ba07, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:10:20 np0005546420.localdomain systemd[1]: libpod-conmon-b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2.scope: Deactivated successfully.
Dec 05 10:10:20 np0005546420.localdomain podman[316543]: 2025-12-05 10:10:20.179189776 +0000 UTC m=+0.127493751 container remove b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8cfb9f43-5e24-4629-b24c-11601df0ba07, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:10:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:20.193 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:20 np0005546420.localdomain kernel: device tap204e5718-4e left promiscuous mode
Dec 05 10:10:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:20.207 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:20.232 262769 INFO neutron.agent.dhcp.agent [None req-023ad2d3-b705-4e21-b89c-52da552fa6ad - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:20.233 262769 INFO neutron.agent.dhcp.agent [None req-023ad2d3-b705-4e21-b89c-52da552fa6ad - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:20 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:20.571 2 INFO neutron.agent.securitygroups_rpc [None req-494eb1d6-141d-4851-b653-0372decd0bc2 71e3fab65ffb4bc788d27178ac1efd8e 285f12c8420045f3a7f55b60a915ce1e - - default default] Security group member updated ['a333efca-ba92-4e4d-867f-f07512d20795']
Dec 05 10:10:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-64103d0071b36d488727395552320b3ed0361e3d74ee430759abe1c425a2fbcc-merged.mount: Deactivated successfully.
Dec 05 10:10:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b10385d5dbb48e1f070d838d144e5ba453d22d130c5e7d6f7c1a2c4793fe7ee2-userdata-shm.mount: Deactivated successfully.
Dec 05 10:10:20 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d8cfb9f43\x2d5e24\x2d4629\x2db24c\x2d11601df0ba07.mount: Deactivated successfully.
Dec 05 10:10:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e144 e144: 6 total, 6 up, 6 in
Dec 05 10:10:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:20.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:20.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:21.096 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:10:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3171003982' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:10:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3171003982' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:10:21 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:21.373 2 INFO neutron.agent.securitygroups_rpc [None req-494eb1d6-141d-4851-b653-0372decd0bc2 71e3fab65ffb4bc788d27178ac1efd8e 285f12c8420045f3a7f55b60a915ce1e - - default default] Security group member updated ['a333efca-ba92-4e4d-867f-f07512d20795']
Dec 05 10:10:21 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:21.682 2 INFO neutron.agent.securitygroups_rpc [None req-c2056dc6-19e5-4308-8fd7-399bc3f94206 9ad212ec28c94cd483f6945e5dc23284 b0ebdf5f32df4c2586aaaef64b02a0b5 - - default default] Security group member updated ['44ae700d-5ab8-470d-aa76-cd2d76fd251c']
Dec 05 10:10:21 np0005546420.localdomain ceph-mon[298353]: osdmap e144: 6 total, 6 up, 6 in
Dec 05 10:10:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8943cd11-7849-44cf-bf5f-d6cc1442fa22", "format": "json"}]: dispatch
Dec 05 10:10:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8943cd11-7849-44cf-bf5f-d6cc1442fa22", "force": true, "format": "json"}]: dispatch
Dec 05 10:10:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3171003982' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3171003982' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:10:21 np0005546420.localdomain ceph-mon[298353]: pgmap v314: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 14 KiB/s wr, 86 op/s
Dec 05 10:10:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:21.720 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:21.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:21.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:21 np0005546420.localdomain dnsmasq[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/addn_hosts - 1 addresses
Dec 05 10:10:21 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/host
Dec 05 10:10:21 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/opts
Dec 05 10:10:21 np0005546420.localdomain podman[316590]: 2025-12-05 10:10:21.967487385 +0000 UTC m=+0.063149438 container kill a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 10:10:22 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:22.310 2 INFO neutron.agent.securitygroups_rpc [None req-ce37c36a-8998-41ea-86f9-f18c08b96615 71e3fab65ffb4bc788d27178ac1efd8e 285f12c8420045f3a7f55b60a915ce1e - - default default] Security group member updated ['a333efca-ba92-4e4d-867f-f07512d20795']
Dec 05 10:10:22 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:22.529 2 INFO neutron.agent.securitygroups_rpc [None req-63c416d3-65f7-42d7-b358-fdac5e40015f 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:10:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:22.719 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:22.721 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 05 10:10:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:22.724 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:22 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:22.724 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[6dddc112-2e07-4bf6-a78d-22a40136ab9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:22 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:22.786 2 INFO neutron.agent.securitygroups_rpc [None req-abc2a8d3-3f72-47ac-804d-613bd305ad59 71e3fab65ffb4bc788d27178ac1efd8e 285f12c8420045f3a7f55b60a915ce1e - - default default] Security group member updated ['a333efca-ba92-4e4d-867f-f07512d20795']
Dec 05 10:10:23 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:23.488 262769 INFO neutron.agent.linux.ip_lib [None req-575d3843-2c2f-46f2-8acb-fc3ed7d476fa - - - - - -] Device tap2a6ae835-ab cannot be used as it has no MAC address
Dec 05 10:10:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:23.537 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:23 np0005546420.localdomain dnsmasq[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/addn_hosts - 0 addresses
Dec 05 10:10:23 np0005546420.localdomain kernel: device tap2a6ae835-ab entered promiscuous mode
Dec 05 10:10:23 np0005546420.localdomain podman[316632]: 2025-12-05 10:10:23.544345997 +0000 UTC m=+0.077970713 container kill a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:23 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/host
Dec 05 10:10:23 np0005546420.localdomain dnsmasq-dhcp[316129]: read /var/lib/neutron/dhcp/bdc81de5-fea1-4b62-bab3-e90dc44a5fd9/opts
Dec 05 10:10:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:23.548 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:23 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:23Z|00211|binding|INFO|Claiming lport 2a6ae835-ab9b-436d-8db4-6c60970021c3 for this chassis.
Dec 05 10:10:23 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:23Z|00212|binding|INFO|2a6ae835-ab9b-436d-8db4-6c60970021c3: Claiming unknown
Dec 05 10:10:23 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929423.5492] manager: (tap2a6ae835-ab): new Generic device (/org/freedesktop/NetworkManager/Devices/40)
Dec 05 10:10:23 np0005546420.localdomain systemd-udevd[316651]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:10:23 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap2a6ae835-ab: No such device
Dec 05 10:10:23 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap2a6ae835-ab: No such device
Dec 05 10:10:23 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:23Z|00213|binding|INFO|Setting lport 2a6ae835-ab9b-436d-8db4-6c60970021c3 ovn-installed in OVS
Dec 05 10:10:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:23.602 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:23 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap2a6ae835-ab: No such device
Dec 05 10:10:23 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap2a6ae835-ab: No such device
Dec 05 10:10:23 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap2a6ae835-ab: No such device
Dec 05 10:10:23 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:23Z|00214|binding|INFO|Setting lport 2a6ae835-ab9b-436d-8db4-6c60970021c3 up in Southbound
Dec 05 10:10:23 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap2a6ae835-ab: No such device
Dec 05 10:10:23 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:23.619 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-7a30a14f-dcb4-4313-aa6d-f0a221710e8c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a30a14f-dcb4-4313-aa6d-f0a221710e8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9cf04b3-2249-4a53-a450-caf560b4d303, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=2a6ae835-ab9b-436d-8db4-6c60970021c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:23 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:23.620 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 2a6ae835-ab9b-436d-8db4-6c60970021c3 in datapath 7a30a14f-dcb4-4313-aa6d-f0a221710e8c bound to our chassis
Dec 05 10:10:23 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:23.620 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7a30a14f-dcb4-4313-aa6d-f0a221710e8c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:10:23 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:23.621 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa1d9f5-879b-4949-90b3-b6b6d4d2cb66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:23 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap2a6ae835-ab: No such device
Dec 05 10:10:23 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap2a6ae835-ab: No such device
Dec 05 10:10:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:23.656 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:23.686 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:24 np0005546420.localdomain ceph-mon[298353]: pgmap v315: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 13 KiB/s wr, 86 op/s
Dec 05 10:10:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:24.394 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:24.395 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:10:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:24.396 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:24 np0005546420.localdomain podman[316728]: 
Dec 05 10:10:24 np0005546420.localdomain podman[316728]: 2025-12-05 10:10:24.595294777 +0000 UTC m=+0.076029084 container create 078ac6be22b32a8ac6c1e9fc24bbbc19ef3dfafd08f76e950be55040e366079c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7a30a14f-dcb4-4313-aa6d-f0a221710e8c, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:24 np0005546420.localdomain systemd[1]: Started libpod-conmon-078ac6be22b32a8ac6c1e9fc24bbbc19ef3dfafd08f76e950be55040e366079c.scope.
Dec 05 10:10:24 np0005546420.localdomain podman[316728]: 2025-12-05 10:10:24.548227424 +0000 UTC m=+0.028961791 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:10:24 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:10:24 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378bd9e40d548114375c08cc6fe611d8597fb27cce18e880726ab6ff5b39f0fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:24 np0005546420.localdomain podman[316728]: 2025-12-05 10:10:24.669296137 +0000 UTC m=+0.150030444 container init 078ac6be22b32a8ac6c1e9fc24bbbc19ef3dfafd08f76e950be55040e366079c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7a30a14f-dcb4-4313-aa6d-f0a221710e8c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:10:24 np0005546420.localdomain podman[316728]: 2025-12-05 10:10:24.678401356 +0000 UTC m=+0.159135673 container start 078ac6be22b32a8ac6c1e9fc24bbbc19ef3dfafd08f76e950be55040e366079c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7a30a14f-dcb4-4313-aa6d-f0a221710e8c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 05 10:10:24 np0005546420.localdomain dnsmasq[316747]: started, version 2.85 cachesize 150
Dec 05 10:10:24 np0005546420.localdomain dnsmasq[316747]: DNS service limited to local subnets
Dec 05 10:10:24 np0005546420.localdomain dnsmasq[316747]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:10:24 np0005546420.localdomain dnsmasq[316747]: warning: no upstream servers configured
Dec 05 10:10:24 np0005546420.localdomain dnsmasq-dhcp[316747]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:10:24 np0005546420.localdomain dnsmasq[316747]: read /var/lib/neutron/dhcp/7a30a14f-dcb4-4313-aa6d-f0a221710e8c/addn_hosts - 0 addresses
Dec 05 10:10:24 np0005546420.localdomain dnsmasq-dhcp[316747]: read /var/lib/neutron/dhcp/7a30a14f-dcb4-4313-aa6d-f0a221710e8c/host
Dec 05 10:10:24 np0005546420.localdomain dnsmasq-dhcp[316747]: read /var/lib/neutron/dhcp/7a30a14f-dcb4-4313-aa6d-f0a221710e8c/opts
Dec 05 10:10:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:24.766 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:24 np0005546420.localdomain kernel: device taped52ce20-56 left promiscuous mode
Dec 05 10:10:24 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:24Z|00215|binding|INFO|Releasing lport ed52ce20-56e8-46ee-8853-eb3d75d6d734 from this chassis (sb_readonly=0)
Dec 05 10:10:24 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:24Z|00216|binding|INFO|Setting lport ed52ce20-56e8-46ee-8853-eb3d75d6d734 down in Southbound
Dec 05 10:10:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:24.789 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0ebdf5f32df4c2586aaaef64b02a0b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfdaf546-7b94-482d-a2ec-8ccf9bd890db, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=ed52ce20-56e8-46ee-8853-eb3d75d6d734) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:24.792 159503 INFO neutron.agent.ovn.metadata.agent [-] Port ed52ce20-56e8-46ee-8853-eb3d75d6d734 in datapath bdc81de5-fea1-4b62-bab3-e90dc44a5fd9 unbound from our chassis
Dec 05 10:10:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:24.794 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:24.795 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b5dc1b-3876-4dad-a7fb-e83f60f79d2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:24.797 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:24 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:24.815 262769 INFO neutron.agent.dhcp.agent [None req-411b1930-5d2f-4838-94b9-f64066346b5d - - - - - -] DHCP configuration for ports {'e094f12d-b144-4649-a08e-7e485f18818f'} is completed
Dec 05 10:10:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:24.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:10:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:24.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:10:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:24.890 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:10:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:24.890 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:10:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:24.890 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:10:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:24.890 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:10:25 np0005546420.localdomain dnsmasq[316747]: exiting on receipt of SIGTERM
Dec 05 10:10:25 np0005546420.localdomain podman[316768]: 2025-12-05 10:10:25.065306995 +0000 UTC m=+0.060251109 container kill 078ac6be22b32a8ac6c1e9fc24bbbc19ef3dfafd08f76e950be55040e366079c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7a30a14f-dcb4-4313-aa6d-f0a221710e8c, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:10:25 np0005546420.localdomain systemd[1]: libpod-078ac6be22b32a8ac6c1e9fc24bbbc19ef3dfafd08f76e950be55040e366079c.scope: Deactivated successfully.
Dec 05 10:10:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:25 np0005546420.localdomain podman[316801]: 2025-12-05 10:10:25.13944239 +0000 UTC m=+0.053863034 container died 078ac6be22b32a8ac6c1e9fc24bbbc19ef3dfafd08f76e950be55040e366079c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7a30a14f-dcb4-4313-aa6d-f0a221710e8c, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:10:25 np0005546420.localdomain podman[316801]: 2025-12-05 10:10:25.239997525 +0000 UTC m=+0.154418109 container remove 078ac6be22b32a8ac6c1e9fc24bbbc19ef3dfafd08f76e950be55040e366079c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7a30a14f-dcb4-4313-aa6d-f0a221710e8c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:10:25 np0005546420.localdomain systemd[1]: libpod-conmon-078ac6be22b32a8ac6c1e9fc24bbbc19ef3dfafd08f76e950be55040e366079c.scope: Deactivated successfully.
Dec 05 10:10:25 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:25Z|00217|binding|INFO|Releasing lport 2a6ae835-ab9b-436d-8db4-6c60970021c3 from this chassis (sb_readonly=0)
Dec 05 10:10:25 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:25Z|00218|binding|INFO|Setting lport 2a6ae835-ab9b-436d-8db4-6c60970021c3 down in Southbound
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.257 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:25 np0005546420.localdomain kernel: device tap2a6ae835-ab left promiscuous mode
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.278 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:25.279 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-7a30a14f-dcb4-4313-aa6d-f0a221710e8c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a30a14f-dcb4-4313-aa6d-f0a221710e8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9cf04b3-2249-4a53-a450-caf560b4d303, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=2a6ae835-ab9b-436d-8db4-6c60970021c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:25.282 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 2a6ae835-ab9b-436d-8db4-6c60970021c3 in datapath 7a30a14f-dcb4-4313-aa6d-f0a221710e8c unbound from our chassis
Dec 05 10:10:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:25.283 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7a30a14f-dcb4-4313-aa6d-f0a221710e8c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:10:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:25.284 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[14c28360-1375-480e-b5e5-f7e43afc73d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:10:25 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/443525387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.340 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:10:25 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "eda6e56d-69d6-4ce5-9979-46195e7612b1", "format": "json"}]: dispatch
Dec 05 10:10:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3180875306' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:10:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/443525387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:10:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:25.516 262769 INFO neutron.agent.dhcp.agent [None req-1f8b9a6e-6239-4705-9648-3051038064a2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:25.517 262769 INFO neutron.agent.dhcp.agent [None req-1f8b9a6e-6239-4705-9648-3051038064a2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.564 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.565 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11636MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.566 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.566 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:10:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-378bd9e40d548114375c08cc6fe611d8597fb27cce18e880726ab6ff5b39f0fc-merged.mount: Deactivated successfully.
Dec 05 10:10:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-078ac6be22b32a8ac6c1e9fc24bbbc19ef3dfafd08f76e950be55040e366079c-userdata-shm.mount: Deactivated successfully.
Dec 05 10:10:25 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d7a30a14f\x2ddcb4\x2d4313\x2daa6d\x2df0a221710e8c.mount: Deactivated successfully.
Dec 05 10:10:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.637 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.638 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:10:25 np0005546420.localdomain podman[316828]: 2025-12-05 10:10:25.722210147 +0000 UTC m=+0.086759482 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:10:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:10:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:25.730 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:25 np0005546420.localdomain podman[316828]: 2025-12-05 10:10:25.73858215 +0000 UTC m=+0.103131445 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.742 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:10:25 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.769 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.770 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.789 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:10:25 np0005546420.localdomain podman[316851]: 2025-12-05 10:10:25.818182791 +0000 UTC m=+0.080818500 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 05 10:10:25 np0005546420.localdomain podman[316851]: 2025-12-05 10:10:25.853451513 +0000 UTC m=+0.116087212 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 05 10:10:25 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.867 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:10:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:25.889 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:10:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:26.133 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:10:26 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/938139066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:10:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:26.340 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:10:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:26.346 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:10:26 np0005546420.localdomain ceph-mon[298353]: pgmap v316: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 11 KiB/s wr, 73 op/s
Dec 05 10:10:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/938139066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:10:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:26.369 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:10:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:26.371 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:10:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:26.371 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:10:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:26.570 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:26.721 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:26 np0005546420.localdomain dnsmasq[316129]: exiting on receipt of SIGTERM
Dec 05 10:10:26 np0005546420.localdomain podman[316907]: 2025-12-05 10:10:26.849748946 +0000 UTC m=+0.060446175 container kill a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:10:26 np0005546420.localdomain systemd[1]: libpod-a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05.scope: Deactivated successfully.
Dec 05 10:10:26 np0005546420.localdomain podman[316920]: 2025-12-05 10:10:26.927269604 +0000 UTC m=+0.062895411 container died a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:10:26 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05-userdata-shm.mount: Deactivated successfully.
Dec 05 10:10:26 np0005546420.localdomain podman[316920]: 2025-12-05 10:10:26.966286811 +0000 UTC m=+0.101912568 container cleanup a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:10:26 np0005546420.localdomain systemd[1]: libpod-conmon-a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05.scope: Deactivated successfully.
Dec 05 10:10:27 np0005546420.localdomain podman[316922]: 2025-12-05 10:10:27.014003215 +0000 UTC m=+0.137009294 container remove a2b427c171461f6b4e80072cb79fbfaf4acf8fec58035aee65878dfba7e4da05 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bdc81de5-fea1-4b62-bab3-e90dc44a5fd9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:27.016 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:27.018 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 05 10:10:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:27.020 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:27.021 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a903212f-7648-4c99-9864-f74a49e73e4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:27.045 262769 INFO neutron.agent.dhcp.agent [None req-6f2f635b-1652-4c96-920c-8d43a8e95071 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:27.320 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:10:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/260112736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:10:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1353221168' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1353221168' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:10:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:27.397 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:10:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:10:27 np0005546420.localdomain podman[316951]: 2025-12-05 10:10:27.753483389 +0000 UTC m=+0.083676648 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3)
Dec 05 10:10:27 np0005546420.localdomain podman[316951]: 2025-12-05 10:10:27.771321476 +0000 UTC m=+0.101514765 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:27 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:10:27 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8e334a334371c8edf4d4acf9732a6ef3c75c27d316f7167a05d2a9f5af4a6c-merged.mount: Deactivated successfully.
Dec 05 10:10:27 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2dbdc81de5\x2dfea1\x2d4b62\x2dbab3\x2de90dc44a5fd9.mount: Deactivated successfully.
Dec 05 10:10:27 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:27.877 2 INFO neutron.agent.securitygroups_rpc [None req-16d6d17d-d3bb-489f-9b72-a5d5257b3158 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:10:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:27.936 262769 INFO neutron.agent.linux.ip_lib [None req-5f168491-ee39-4e24-83b4-e77a6fc3377b - - - - - -] Device tapf76514dd-d7 cannot be used as it has no MAC address
Dec 05 10:10:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:27.959 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:27 np0005546420.localdomain kernel: device tapf76514dd-d7 entered promiscuous mode
Dec 05 10:10:27 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929427.9664] manager: (tapf76514dd-d7): new Generic device (/org/freedesktop/NetworkManager/Devices/41)
Dec 05 10:10:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:27.968 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:27 np0005546420.localdomain systemd-udevd[316980]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:10:27 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:27Z|00219|binding|INFO|Claiming lport f76514dd-d742-4eac-9b43-a1be050fd678 for this chassis.
Dec 05 10:10:27 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:27Z|00220|binding|INFO|f76514dd-d742-4eac-9b43-a1be050fd678: Claiming unknown
Dec 05 10:10:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:27.982 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73051091-e669-449c-b6ce-0fa3950b46d0, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=f76514dd-d742-4eac-9b43-a1be050fd678) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:27.984 159503 INFO neutron.agent.ovn.metadata.agent [-] Port f76514dd-d742-4eac-9b43-a1be050fd678 in datapath 0f11084e-99c9-47ba-aac5-b3f38c139d59 bound to our chassis
Dec 05 10:10:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:27.986 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0f11084e-99c9-47ba-aac5-b3f38c139d59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:10:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:27.988 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[1f37ead7-a1d7-4dd3-be7c-c3b7957fb97d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:27 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf76514dd-d7: No such device
Dec 05 10:10:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:27.999 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf76514dd-d7: No such device
Dec 05 10:10:28 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:28Z|00221|binding|INFO|Setting lport f76514dd-d742-4eac-9b43-a1be050fd678 ovn-installed in OVS
Dec 05 10:10:28 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:28Z|00222|binding|INFO|Setting lport f76514dd-d742-4eac-9b43-a1be050fd678 up in Southbound
Dec 05 10:10:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf76514dd-d7: No such device
Dec 05 10:10:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:28.006 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf76514dd-d7: No such device
Dec 05 10:10:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf76514dd-d7: No such device
Dec 05 10:10:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf76514dd-d7: No such device
Dec 05 10:10:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf76514dd-d7: No such device
Dec 05 10:10:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapf76514dd-d7: No such device
Dec 05 10:10:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:28.030 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:28.054 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:28 np0005546420.localdomain ceph-mon[298353]: pgmap v317: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 9.5 KiB/s wr, 53 op/s
Dec 05 10:10:28 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:28.937 2 INFO neutron.agent.securitygroups_rpc [None req-0c16f2e5-5e31-4773-aa99-a445d9488a1a 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:10:28 np0005546420.localdomain podman[317051]: 
Dec 05 10:10:28 np0005546420.localdomain podman[317051]: 2025-12-05 10:10:28.949506049 +0000 UTC m=+0.076814148 container create e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:28 np0005546420.localdomain systemd[1]: Started libpod-conmon-e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b.scope.
Dec 05 10:10:29 np0005546420.localdomain podman[317051]: 2025-12-05 10:10:28.9055362 +0000 UTC m=+0.032844319 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:10:29 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:10:29 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7364f89ae7926caa5ef5a36889a30f378d3da08577750537068ff2038ade099/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:29 np0005546420.localdomain podman[317051]: 2025-12-05 10:10:29.021328632 +0000 UTC m=+0.148636721 container init e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:10:29 np0005546420.localdomain podman[317051]: 2025-12-05 10:10:29.030890426 +0000 UTC m=+0.158198515 container start e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:10:29 np0005546420.localdomain dnsmasq[317069]: started, version 2.85 cachesize 150
Dec 05 10:10:29 np0005546420.localdomain dnsmasq[317069]: DNS service limited to local subnets
Dec 05 10:10:29 np0005546420.localdomain dnsmasq[317069]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:10:29 np0005546420.localdomain dnsmasq[317069]: warning: no upstream servers configured
Dec 05 10:10:29 np0005546420.localdomain dnsmasq-dhcp[317069]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:10:29 np0005546420.localdomain dnsmasq[317069]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 0 addresses
Dec 05 10:10:29 np0005546420.localdomain dnsmasq-dhcp[317069]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:10:29 np0005546420.localdomain dnsmasq-dhcp[317069]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:10:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "eda6e56d-69d6-4ce5-9979-46195e7612b1_65643c99-e4ca-4ae3-b879-234a19b593b8", "force": true, "format": "json"}]: dispatch
Dec 05 10:10:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "eda6e56d-69d6-4ce5-9979-46195e7612b1", "force": true, "format": "json"}]: dispatch
Dec 05 10:10:29 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:29.984 262769 INFO neutron.agent.dhcp.agent [None req-248438fc-3861-4605-9754-d7dd88c406d5 - - - - - -] DHCP configuration for ports {'02057f88-1a3f-4ab4-a0ca-05b7fb70a51d', '3e74704f-5b87-479b-a0f2-9cc31811fac6'} is completed
Dec 05 10:10:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:30 np0005546420.localdomain ceph-mon[298353]: pgmap v318: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 9.5 KiB/s wr, 53 op/s
Dec 05 10:10:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:30.661 262769 INFO neutron.agent.linux.ip_lib [None req-162eac94-13d6-45c3-9107-fb535f714878 - - - - - -] Device tap5dab9d54-e6 cannot be used as it has no MAC address
Dec 05 10:10:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:30.729 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:30 np0005546420.localdomain kernel: device tap5dab9d54-e6 entered promiscuous mode
Dec 05 10:10:30 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929430.7392] manager: (tap5dab9d54-e6): new Generic device (/org/freedesktop/NetworkManager/Devices/42)
Dec 05 10:10:30 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:30Z|00223|binding|INFO|Claiming lport 5dab9d54-e6ae-4a75-b908-163b44a64d04 for this chassis.
Dec 05 10:10:30 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:30Z|00224|binding|INFO|5dab9d54-e6ae-4a75-b908-163b44a64d04: Claiming unknown
Dec 05 10:10:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:30.740 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:30 np0005546420.localdomain systemd-udevd[316982]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:10:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:30.750 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebc61650-8998-47fd-a8ee-981cc6780af7, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5dab9d54-e6ae-4a75-b908-163b44a64d04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:30.752 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5dab9d54-e6ae-4a75-b908-163b44a64d04 in datapath 0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6 bound to our chassis
Dec 05 10:10:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:30.754 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:10:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:30.755 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[2a271c5b-79ec-4cde-8343-fb3069b32d3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5dab9d54-e6: No such device
Dec 05 10:10:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5dab9d54-e6: No such device
Dec 05 10:10:30 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:30Z|00225|binding|INFO|Setting lport 5dab9d54-e6ae-4a75-b908-163b44a64d04 ovn-installed in OVS
Dec 05 10:10:30 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:30Z|00226|binding|INFO|Setting lport 5dab9d54-e6ae-4a75-b908-163b44a64d04 up in Southbound
Dec 05 10:10:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:30.782 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5dab9d54-e6: No such device
Dec 05 10:10:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5dab9d54-e6: No such device
Dec 05 10:10:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5dab9d54-e6: No such device
Dec 05 10:10:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5dab9d54-e6: No such device
Dec 05 10:10:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5dab9d54-e6: No such device
Dec 05 10:10:30 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5dab9d54-e6: No such device
Dec 05 10:10:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:30.823 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:30.854 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:31 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:31.041 2 INFO neutron.agent.securitygroups_rpc [None req-4ed9fe1a-c7d3-4fb9-9cde-3c0633a09029 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['15591b44-9c5d-4fa1-bdbd-86617728aba4']
Dec 05 10:10:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:31.136 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:31 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:31.182 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:10:30Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f0bf10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f0b820>], id=f4eb0905-f43e-4877-8a1c-12fde1a30c81, ip_allocation=immediate, mac_address=fa:16:3e:f9:6d:d7, name=tempest-PortsTestJSON-1207434544, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:08:57Z, description=, dns_domain=, id=0f11084e-99c9-47ba-aac5-b3f38c139d59, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-1609140525, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20384, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1631, status=ACTIVE, subnets=['baae9433-0dee-4764-8206-ec6942e4205a'], tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:26Z, vlan_transparent=None, network_id=0f11084e-99c9-47ba-aac5-b3f38c139d59, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['15591b44-9c5d-4fa1-bdbd-86617728aba4'], standard_attr_id=2083, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:30Z on network 0f11084e-99c9-47ba-aac5-b3f38c139d59
Dec 05 10:10:31 np0005546420.localdomain dnsmasq[317069]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 1 addresses
Dec 05 10:10:31 np0005546420.localdomain dnsmasq-dhcp[317069]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:10:31 np0005546420.localdomain dnsmasq-dhcp[317069]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:10:31 np0005546420.localdomain podman[317144]: 2025-12-05 10:10:31.45436584 +0000 UTC m=+0.056192165 container kill e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:10:31 np0005546420.localdomain podman[317188]: 
Dec 05 10:10:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:31.724 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:31 np0005546420.localdomain podman[317188]: 2025-12-05 10:10:31.739071073 +0000 UTC m=+0.091386314 container create 3d91e15524faec2002d617f7b8e9f6a12949c36b07052d4de91d84bc29656a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:10:31 np0005546420.localdomain ceph-mon[298353]: pgmap v319: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 15 KiB/s wr, 53 op/s
Dec 05 10:10:31 np0005546420.localdomain systemd[1]: Started libpod-conmon-3d91e15524faec2002d617f7b8e9f6a12949c36b07052d4de91d84bc29656a2a.scope.
Dec 05 10:10:31 np0005546420.localdomain podman[317188]: 2025-12-05 10:10:31.688650867 +0000 UTC m=+0.040966168 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:10:31 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:10:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c9467e263cc6170ea4d059583bed1473617c2940f5606e392a7f0c6dd1904b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:31 np0005546420.localdomain podman[317188]: 2025-12-05 10:10:31.816702005 +0000 UTC m=+0.169017236 container init 3d91e15524faec2002d617f7b8e9f6a12949c36b07052d4de91d84bc29656a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:10:31 np0005546420.localdomain podman[317188]: 2025-12-05 10:10:31.831699506 +0000 UTC m=+0.184014737 container start 3d91e15524faec2002d617f7b8e9f6a12949c36b07052d4de91d84bc29656a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 05 10:10:31 np0005546420.localdomain dnsmasq[317206]: started, version 2.85 cachesize 150
Dec 05 10:10:31 np0005546420.localdomain dnsmasq[317206]: DNS service limited to local subnets
Dec 05 10:10:31 np0005546420.localdomain dnsmasq[317206]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:10:31 np0005546420.localdomain dnsmasq[317206]: warning: no upstream servers configured
Dec 05 10:10:31 np0005546420.localdomain dnsmasq-dhcp[317206]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:10:31 np0005546420.localdomain dnsmasq[317206]: read /var/lib/neutron/dhcp/0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6/addn_hosts - 0 addresses
Dec 05 10:10:31 np0005546420.localdomain dnsmasq-dhcp[317206]: read /var/lib/neutron/dhcp/0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6/host
Dec 05 10:10:31 np0005546420.localdomain dnsmasq-dhcp[317206]: read /var/lib/neutron/dhcp/0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6/opts
Dec 05 10:10:31 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:31.843 262769 INFO neutron.agent.dhcp.agent [None req-bdd23d63-269e-44ae-bd01-c68d074b1818 - - - - - -] DHCP configuration for ports {'f4eb0905-f43e-4877-8a1c-12fde1a30c81'} is completed
Dec 05 10:10:32 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:32.010 262769 INFO neutron.agent.dhcp.agent [None req-c27d99e5-eb44-48cd-b7be-fb113f0de982 - - - - - -] DHCP configuration for ports {'22d3790b-d504-4177-aaa4-ff3c9af238b1'} is completed
Dec 05 10:10:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:32.481 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:32.483 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 05 10:10:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:32.486 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:32.487 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[19ac4c2a-27e1-4244-9905-c2f1c04cae7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:32 np0005546420.localdomain systemd[1]: tmp-crun.QDr5uh.mount: Deactivated successfully.
Dec 05 10:10:34 np0005546420.localdomain ceph-mon[298353]: pgmap v320: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 2.1 KiB/s rd, 7.2 KiB/s wr, 5 op/s
Dec 05 10:10:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:35 np0005546420.localdomain dnsmasq[317069]: exiting on receipt of SIGTERM
Dec 05 10:10:35 np0005546420.localdomain podman[317223]: 2025-12-05 10:10:35.814044372 +0000 UTC m=+0.085230672 container kill e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:10:35 np0005546420.localdomain systemd[1]: libpod-e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b.scope: Deactivated successfully.
Dec 05 10:10:35 np0005546420.localdomain podman[317237]: 2025-12-05 10:10:35.889097269 +0000 UTC m=+0.062262083 container died e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:10:35 np0005546420.localdomain podman[317237]: 2025-12-05 10:10:35.975870998 +0000 UTC m=+0.149035772 container cleanup e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:35 np0005546420.localdomain systemd[1]: libpod-conmon-e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b.scope: Deactivated successfully.
Dec 05 10:10:35 np0005546420.localdomain podman[317244]: 2025-12-05 10:10:35.998496786 +0000 UTC m=+0.152845010 container remove e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:10:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:36.138 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:36 np0005546420.localdomain ceph-mon[298353]: pgmap v321: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 7.0 KiB/s wr, 3 op/s
Dec 05 10:10:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:36.728 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e145 e145: 6 total, 6 up, 6 in
Dec 05 10:10:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-f7364f89ae7926caa5ef5a36889a30f378d3da08577750537068ff2038ade099-merged.mount: Deactivated successfully.
Dec 05 10:10:36 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6cb911c9672c9675e2394aa7a2672b0313fabe7823357a7ef836dc9afd40a5b-userdata-shm.mount: Deactivated successfully.
Dec 05 10:10:37 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "93f98e8e-8d98-4266-b4f1-bf5b6e06e924", "format": "json"}]: dispatch
Dec 05 10:10:37 np0005546420.localdomain ceph-mon[298353]: osdmap e145: 6 total, 6 up, 6 in
Dec 05 10:10:38 np0005546420.localdomain ceph-mon[298353]: pgmap v323: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 6.4 KiB/s wr, 2 op/s
Dec 05 10:10:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:10:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:10:38 np0005546420.localdomain podman[317268]: 2025-12-05 10:10:38.506383928 +0000 UTC m=+0.079503435 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 10:10:38 np0005546420.localdomain podman[317268]: 2025-12-05 10:10:38.520812103 +0000 UTC m=+0.093931600 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec 05 10:10:38 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:10:38 np0005546420.localdomain systemd[1]: tmp-crun.FwoILN.mount: Deactivated successfully.
Dec 05 10:10:38 np0005546420.localdomain podman[317269]: 2025-12-05 10:10:38.612575916 +0000 UTC m=+0.183669090 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:10:38 np0005546420.localdomain podman[317269]: 2025-12-05 10:10:38.622504793 +0000 UTC m=+0.193597947 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:10:38 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:10:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:39.291 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73051091-e669-449c-b6ce-0fa3950b46d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3e74704f-5b87-479b-a0f2-9cc31811fac6) old=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:39.293 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3e74704f-5b87-479b-a0f2-9cc31811fac6 in datapath 0f11084e-99c9-47ba-aac5-b3f38c139d59 updated
Dec 05 10:10:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:39.296 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 43815d7b-1787-42c1-b249-c2c72d01ad2b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:10:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:39.296 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f11084e-99c9-47ba-aac5-b3f38c139d59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:39.297 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[81ecd870-8560-4188-bb51-4a1cc22d28e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:39.964 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:39.966 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 05 10:10:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:39.969 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:39.970 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc5080a-f6da-4be6-b612-e0525a27c9c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:40 np0005546420.localdomain ceph-mon[298353]: pgmap v324: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 6.4 KiB/s wr, 2 op/s
Dec 05 10:10:40 np0005546420.localdomain podman[317360]: 
Dec 05 10:10:40 np0005546420.localdomain podman[317360]: 2025-12-05 10:10:40.622852877 +0000 UTC m=+0.094313003 container create 54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 05 10:10:40 np0005546420.localdomain systemd[1]: Started libpod-conmon-54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a.scope.
Dec 05 10:10:40 np0005546420.localdomain podman[317360]: 2025-12-05 10:10:40.577774365 +0000 UTC m=+0.049234571 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:10:40 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:10:40 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97fdd8b70e1cc05127dcbf6e75a3e49942ac66ee041d9132e07f89aa54bb72b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:40 np0005546420.localdomain podman[317360]: 2025-12-05 10:10:40.691182786 +0000 UTC m=+0.162642912 container init 54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:10:40 np0005546420.localdomain podman[317360]: 2025-12-05 10:10:40.702150645 +0000 UTC m=+0.173610811 container start 54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:10:40 np0005546420.localdomain dnsmasq[317378]: started, version 2.85 cachesize 150
Dec 05 10:10:40 np0005546420.localdomain dnsmasq[317378]: DNS service limited to local subnets
Dec 05 10:10:40 np0005546420.localdomain dnsmasq[317378]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:10:40 np0005546420.localdomain dnsmasq[317378]: warning: no upstream servers configured
Dec 05 10:10:40 np0005546420.localdomain dnsmasq-dhcp[317378]: DHCP, static leases only on 10.100.0.16, lease time 1d
Dec 05 10:10:40 np0005546420.localdomain dnsmasq-dhcp[317378]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:10:40 np0005546420.localdomain dnsmasq[317378]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 1 addresses
Dec 05 10:10:40 np0005546420.localdomain dnsmasq-dhcp[317378]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:10:40 np0005546420.localdomain dnsmasq-dhcp[317378]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:10:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:41.140 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:41 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "d095bb6f-0ba8-4ed5-9e70-b40104e570d0", "format": "json"}]: dispatch
Dec 05 10:10:41 np0005546420.localdomain systemd[1]: tmp-crun.gKOz0v.mount: Deactivated successfully.
Dec 05 10:10:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:41.730 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:42 np0005546420.localdomain ceph-mon[298353]: pgmap v325: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 5.2 KiB/s wr, 1 op/s
Dec 05 10:10:43 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:43.060 262769 INFO neutron.agent.dhcp.agent [None req-0014c356-2f8c-4b29-bcca-c9dee0edb90b - - - - - -] DHCP configuration for ports {'02057f88-1a3f-4ab4-a0ca-05b7fb70a51d', '3e74704f-5b87-479b-a0f2-9cc31811fac6', 'f4eb0905-f43e-4877-8a1c-12fde1a30c81', 'f76514dd-d742-4eac-9b43-a1be050fd678'} is completed
Dec 05 10:10:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:10:43 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:43.481 2 INFO neutron.agent.securitygroups_rpc [None req-d01f9f3e-ae84-4079-919f-76dbd61382ed 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['15591b44-9c5d-4fa1-bdbd-86617728aba4', '786481c0-4094-4fae-b4f0-fa1aecb51db7']
Dec 05 10:10:43 np0005546420.localdomain podman[317379]: 2025-12-05 10:10:43.503551528 +0000 UTC m=+0.083294483 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:10:43 np0005546420.localdomain podman[317379]: 2025-12-05 10:10:43.593700171 +0000 UTC m=+0.173443086 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 10:10:43 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:43.596 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:10:30Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f64d90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f642b0>], id=f4eb0905-f43e-4877-8a1c-12fde1a30c81, ip_allocation=immediate, mac_address=fa:16:3e:f9:6d:d7, name=tempest-PortsTestJSON-235381110, network_id=0f11084e-99c9-47ba-aac5-b3f38c139d59, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['786481c0-4094-4fae-b4f0-fa1aecb51db7'], standard_attr_id=2083, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:42Z on network 0f11084e-99c9-47ba-aac5-b3f38c139d59
Dec 05 10:10:43 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:43.600 262769 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.dhcp_release_cmd', '--privsep_sock_path', '/tmp/tmp08plvmhh/privsep.sock']
Dec 05 10:10:43 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:10:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:44.235 262769 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 10:10:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:44.127 317408 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:10:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:44.132 317408 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:10:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:44.135 317408 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 05 10:10:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:44.136 317408 INFO oslo.privsep.daemon [-] privsep daemon running as pid 317408
Dec 05 10:10:44 np0005546420.localdomain ceph-mon[298353]: pgmap v326: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 5.0 KiB/s wr, 1 op/s
Dec 05 10:10:44 np0005546420.localdomain dnsmasq-dhcp[317378]: DHCPRELEASE(tapf76514dd-d7) 10.100.0.11 fa:16:3e:f9:6d:d7
Dec 05 10:10:44 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:44.887 2 INFO neutron.agent.securitygroups_rpc [None req-3daa64bf-f117-4d8c-90b4-97c89eaa0f8a 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['786481c0-4094-4fae-b4f0-fa1aecb51db7']
Dec 05 10:10:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:45 np0005546420.localdomain dnsmasq[317378]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 1 addresses
Dec 05 10:10:45 np0005546420.localdomain dnsmasq-dhcp[317378]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:10:45 np0005546420.localdomain dnsmasq-dhcp[317378]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:10:45 np0005546420.localdomain podman[317430]: 2025-12-05 10:10:45.157742425 +0000 UTC m=+0.053229364 container kill 54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 10:10:45 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:45.399 262769 INFO neutron.agent.dhcp.agent [None req-155a8d01-530b-4f3b-871a-af2d8c147c5b - - - - - -] DHCP configuration for ports {'f4eb0905-f43e-4877-8a1c-12fde1a30c81'} is completed
Dec 05 10:10:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "ee146b37-fd03-4431-a7d8-313e59df415c", "format": "json"}]: dispatch
Dec 05 10:10:45 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3357921071' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:45 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3357921071' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:10:45 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:45.600 2 INFO neutron.agent.securitygroups_rpc [None req-82d78816-841a-466f-b480-1a70bf437762 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:10:45 np0005546420.localdomain dnsmasq[317378]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 0 addresses
Dec 05 10:10:45 np0005546420.localdomain dnsmasq-dhcp[317378]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:10:45 np0005546420.localdomain dnsmasq-dhcp[317378]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:10:45 np0005546420.localdomain podman[317467]: 2025-12-05 10:10:45.605728435 +0000 UTC m=+0.066468212 container kill 54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:10:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e146 e146: 6 total, 6 up, 6 in
Dec 05 10:10:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:46.142 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:46 np0005546420.localdomain ceph-mon[298353]: pgmap v327: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 5.0 KiB/s wr, 1 op/s
Dec 05 10:10:46 np0005546420.localdomain ceph-mon[298353]: osdmap e146: 6 total, 6 up, 6 in
Dec 05 10:10:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:46.732 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:10:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:10:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:10:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158661 "" "Go-http-client/1.1"
Dec 05 10:10:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:10:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19686 "" "Go-http-client/1.1"
Dec 05 10:10:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:10:48 np0005546420.localdomain ceph-mon[298353]: pgmap v329: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 6.7 KiB/s wr, 18 op/s
Dec 05 10:10:48 np0005546420.localdomain podman[317487]: 2025-12-05 10:10:48.51374682 +0000 UTC m=+0.092821006 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:10:48 np0005546420.localdomain podman[317487]: 2025-12-05 10:10:48.525038319 +0000 UTC m=+0.104112525 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:10:48 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:10:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:10:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:10:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:10:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:10:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:10:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:10:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:10:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:10:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:10:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:10:49 np0005546420.localdomain dnsmasq[317378]: exiting on receipt of SIGTERM
Dec 05 10:10:49 np0005546420.localdomain podman[317524]: 2025-12-05 10:10:49.27914949 +0000 UTC m=+0.064966457 container kill 54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:10:49 np0005546420.localdomain systemd[1]: libpod-54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a.scope: Deactivated successfully.
Dec 05 10:10:49 np0005546420.localdomain podman[317538]: 2025-12-05 10:10:49.354927509 +0000 UTC m=+0.062543572 container died 54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:49 np0005546420.localdomain podman[317538]: 2025-12-05 10:10:49.385419211 +0000 UTC m=+0.093035244 container cleanup 54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:10:49 np0005546420.localdomain systemd[1]: libpod-conmon-54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a.scope: Deactivated successfully.
Dec 05 10:10:49 np0005546420.localdomain podman[317540]: 2025-12-05 10:10:49.431580196 +0000 UTC m=+0.130018755 container remove 54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:10:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-97fdd8b70e1cc05127dcbf6e75a3e49942ac66ee041d9132e07f89aa54bb72b0-merged.mount: Deactivated successfully.
Dec 05 10:10:49 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54dcf29ee91be135714d1b6a78691aa91966c887e5fe60559525d587e580400a-userdata-shm.mount: Deactivated successfully.
Dec 05 10:10:49 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:49.814 262769 INFO neutron.agent.linux.ip_lib [None req-e18449cd-ea62-48c1-9e10-0488e72f3435 - - - - - -] Device tap5f3e046f-47 cannot be used as it has no MAC address
Dec 05 10:10:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:49.904 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:49 np0005546420.localdomain kernel: device tap5f3e046f-47 entered promiscuous mode
Dec 05 10:10:49 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929449.9134] manager: (tap5f3e046f-47): new Generic device (/org/freedesktop/NetworkManager/Devices/43)
Dec 05 10:10:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:49.914 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:49Z|00227|binding|INFO|Claiming lport 5f3e046f-4799-4ef7-b50c-5fd45526ccd0 for this chassis.
Dec 05 10:10:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:49Z|00228|binding|INFO|5f3e046f-4799-4ef7-b50c-5fd45526ccd0: Claiming unknown
Dec 05 10:10:49 np0005546420.localdomain systemd-udevd[317599]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:10:49 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:49.928 2 INFO neutron.agent.securitygroups_rpc [None req-aae22781-47b1-4320-95ea-cd4c9a77b012 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:10:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:49Z|00229|binding|INFO|Setting lport 5f3e046f-4799-4ef7-b50c-5fd45526ccd0 ovn-installed in OVS
Dec 05 10:10:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:49Z|00230|binding|INFO|Setting lport 5f3e046f-4799-4ef7-b50c-5fd45526ccd0 up in Southbound
Dec 05 10:10:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:49.934 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:49.933 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-7018fb5e-7de1-428e-a192-b352829f2392', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7018fb5e-7de1-428e-a192-b352829f2392', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '936331162fd849b28da8e38e2db0598a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71907140-60f0-4b51-82a2-84c70a48dfe9, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5f3e046f-4799-4ef7-b50c-5fd45526ccd0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:49.936 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5f3e046f-4799-4ef7-b50c-5fd45526ccd0 in datapath 7018fb5e-7de1-428e-a192-b352829f2392 bound to our chassis
Dec 05 10:10:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:49.937 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7018fb5e-7de1-428e-a192-b352829f2392 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:10:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:49.938 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2a033c-1fc6-4dd7-a9dd-36f7f8f4c88d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:49.960 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:50.002 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:50.034 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:50 np0005546420.localdomain podman[317643]: 2025-12-05 10:10:50.366587991 +0000 UTC m=+0.102235588 container create 8d09ced82ea14b9b05d360e2a17b5b285504fa721959bd43414edc3add1c4198 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:10:50 np0005546420.localdomain systemd[1]: Started libpod-conmon-8d09ced82ea14b9b05d360e2a17b5b285504fa721959bd43414edc3add1c4198.scope.
Dec 05 10:10:50 np0005546420.localdomain podman[317643]: 2025-12-05 10:10:50.320505228 +0000 UTC m=+0.056152865 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:10:50 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:10:50 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acc1eca46a3edbe6c40122a7aff376ff3a7abd81c002d9ce6e0a3a4581a19c7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:50 np0005546420.localdomain podman[317643]: 2025-12-05 10:10:50.452902466 +0000 UTC m=+0.188550063 container init 8d09ced82ea14b9b05d360e2a17b5b285504fa721959bd43414edc3add1c4198 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:10:50 np0005546420.localdomain podman[317643]: 2025-12-05 10:10:50.46179247 +0000 UTC m=+0.197440067 container start 8d09ced82ea14b9b05d360e2a17b5b285504fa721959bd43414edc3add1c4198 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:50 np0005546420.localdomain dnsmasq[317672]: started, version 2.85 cachesize 150
Dec 05 10:10:50 np0005546420.localdomain dnsmasq[317672]: DNS service limited to local subnets
Dec 05 10:10:50 np0005546420.localdomain dnsmasq[317672]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:10:50 np0005546420.localdomain dnsmasq[317672]: warning: no upstream servers configured
Dec 05 10:10:50 np0005546420.localdomain dnsmasq-dhcp[317672]: DHCP, static leases only on 10.100.0.16, lease time 1d
Dec 05 10:10:50 np0005546420.localdomain dnsmasq[317672]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 0 addresses
Dec 05 10:10:50 np0005546420.localdomain dnsmasq-dhcp[317672]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:10:50 np0005546420.localdomain dnsmasq-dhcp[317672]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:10:50 np0005546420.localdomain systemd[1]: tmp-crun.nVRBiY.mount: Deactivated successfully.
Dec 05 10:10:50 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "6b71d1bf-ee94-491e-8179-33ce177e53ad", "format": "json"}]: dispatch
Dec 05 10:10:50 np0005546420.localdomain ceph-mon[298353]: pgmap v330: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 6.7 KiB/s wr, 18 op/s
Dec 05 10:10:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:50.814 262769 INFO neutron.agent.dhcp.agent [None req-afdceffb-565b-4b5d-9b57-a2b50b6c0e32 - - - - - -] DHCP configuration for ports {'02057f88-1a3f-4ab4-a0ca-05b7fb70a51d', '3e74704f-5b87-479b-a0f2-9cc31811fac6', 'f76514dd-d742-4eac-9b43-a1be050fd678'} is completed
Dec 05 10:10:50 np0005546420.localdomain podman[317724]: 2025-12-05 10:10:50.935749812 +0000 UTC m=+0.056202016 container kill 8d09ced82ea14b9b05d360e2a17b5b285504fa721959bd43414edc3add1c4198 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 05 10:10:50 np0005546420.localdomain dnsmasq[317672]: exiting on receipt of SIGTERM
Dec 05 10:10:50 np0005546420.localdomain systemd[1]: libpod-8d09ced82ea14b9b05d360e2a17b5b285504fa721959bd43414edc3add1c4198.scope: Deactivated successfully.
Dec 05 10:10:50 np0005546420.localdomain podman[317711]: 2025-12-05 10:10:50.980416021 +0000 UTC m=+0.143614385 container create acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:10:50 np0005546420.localdomain podman[317711]: 2025-12-05 10:10:50.884726677 +0000 UTC m=+0.047925071 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:10:51 np0005546420.localdomain podman[317741]: 2025-12-05 10:10:51.027857005 +0000 UTC m=+0.069930560 container died 8d09ced82ea14b9b05d360e2a17b5b285504fa721959bd43414edc3add1c4198 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:10:51 np0005546420.localdomain systemd[1]: Started libpod-conmon-acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5.scope.
Dec 05 10:10:51 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:10:51 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aadb59304a91a5cec5ae4df36c1987861fba38af3201c3ac28345c51feeccc7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:51 np0005546420.localdomain podman[317741]: 2025-12-05 10:10:51.090368925 +0000 UTC m=+0.132442440 container remove 8d09ced82ea14b9b05d360e2a17b5b285504fa721959bd43414edc3add1c4198 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:10:51 np0005546420.localdomain podman[317711]: 2025-12-05 10:10:51.123727785 +0000 UTC m=+0.286926149 container init acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 05 10:10:51 np0005546420.localdomain systemd[1]: libpod-conmon-8d09ced82ea14b9b05d360e2a17b5b285504fa721959bd43414edc3add1c4198.scope: Deactivated successfully.
Dec 05 10:10:51 np0005546420.localdomain podman[317711]: 2025-12-05 10:10:51.133635161 +0000 UTC m=+0.296833525 container start acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:10:51 np0005546420.localdomain dnsmasq[317772]: started, version 2.85 cachesize 150
Dec 05 10:10:51 np0005546420.localdomain dnsmasq[317772]: DNS service limited to local subnets
Dec 05 10:10:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:51.183 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:51 np0005546420.localdomain dnsmasq[317772]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:10:51 np0005546420.localdomain dnsmasq[317772]: warning: no upstream servers configured
Dec 05 10:10:51 np0005546420.localdomain dnsmasq-dhcp[317772]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:10:51 np0005546420.localdomain dnsmasq[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/addn_hosts - 0 addresses
Dec 05 10:10:51 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/host
Dec 05 10:10:51 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/opts
Dec 05 10:10:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3acc1eca46a3edbe6c40122a7aff376ff3a7abd81c002d9ce6e0a3a4581a19c7-merged.mount: Deactivated successfully.
Dec 05 10:10:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d09ced82ea14b9b05d360e2a17b5b285504fa721959bd43414edc3add1c4198-userdata-shm.mount: Deactivated successfully.
Dec 05 10:10:51 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:51.697 262769 INFO neutron.agent.dhcp.agent [None req-65849c51-2b68-4009-a863-bc5dac121855 - - - - - -] DHCP configuration for ports {'f533db52-c5b5-441d-ac34-a84b8f19e026'} is completed
Dec 05 10:10:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:51.735 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:51 np0005546420.localdomain ceph-mon[298353]: pgmap v331: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 4.3 KiB/s wr, 18 op/s
Dec 05 10:10:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:52Z|00231|binding|INFO|Removing iface tapf76514dd-d7 ovn-installed in OVS
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.268 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 43815d7b-1787-42c1-b249-c2c72d01ad2b with type ""
Dec 05 10:10:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:52Z|00232|binding|INFO|Removing lport f76514dd-d742-4eac-9b43-a1be050fd678 ovn-installed in OVS
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.270 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73051091-e669-449c-b6ce-0fa3950b46d0, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=f76514dd-d742-4eac-9b43-a1be050fd678) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.272 159503 INFO neutron.agent.ovn.metadata.agent [-] Port f76514dd-d742-4eac-9b43-a1be050fd678 in datapath 0f11084e-99c9-47ba-aac5-b3f38c139d59 unbound from our chassis
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.274 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f11084e-99c9-47ba-aac5-b3f38c139d59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.275 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a1acade4-fcee-4221-a176-ebd928cad1b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:52.321 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.323 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73051091-e669-449c-b6ce-0fa3950b46d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3e74704f-5b87-479b-a0f2-9cc31811fac6) old=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.18 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:52.324 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.326 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3e74704f-5b87-479b-a0f2-9cc31811fac6 in datapath 0f11084e-99c9-47ba-aac5-b3f38c139d59 updated
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.330 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f11084e-99c9-47ba-aac5-b3f38c139d59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.331 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[97490bc6-2f0b-40e9-82b6-e291e6551eb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:52Z|00233|binding|INFO|Claiming lport f76514dd-d742-4eac-9b43-a1be050fd678 for this chassis.
Dec 05 10:10:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:52Z|00234|binding|INFO|f76514dd-d742-4eac-9b43-a1be050fd678: Claiming unknown
Dec 05 10:10:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:52Z|00235|binding|INFO|Setting lport f76514dd-d742-4eac-9b43-a1be050fd678 ovn-installed in OVS
Dec 05 10:10:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:10:52Z|00236|binding|INFO|Setting lport f76514dd-d742-4eac-9b43-a1be050fd678 up in Southbound
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.685 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:dhcp', 'neutron:host_id': 'np0005546420.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73051091-e669-449c-b6ce-0fa3950b46d0, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=f76514dd-d742-4eac-9b43-a1be050fd678) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:10:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:52.686 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.688 159503 INFO neutron.agent.ovn.metadata.agent [-] Port f76514dd-d742-4eac-9b43-a1be050fd678 in datapath 0f11084e-99c9-47ba-aac5-b3f38c139d59 bound to our chassis
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.692 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 0d6b9761-14e7-4038-888a-114f2c38489e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.692 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f11084e-99c9-47ba-aac5-b3f38c139d59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:10:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:10:52.693 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[0551de05-06ee-4f4f-8a28-b319ca1054e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:10:52 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3296225489' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:52 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3296225489' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:10:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "aa88b782-616d-444b-a472-35fb008023be", "format": "json"}]: dispatch
Dec 05 10:10:53 np0005546420.localdomain podman[317822]: 2025-12-05 10:10:53.28641515 +0000 UTC m=+0.104889249 container create 701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 05 10:10:53 np0005546420.localdomain podman[317822]: 2025-12-05 10:10:53.219557656 +0000 UTC m=+0.038031795 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:10:53 np0005546420.localdomain systemd[1]: Started libpod-conmon-701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da.scope.
Dec 05 10:10:53 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:10:53 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaad509e46a5ec22481eeec9ad9bf67e85290be62d8111f3db7503122aef757e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:10:53 np0005546420.localdomain podman[317822]: 2025-12-05 10:10:53.37773352 +0000 UTC m=+0.196207629 container init 701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 05 10:10:53 np0005546420.localdomain podman[317822]: 2025-12-05 10:10:53.386285593 +0000 UTC m=+0.204759702 container start 701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:10:53 np0005546420.localdomain dnsmasq[317841]: started, version 2.85 cachesize 150
Dec 05 10:10:53 np0005546420.localdomain dnsmasq[317841]: DNS service limited to local subnets
Dec 05 10:10:53 np0005546420.localdomain dnsmasq[317841]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:10:53 np0005546420.localdomain dnsmasq[317841]: warning: no upstream servers configured
Dec 05 10:10:53 np0005546420.localdomain dnsmasq-dhcp[317841]: DHCP, static leases only on 10.100.0.16, lease time 1d
Dec 05 10:10:53 np0005546420.localdomain dnsmasq-dhcp[317841]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:10:53 np0005546420.localdomain dnsmasq[317841]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 0 addresses
Dec 05 10:10:53 np0005546420.localdomain dnsmasq-dhcp[317841]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:10:53 np0005546420.localdomain dnsmasq-dhcp[317841]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:10:53 np0005546420.localdomain ceph-mon[298353]: pgmap v332: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 4.3 KiB/s wr, 19 op/s
Dec 05 10:10:53 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:10:53.805 2 INFO neutron.agent.securitygroups_rpc [None req-cad5a74d-414c-4770-8214-a1cf51074fd1 2c33b8c3808c4d2fb486611901223652 936331162fd849b28da8e38e2db0598a - - default default] Security group member updated ['95cf58e1-082a-420e-bea6-fa59114af408']
Dec 05 10:10:54 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:54.674 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:10:52Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a8f7d00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a02b640>], id=a4ae4a1b-8dd6-4911-9f5c-6cc52d542118, ip_allocation=immediate, mac_address=fa:16:3e:cc:b1:29, name=tempest-ExtraDHCPOptionsTestJSON-308966652, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:10:44Z, description=, dns_domain=, id=7018fb5e-7de1-428e-a192-b352829f2392, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-1299484161, port_security_enabled=True, project_id=936331162fd849b28da8e38e2db0598a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25610, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2124, status=ACTIVE, subnets=['5020690e-cb93-4b4a-b202-9b4662357741'], tags=[], tenant_id=936331162fd849b28da8e38e2db0598a, updated_at=2025-12-05T10:10:48Z, vlan_transparent=None, network_id=7018fb5e-7de1-428e-a192-b352829f2392, port_security_enabled=True, project_id=936331162fd849b28da8e38e2db0598a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['95cf58e1-082a-420e-bea6-fa59114af408'], standard_attr_id=2155, status=DOWN, tags=[], tenant_id=936331162fd849b28da8e38e2db0598a, updated_at=2025-12-05T10:10:52Z on network 7018fb5e-7de1-428e-a192-b352829f2392
Dec 05 10:10:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:10:55 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:55.912 262769 INFO neutron.agent.dhcp.agent [None req-03543bfa-b6f0-45b3-97bd-30682c2d323d - - - - - -] DHCP configuration for ports {'02057f88-1a3f-4ab4-a0ca-05b7fb70a51d', '3e74704f-5b87-479b-a0f2-9cc31811fac6', 'f76514dd-d742-4eac-9b43-a1be050fd678'} is completed
Dec 05 10:10:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:56.218 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:56 np0005546420.localdomain ceph-mon[298353]: pgmap v333: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 4.3 KiB/s wr, 19 op/s
Dec 05 10:10:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:10:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:10:56 np0005546420.localdomain podman[317858]: 2025-12-05 10:10:56.526777545 +0000 UTC m=+0.095305553 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:10:56 np0005546420.localdomain podman[317858]: 2025-12-05 10:10:56.541441657 +0000 UTC m=+0.109969675 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:10:56 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:10:56 np0005546420.localdomain podman[317860]: 2025-12-05 10:10:56.628301169 +0000 UTC m=+0.191382819 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 05 10:10:56 np0005546420.localdomain dnsmasq[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/addn_hosts - 1 addresses
Dec 05 10:10:56 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/host
Dec 05 10:10:56 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/opts
Dec 05 10:10:56 np0005546420.localdomain podman[317861]: 2025-12-05 10:10:56.669697137 +0000 UTC m=+0.228460454 container kill acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:10:56 np0005546420.localdomain podman[317860]: 2025-12-05 10:10:56.710512827 +0000 UTC m=+0.273594417 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:10:56 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:10:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:10:56.737 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:10:58 np0005546420.localdomain ceph-mon[298353]: pgmap v334: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 5.9 KiB/s wr, 31 op/s
Dec 05 10:10:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:10:58 np0005546420.localdomain podman[317919]: 2025-12-05 10:10:58.513543699 +0000 UTC m=+0.088043598 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:10:58 np0005546420.localdomain podman[317919]: 2025-12-05 10:10:58.523723394 +0000 UTC m=+0.098223293 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:10:58 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:10:58 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:10:58 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3889605546' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:58 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:10:58 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3889605546' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:10:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "aa88b782-616d-444b-a472-35fb008023be_e64cf724-7e03-4a8e-beab-a8f412d1da44", "force": true, "format": "json"}]: dispatch
Dec 05 10:10:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "aa88b782-616d-444b-a472-35fb008023be", "force": true, "format": "json"}]: dispatch
Dec 05 10:10:59 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3889605546' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:10:59 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3889605546' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:10:59 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:10:59.664 262769 INFO neutron.agent.dhcp.agent [None req-08af007e-66aa-400c-a0b1-baf9ebf1d46f - - - - - -] DHCP configuration for ports {'a4ae4a1b-8dd6-4911-9f5c-6cc52d542118'} is completed
Dec 05 10:11:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:00 np0005546420.localdomain ceph-mon[298353]: pgmap v335: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 4.0 KiB/s wr, 16 op/s
Dec 05 10:11:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:01.256 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:01.741 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:01 np0005546420.localdomain ceph-mon[298353]: pgmap v336: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 30 op/s
Dec 05 10:11:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3347321735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:11:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3347321735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:11:04 np0005546420.localdomain sudo[317938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:11:04 np0005546420.localdomain sudo[317938]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:11:04 np0005546420.localdomain sudo[317938]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:04 np0005546420.localdomain sudo[317956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:11:04 np0005546420.localdomain sudo[317956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:11:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:04.129 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:11:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:04.130 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:11:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:04.131 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:11:04 np0005546420.localdomain ceph-mon[298353]: pgmap v337: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 9.5 KiB/s wr, 29 op/s
Dec 05 10:11:04 np0005546420.localdomain sudo[317956]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:05 np0005546420.localdomain sudo[318006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:11:05 np0005546420.localdomain sudo[318006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:11:05 np0005546420.localdomain sudo[318006]: pam_unix(sudo:session): session closed for user root
Dec 05 10:11:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "6b71d1bf-ee94-491e-8179-33ce177e53ad_3fa5c92b-65c8-4238-9ca3-7e55fc957abc", "force": true, "format": "json"}]: dispatch
Dec 05 10:11:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "6b71d1bf-ee94-491e-8179-33ce177e53ad", "force": true, "format": "json"}]: dispatch
Dec 05 10:11:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:11:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:11:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:11:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:11:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:11:05 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:05.847 2 INFO neutron.agent.securitygroups_rpc [None req-aad81300-6ba5-4212-8b03-95be9bc3ee4e 2c33b8c3808c4d2fb486611901223652 936331162fd849b28da8e38e2db0598a - - default default] Security group member updated ['95cf58e1-082a-420e-bea6-fa59114af408']
Dec 05 10:11:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:06.290 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:06 np0005546420.localdomain ceph-mon[298353]: pgmap v338: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 9.5 KiB/s wr, 29 op/s
Dec 05 10:11:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:06.744 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e147 e147: 6 total, 6 up, 6 in
Dec 05 10:11:07 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e148 e148: 6 total, 6 up, 6 in
Dec 05 10:11:07 np0005546420.localdomain ceph-mon[298353]: osdmap e147: 6 total, 6 up, 6 in
Dec 05 10:11:07 np0005546420.localdomain ceph-mon[298353]: pgmap v340: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 19 op/s
Dec 05 10:11:08 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:08.177 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:04Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e8f550>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e8f970>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e8ff10>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e8f430>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e8feb0>], id=f7bc763b-d625-4213-9422-c67d48260c03, ip_allocation=immediate, mac_address=fa:16:3e:f5:bf:a3, name=tempest-ExtraDHCPOptionsTestJSON-1660557644, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:10:44Z, description=, dns_domain=, id=7018fb5e-7de1-428e-a192-b352829f2392, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-1299484161, port_security_enabled=True, project_id=936331162fd849b28da8e38e2db0598a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25610, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2124, status=ACTIVE, subnets=['5020690e-cb93-4b4a-b202-9b4662357741'], tags=[], tenant_id=936331162fd849b28da8e38e2db0598a, updated_at=2025-12-05T10:10:48Z, vlan_transparent=None, network_id=7018fb5e-7de1-428e-a192-b352829f2392, port_security_enabled=True, project_id=936331162fd849b28da8e38e2db0598a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['95cf58e1-082a-420e-bea6-fa59114af408'], standard_attr_id=2171, status=DOWN, tags=[], tenant_id=936331162fd849b28da8e38e2db0598a, updated_at=2025-12-05T10:11:04Z on network 7018fb5e-7de1-428e-a192-b352829f2392
Dec 05 10:11:08 np0005546420.localdomain ceph-mon[298353]: osdmap e148: 6 total, 6 up, 6 in
Dec 05 10:11:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:11:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:11:09 np0005546420.localdomain podman[318024]: 2025-12-05 10:11:09.526380003 +0000 UTC m=+0.091221057 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 10:11:09 np0005546420.localdomain podman[318024]: 2025-12-05 10:11:09.570520756 +0000 UTC m=+0.135361820 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=edpm, architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:11:09 np0005546420.localdomain podman[318025]: 2025-12-05 10:11:09.580644738 +0000 UTC m=+0.141097177 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:11:09 np0005546420.localdomain podman[318025]: 2025-12-05 10:11:09.588252263 +0000 UTC m=+0.148704732 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:11:09 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:11:09 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:11:09 np0005546420.localdomain ceph-mon[298353]: pgmap v342: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 11 KiB/s wr, 3 op/s
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.766337) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470766399, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1420, "num_deletes": 256, "total_data_size": 2048513, "memory_usage": 2076968, "flush_reason": "Manual Compaction"}
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470779153, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1332740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25229, "largest_seqno": 26644, "table_properties": {"data_size": 1327068, "index_size": 2950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13220, "raw_average_key_size": 20, "raw_value_size": 1315086, "raw_average_value_size": 2013, "num_data_blocks": 130, "num_entries": 653, "num_filter_entries": 653, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929390, "oldest_key_time": 1764929390, "file_creation_time": 1764929470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 12864 microseconds, and 5138 cpu microseconds.
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.779200) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1332740 bytes OK
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.779223) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.781102) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.781124) EVENT_LOG_v1 {"time_micros": 1764929470781118, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.781145) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2041641, prev total WAL file size 2041965, number of live WAL files 2.
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.782024) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303135' seq:72057594037927935, type:22 .. '6C6F676D0034323636' seq:0, type:0; will stop at (end)
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1301KB)], [42(16MB)]
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470782063, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 18361802, "oldest_snapshot_seqno": -1}
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 12610 keys, 17810630 bytes, temperature: kUnknown
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470886452, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 17810630, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17737622, "index_size": 40395, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31557, "raw_key_size": 337724, "raw_average_key_size": 26, "raw_value_size": 17521853, "raw_average_value_size": 1389, "num_data_blocks": 1532, "num_entries": 12610, "num_filter_entries": 12610, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.887396) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 17810630 bytes
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.889822) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.7 rd, 170.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 16.2 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(27.1) write-amplify(13.4) OK, records in: 13146, records dropped: 536 output_compression: NoCompression
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.889872) EVENT_LOG_v1 {"time_micros": 1764929470889848, "job": 24, "event": "compaction_finished", "compaction_time_micros": 104531, "compaction_time_cpu_micros": 49386, "output_level": 6, "num_output_files": 1, "total_output_size": 17810630, "num_input_records": 13146, "num_output_records": 12610, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470890342, "job": 24, "event": "table_file_deletion", "file_number": 44}
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470892917, "job": 24, "event": "table_file_deletion", "file_number": 42}
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.781923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.893040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.893049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.893052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.893055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:10 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:11:10.893058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:11:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "ee146b37-fd03-4431-a7d8-313e59df415c_edc5a584-db9d-4a4f-9d75-00283c84ff62", "force": true, "format": "json"}]: dispatch
Dec 05 10:11:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "ee146b37-fd03-4431-a7d8-313e59df415c", "force": true, "format": "json"}]: dispatch
Dec 05 10:11:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:11.335 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:11.746 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:12 np0005546420.localdomain ceph-mon[298353]: pgmap v343: 177 pgs: 177 active+clean; 146 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 25 KiB/s wr, 4 op/s
Dec 05 10:11:12 np0005546420.localdomain dnsmasq[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/addn_hosts - 2 addresses
Dec 05 10:11:12 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/host
Dec 05 10:11:12 np0005546420.localdomain podman[318080]: 2025-12-05 10:11:12.258460696 +0000 UTC m=+0.060510238 container kill acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 10:11:12 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/opts
Dec 05 10:11:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:14.372 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:14.372 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:11:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:11:14 np0005546420.localdomain ceph-mon[298353]: pgmap v344: 177 pgs: 177 active+clean; 146 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 25 KiB/s wr, 4 op/s
Dec 05 10:11:14 np0005546420.localdomain podman[318101]: 2025-12-05 10:11:14.512279044 +0000 UTC m=+0.091514917 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:11:14 np0005546420.localdomain podman[318101]: 2025-12-05 10:11:14.552698051 +0000 UTC m=+0.131933904 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:14 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:11:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:15 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:15.159 2 INFO neutron.agent.securitygroups_rpc [None req-8c15d9ea-25f6-4f94-9a5a-880c38bb33fb 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:15.499 262769 INFO neutron.agent.dhcp.agent [None req-65ce79fb-d326-44a6-aac5-a5c7671ae393 - - - - - -] DHCP configuration for ports {'f7bc763b-d625-4213-9422-c67d48260c03'} is completed
Dec 05 10:11:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e149 e149: 6 total, 6 up, 6 in
Dec 05 10:11:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:15.983 262769 INFO neutron.agent.linux.ip_lib [None req-20beac5b-726f-4d1e-8167-3a2ea33b3163 - - - - - -] Device tap9a7a21f4-04 cannot be used as it has no MAC address
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.011 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:16 np0005546420.localdomain kernel: device tap9a7a21f4-04 entered promiscuous mode
Dec 05 10:11:16 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929476.0217] manager: (tap9a7a21f4-04): new Generic device (/org/freedesktop/NetworkManager/Devices/44)
Dec 05 10:11:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:16Z|00237|binding|INFO|Claiming lport 9a7a21f4-04d4-407f-8b7b-206cf53af706 for this chassis.
Dec 05 10:11:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:16Z|00238|binding|INFO|9a7a21f4-04d4-407f-8b7b-206cf53af706: Claiming unknown
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.026 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:16 np0005546420.localdomain systemd-udevd[318136]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:11:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:16.039 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-8e6ab9b0-7159-4152-b7a3-0d448919a3d4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e6ab9b0-7159-4152-b7a3-0d448919a3d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d322de9-8153-4f0e-9c24-0df822bba185, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=9a7a21f4-04d4-407f-8b7b-206cf53af706) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:16.041 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 9a7a21f4-04d4-407f-8b7b-206cf53af706 in datapath 8e6ab9b0-7159-4152-b7a3-0d448919a3d4 bound to our chassis
Dec 05 10:11:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:16.045 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8e6ab9b0-7159-4152-b7a3-0d448919a3d4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:11:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:16.046 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[ea540e75-0415-442a-a53d-23d419ccd7b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:11:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap9a7a21f4-04: No such device
Dec 05 10:11:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap9a7a21f4-04: No such device
Dec 05 10:11:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:16Z|00239|binding|INFO|Setting lport 9a7a21f4-04d4-407f-8b7b-206cf53af706 ovn-installed in OVS
Dec 05 10:11:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:16Z|00240|binding|INFO|Setting lport 9a7a21f4-04d4-407f-8b7b-206cf53af706 up in Southbound
Dec 05 10:11:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap9a7a21f4-04: No such device
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.059 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap9a7a21f4-04: No such device
Dec 05 10:11:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap9a7a21f4-04: No such device
Dec 05 10:11:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap9a7a21f4-04: No such device
Dec 05 10:11:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap9a7a21f4-04: No such device
Dec 05 10:11:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap9a7a21f4-04: No such device
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.097 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.161 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.337 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:16 np0005546420.localdomain ceph-mon[298353]: pgmap v345: 177 pgs: 177 active+clean; 146 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s wr, 1 op/s
Dec 05 10:11:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "d095bb6f-0ba8-4ed5-9e70-b40104e570d0_18c17ccc-0586-4398-8488-62f94018964f", "force": true, "format": "json"}]: dispatch
Dec 05 10:11:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "d095bb6f-0ba8-4ed5-9e70-b40104e570d0", "force": true, "format": "json"}]: dispatch
Dec 05 10:11:16 np0005546420.localdomain ceph-mon[298353]: osdmap e149: 6 total, 6 up, 6 in
Dec 05 10:11:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:16.656 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a09ffb5b-2152-4c6b-b8d6-7ccca0efdb9f with type ""
Dec 05 10:11:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:16Z|00241|binding|INFO|Removing iface tap9a7a21f4-04 ovn-installed in OVS
Dec 05 10:11:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:16Z|00242|binding|INFO|Removing lport 9a7a21f4-04d4-407f-8b7b-206cf53af706 ovn-installed in OVS
Dec 05 10:11:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:16.657 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-8e6ab9b0-7159-4152-b7a3-0d448919a3d4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e6ab9b0-7159-4152-b7a3-0d448919a3d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d322de9-8153-4f0e-9c24-0df822bba185, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=9a7a21f4-04d4-407f-8b7b-206cf53af706) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.658 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:16.659 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 9a7a21f4-04d4-407f-8b7b-206cf53af706 in datapath 8e6ab9b0-7159-4152-b7a3-0d448919a3d4 unbound from our chassis
Dec 05 10:11:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:16.660 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8e6ab9b0-7159-4152-b7a3-0d448919a3d4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:11:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:16.660 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[8fec7b67-7620-4035-9921-d13f3ec42c32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.665 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.749 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e150 e150: 6 total, 6 up, 6 in
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:11:16 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:16.961 2 INFO neutron.agent.securitygroups_rpc [None req-ffc55049-227f-4f9b-b0f1-95b6549f2f8a 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['b0972671-904b-4941-be17-6352223dc520']
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.967 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:11:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:16.968 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.023 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:16Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0de250>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0ded30>], id=d659c101-5137-4f24-88bb-c6d997252957, ip_allocation=immediate, mac_address=fa:16:3e:ba:a3:3a, name=tempest-PortsTestJSON-1695385867, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:08:57Z, description=, dns_domain=, id=0f11084e-99c9-47ba-aac5-b3f38c139d59, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-1609140525, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20384, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=1631, status=ACTIVE, subnets=['e6ac5268-495d-4d71-b46f-2caf4d11210e', 'fbf2d39f-25da-475b-b090-ceb371cd9400'], tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:49Z, vlan_transparent=None, network_id=0f11084e-99c9-47ba-aac5-b3f38c139d59, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['b0972671-904b-4941-be17-6352223dc520'], standard_attr_id=2183, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:11:16Z on network 0f11084e-99c9-47ba-aac5-b3f38c139d59
Dec 05 10:11:17 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:17.030 2 INFO neutron.agent.securitygroups_rpc [None req-08311682-69b0-4e5c-a393-5c4aab9e3dd1 2c33b8c3808c4d2fb486611901223652 936331162fd849b28da8e38e2db0598a - - default default] Security group member updated ['95cf58e1-082a-420e-bea6-fa59114af408']
Dec 05 10:11:17 np0005546420.localdomain podman[318205]: 2025-12-05 10:11:17.061569575 +0000 UTC m=+0.109644885 container create c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e6ab9b0-7159-4152-b7a3-0d448919a3d4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true)
Dec 05 10:11:17 np0005546420.localdomain systemd[1]: Started libpod-conmon-c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11.scope.
Dec 05 10:11:17 np0005546420.localdomain systemd[1]: tmp-crun.2HWPUI.mount: Deactivated successfully.
Dec 05 10:11:17 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:11:17 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/441ee6dfb680a319bf83c20b93d099c5b75be238415d959b4c31774d5f17ba2c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:17 np0005546420.localdomain podman[318205]: 2025-12-05 10:11:17.023915363 +0000 UTC m=+0.071990663 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:11:17 np0005546420.localdomain podman[318205]: 2025-12-05 10:11:17.134084364 +0000 UTC m=+0.182159654 container init c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e6ab9b0-7159-4152-b7a3-0d448919a3d4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true)
Dec 05 10:11:17 np0005546420.localdomain podman[318205]: 2025-12-05 10:11:17.141887465 +0000 UTC m=+0.189962745 container start c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e6ab9b0-7159-4152-b7a3-0d448919a3d4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Dec 05 10:11:17 np0005546420.localdomain dnsmasq[318227]: started, version 2.85 cachesize 150
Dec 05 10:11:17 np0005546420.localdomain dnsmasq[318227]: DNS service limited to local subnets
Dec 05 10:11:17 np0005546420.localdomain dnsmasq[318227]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:11:17 np0005546420.localdomain dnsmasq[318227]: warning: no upstream servers configured
Dec 05 10:11:17 np0005546420.localdomain dnsmasq-dhcp[318227]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:11:17 np0005546420.localdomain dnsmasq[318227]: read /var/lib/neutron/dhcp/8e6ab9b0-7159-4152-b7a3-0d448919a3d4/addn_hosts - 0 addresses
Dec 05 10:11:17 np0005546420.localdomain dnsmasq-dhcp[318227]: read /var/lib/neutron/dhcp/8e6ab9b0-7159-4152-b7a3-0d448919a3d4/host
Dec 05 10:11:17 np0005546420.localdomain dnsmasq-dhcp[318227]: read /var/lib/neutron/dhcp/8e6ab9b0-7159-4152-b7a3-0d448919a3d4/opts
Dec 05 10:11:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:11:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:11:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:11:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 162301 "" "Go-http-client/1.1"
Dec 05 10:11:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:11:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20628 "" "Go-http-client/1.1"
Dec 05 10:11:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:17.316 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:17 np0005546420.localdomain kernel: device tap9a7a21f4-04 left promiscuous mode
Dec 05 10:11:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:17.327 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:17 np0005546420.localdomain dnsmasq[317841]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 1 addresses
Dec 05 10:11:17 np0005546420.localdomain podman[318286]: 2025-12-05 10:11:17.460846752 +0000 UTC m=+0.053306777 container kill 701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:11:17 np0005546420.localdomain dnsmasq-dhcp[317841]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:11:17 np0005546420.localdomain dnsmasq-dhcp[317841]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:11:17 np0005546420.localdomain dnsmasq[318227]: read /var/lib/neutron/dhcp/8e6ab9b0-7159-4152-b7a3-0d448919a3d4/addn_hosts - 0 addresses
Dec 05 10:11:17 np0005546420.localdomain dnsmasq-dhcp[318227]: read /var/lib/neutron/dhcp/8e6ab9b0-7159-4152-b7a3-0d448919a3d4/host
Dec 05 10:11:17 np0005546420.localdomain podman[318293]: 2025-12-05 10:11:17.483646326 +0000 UTC m=+0.051420369 container kill c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e6ab9b0-7159-4152-b7a3-0d448919a3d4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:11:17 np0005546420.localdomain dnsmasq-dhcp[318227]: read /var/lib/neutron/dhcp/8e6ab9b0-7159-4152-b7a3-0d448919a3d4/opts
Dec 05 10:11:17 np0005546420.localdomain dnsmasq[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/addn_hosts - 1 addresses
Dec 05 10:11:17 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/host
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.510 262769 INFO neutron.agent.dhcp.agent [None req-465a1b83-24c2-42b9-b088-ba79050f14be - - - - - -] DHCP configuration for ports {'8262ac79-7bff-40f0-afc8-94a0f920db84'} is completed
Dec 05 10:11:17 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/opts
Dec 05 10:11:17 np0005546420.localdomain podman[318258]: 2025-12-05 10:11:17.510600487 +0000 UTC m=+0.158766591 container kill acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent [None req-20beac5b-726f-4d1e-8167-3a2ea33b3163 - - - - - -] Unable to reload_allocations dhcp for 8e6ab9b0-7159-4152-b7a3-0d448919a3d4.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9a7a21f4-04 not found in namespace qdhcp-8e6ab9b0-7159-4152-b7a3-0d448919a3d4.
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9a7a21f4-04 not found in namespace qdhcp-8e6ab9b0-7159-4152-b7a3-0d448919a3d4.
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.517 262769 ERROR neutron.agent.dhcp.agent 
Dec 05 10:11:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:17.604 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:17.606 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 05 10:11:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:17.608 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:11:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:17.609 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[d050afdc-0e14-4b99-9052-aaf55c4316e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.692 262769 INFO neutron.agent.dhcp.agent [None req-5f5be37f-f2ad-4772-b524-6f9efea16c83 - - - - - -] Synchronizing state
Dec 05 10:11:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:17.800 262769 INFO neutron.agent.dhcp.agent [None req-a1a5e588-b4fe-490e-978a-dcb0c2e88801 - - - - - -] DHCP configuration for ports {'d659c101-5137-4f24-88bb-c6d997252957'} is completed
Dec 05 10:11:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e151 e151: 6 total, 6 up, 6 in
Dec 05 10:11:17 np0005546420.localdomain ceph-mon[298353]: osdmap e150: 6 total, 6 up, 6 in
Dec 05 10:11:17 np0005546420.localdomain ceph-mon[298353]: pgmap v348: 177 pgs: 177 active+clean; 146 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 24 KiB/s wr, 4 op/s
Dec 05 10:11:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:17.964 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:18.035 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:18.037 262769 INFO neutron.agent.dhcp.agent [None req-3e6fb4a2-8210-4d67-82b7-c5f0808d7414 - - - - - -] All active networks have been fetched through RPC.
Dec 05 10:11:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:18.039 262769 INFO neutron.agent.dhcp.agent [-] Starting network 8e6ab9b0-7159-4152-b7a3-0d448919a3d4 dhcp configuration
Dec 05 10:11:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:18.040 262769 INFO neutron.agent.dhcp.agent [-] Finished network 8e6ab9b0-7159-4152-b7a3-0d448919a3d4 dhcp configuration
Dec 05 10:11:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:18.040 262769 INFO neutron.agent.dhcp.agent [None req-3e6fb4a2-8210-4d67-82b7-c5f0808d7414 - - - - - -] Synchronizing state complete
Dec 05 10:11:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:18.197 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:10:52Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f408e0>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ee4700>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ee4d90>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ee4490>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ea3670>], id=a4ae4a1b-8dd6-4911-9f5c-6cc52d542118, ip_allocation=immediate, mac_address=fa:16:3e:cc:b1:29, name=tempest-new-port-name-533645745, network_id=7018fb5e-7de1-428e-a192-b352829f2392, port_security_enabled=True, project_id=936331162fd849b28da8e38e2db0598a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['95cf58e1-082a-420e-bea6-fa59114af408'], standard_attr_id=2155, status=DOWN, tags=[], tenant_id=936331162fd849b28da8e38e2db0598a, updated_at=2025-12-05T10:11:17Z on network 7018fb5e-7de1-428e-a192-b352829f2392
Dec 05 10:11:18 np0005546420.localdomain dnsmasq[318227]: exiting on receipt of SIGTERM
Dec 05 10:11:18 np0005546420.localdomain podman[318346]: 2025-12-05 10:11:18.282385674 +0000 UTC m=+0.066762572 container kill c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e6ab9b0-7159-4152-b7a3-0d448919a3d4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 10:11:18 np0005546420.localdomain systemd[1]: libpod-c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11.scope: Deactivated successfully.
Dec 05 10:11:18 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:18.319 2 INFO neutron.agent.securitygroups_rpc [None req-4e46051d-d7a3-4253-a7f0-9cf8592c0f42 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:11:18 np0005546420.localdomain podman[318367]: 2025-12-05 10:11:18.356083609 +0000 UTC m=+0.055148374 container died c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e6ab9b0-7159-4152-b7a3-0d448919a3d4, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:11:18 np0005546420.localdomain systemd[1]: tmp-crun.YOG4Oe.mount: Deactivated successfully.
Dec 05 10:11:18 np0005546420.localdomain podman[318367]: 2025-12-05 10:11:18.394432633 +0000 UTC m=+0.093497368 container cleanup c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e6ab9b0-7159-4152-b7a3-0d448919a3d4, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:11:18 np0005546420.localdomain systemd[1]: libpod-conmon-c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11.scope: Deactivated successfully.
Dec 05 10:11:18 np0005546420.localdomain podman[318369]: 2025-12-05 10:11:18.444329124 +0000 UTC m=+0.138594711 container remove c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e6ab9b0-7159-4152-b7a3-0d448919a3d4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:11:18 np0005546420.localdomain dnsmasq[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/addn_hosts - 1 addresses
Dec 05 10:11:18 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/host
Dec 05 10:11:18 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/opts
Dec 05 10:11:18 np0005546420.localdomain podman[318397]: 2025-12-05 10:11:18.486504646 +0000 UTC m=+0.113081773 container kill acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:11:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:18.742 262769 INFO neutron.agent.dhcp.agent [None req-d47e0ba9-77a4-494f-ae9b-07607f3d3f01 - - - - - -] DHCP configuration for ports {'a4ae4a1b-8dd6-4911-9f5c-6cc52d542118'} is completed
Dec 05 10:11:18 np0005546420.localdomain ceph-mon[298353]: osdmap e151: 6 total, 6 up, 6 in
Dec 05 10:11:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/4251045490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:11:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:11:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:11:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:11:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:11:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:11:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:11:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:11:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:11:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:11:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:11:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:11:19 np0005546420.localdomain podman[318425]: 2025-12-05 10:11:19.011157362 +0000 UTC m=+0.091121784 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:19 np0005546420.localdomain podman[318425]: 2025-12-05 10:11:19.051561939 +0000 UTC m=+0.131526371 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:11:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-441ee6dfb680a319bf83c20b93d099c5b75be238415d959b4c31774d5f17ba2c-merged.mount: Deactivated successfully.
Dec 05 10:11:19 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c94baab15b73bea0abf3652692dc70b63d3f773f351d8efd703d68b1cdd57f11-userdata-shm.mount: Deactivated successfully.
Dec 05 10:11:19 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d8e6ab9b0\x2d7159\x2d4152\x2db7a3\x2d0d448919a3d4.mount: Deactivated successfully.
Dec 05 10:11:19 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:11:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "93f98e8e-8d98-4266-b4f1-bf5b6e06e924_737ae99a-cc32-40c8-bb8b-961fe482b6b8", "force": true, "format": "json"}]: dispatch
Dec 05 10:11:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "93f98e8e-8d98-4266-b4f1-bf5b6e06e924", "force": true, "format": "json"}]: dispatch
Dec 05 10:11:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/794925693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:11:19 np0005546420.localdomain ceph-mon[298353]: pgmap v350: 177 pgs: 177 active+clean; 146 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 13 KiB/s wr, 3 op/s
Dec 05 10:11:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:20 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:20.796 2 INFO neutron.agent.securitygroups_rpc [None req-828b9ecd-077c-4f2f-8083-1ef0b87cf212 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:11:20 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:20.798 2 INFO neutron.agent.securitygroups_rpc [None req-63f2b7b1-c1f0-4243-813a-65200aed5790 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:21 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:21.388 2 INFO neutron.agent.securitygroups_rpc [None req-63f2b7b1-c1f0-4243-813a-65200aed5790 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:21.390 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:21 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:21.393 2 INFO neutron.agent.securitygroups_rpc [None req-a93a3ac1-e950-4704-8bef-447f823d18a8 2c33b8c3808c4d2fb486611901223652 936331162fd849b28da8e38e2db0598a - - default default] Security group member updated ['95cf58e1-082a-420e-bea6-fa59114af408']
Dec 05 10:11:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:21.750 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:21 np0005546420.localdomain ceph-mon[298353]: pgmap v351: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 31 KiB/s wr, 7 op/s
Dec 05 10:11:21 np0005546420.localdomain systemd[1]: tmp-crun.kQKhMS.mount: Deactivated successfully.
Dec 05 10:11:21 np0005546420.localdomain podman[318472]: 2025-12-05 10:11:21.819694026 +0000 UTC m=+0.081272700 container kill acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 05 10:11:21 np0005546420.localdomain dnsmasq[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/addn_hosts - 0 addresses
Dec 05 10:11:21 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/host
Dec 05 10:11:21 np0005546420.localdomain dnsmasq-dhcp[317772]: read /var/lib/neutron/dhcp/7018fb5e-7de1-428e-a192-b352829f2392/opts
Dec 05 10:11:21 np0005546420.localdomain dnsmasq[317841]: exiting on receipt of SIGTERM
Dec 05 10:11:21 np0005546420.localdomain podman[318492]: 2025-12-05 10:11:21.871248788 +0000 UTC m=+0.065529354 container kill 701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:11:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:21.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:21.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:21 np0005546420.localdomain systemd[1]: libpod-701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da.scope: Deactivated successfully.
Dec 05 10:11:21 np0005546420.localdomain podman[318509]: 2025-12-05 10:11:21.941913199 +0000 UTC m=+0.048888990 container died 701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:21 np0005546420.localdomain podman[318509]: 2025-12-05 10:11:21.983670078 +0000 UTC m=+0.090645859 container remove 701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:11:21 np0005546420.localdomain systemd[1]: libpod-conmon-701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da.scope: Deactivated successfully.
Dec 05 10:11:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-aaad509e46a5ec22481eeec9ad9bf67e85290be62d8111f3db7503122aef757e-merged.mount: Deactivated successfully.
Dec 05 10:11:22 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-701b5de6ab7727ded69c2f3d5c14db30057dcdd24a01e86e7739a591704954da-userdata-shm.mount: Deactivated successfully.
Dec 05 10:11:22 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "format": "json"}]: dispatch
Dec 05 10:11:22 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "force": true, "format": "json"}]: dispatch
Dec 05 10:11:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:22.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:22.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:23 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:23.116 2 INFO neutron.agent.securitygroups_rpc [None req-5be8e1bf-782e-4367-8d11-75bd094f07a1 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:23 np0005546420.localdomain ceph-mon[298353]: pgmap v352: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 816 B/s rd, 25 KiB/s wr, 6 op/s
Dec 05 10:11:24 np0005546420.localdomain dnsmasq[317772]: exiting on receipt of SIGTERM
Dec 05 10:11:24 np0005546420.localdomain podman[318558]: 2025-12-05 10:11:24.735013556 +0000 UTC m=+0.044500675 container kill acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:11:24 np0005546420.localdomain systemd[1]: libpod-acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5.scope: Deactivated successfully.
Dec 05 10:11:24 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:24.748 2 INFO neutron.agent.securitygroups_rpc [None req-2d2aa7bb-9ffc-4eb9-9e7e-32b49c722eb1 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:24 np0005546420.localdomain podman[318571]: 2025-12-05 10:11:24.814953744 +0000 UTC m=+0.064200273 container died acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:24 np0005546420.localdomain systemd[1]: tmp-crun.4KXgw7.mount: Deactivated successfully.
Dec 05 10:11:24 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5-userdata-shm.mount: Deactivated successfully.
Dec 05 10:11:24 np0005546420.localdomain podman[318571]: 2025-12-05 10:11:24.85367694 +0000 UTC m=+0.102923429 container cleanup acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:11:24 np0005546420.localdomain systemd[1]: libpod-conmon-acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5.scope: Deactivated successfully.
Dec 05 10:11:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:24.868 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:24 np0005546420.localdomain podman[318573]: 2025-12-05 10:11:24.898904036 +0000 UTC m=+0.134156493 container remove acd013f2a8e3b945d03c6a3788afac67ed49123f8881c7cfd2addfeba0e0d3d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7018fb5e-7de1-428e-a192-b352829f2392, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:24 np0005546420.localdomain kernel: device tap5f3e046f-47 left promiscuous mode
Dec 05 10:11:24 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:24Z|00243|binding|INFO|Releasing lport 5f3e046f-4799-4ef7-b50c-5fd45526ccd0 from this chassis (sb_readonly=0)
Dec 05 10:11:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:24.969 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:24 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:24Z|00244|binding|INFO|Setting lport 5f3e046f-4799-4ef7-b50c-5fd45526ccd0 down in Southbound
Dec 05 10:11:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:24.977 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-7018fb5e-7de1-428e-a192-b352829f2392', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7018fb5e-7de1-428e-a192-b352829f2392', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '936331162fd849b28da8e38e2db0598a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71907140-60f0-4b51-82a2-84c70a48dfe9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5f3e046f-4799-4ef7-b50c-5fd45526ccd0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
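
The UPDATE match above is ovsdbapp's row-event machinery at work: the metadata agent registers an event class against the Port_Binding table and reacts when the row's up/chassis state changes. A minimal sketch of such a class, assuming ovsdbapp's RowEvent API as shown in the event repr; the handler name is illustrative, not the exact neutron source:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires on Port_Binding updates, as in the log record above."""

        def __init__(self, agent):
            # Mirrors the repr in the log: events=('update',), table='Port_Binding',
            # conditions=None.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.agent = agent

        def run(self, event, row, old):
            # `old` carries only the changed columns; here the log shows
            # up=[True] -> up=[False] with the chassis reference cleared.
            self.agent.handle_port_binding_change(row, old)
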
Dec 05 10:11:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:24.979 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5f3e046f-4799-4ef7-b50c-5fd45526ccd0 in datapath 7018fb5e-7de1-428e-a192-b352829f2392 unbound from our chassis
Dec 05 10:11:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:24.982 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7018fb5e-7de1-428e-a192-b352829f2392, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:11:24 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:24.983 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[d32f8765-72aa-4356-99b6-09867a7b2b2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:11:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:24.991 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:25.217 262769 INFO neutron.agent.dhcp.agent [None req-020a97b1-1d4f-46de-89db-f969980ae70b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:11:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:25.348 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73051091-e669-449c-b6ce-0fa3950b46d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3e74704f-5b87-479b-a0f2-9cc31811fac6) old=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:25.351 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3e74704f-5b87-479b-a0f2-9cc31811fac6 in datapath 0f11084e-99c9-47ba-aac5-b3f38c139d59 updated
Dec 05 10:11:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:25.354 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 0d6b9761-14e7-4038-888a-114f2c38489e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:11:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:25.355 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f11084e-99c9-47ba-aac5-b3f38c139d59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:11:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:25.356 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[3360ce43-5b1c-4454-b67e-c5584172a1c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:11:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:25.380 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:25.381 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:25 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:25.382 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
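
The "Delaying updating chassis table for 3 seconds" line reflects a coalescing write: bursts of SB_Global nb_cfg bumps are folded together so only one Chassis_Private update goes out per window. A minimal sketch of that debounce pattern, as an illustrative helper rather than the neutron implementation:

    import threading

    class DebouncedWrite:
        """Coalesce rapid nb_cfg updates into a single delayed write."""

        def __init__(self, write_fn, delay=3.0):
            self._write_fn = write_fn
            self._delay = delay
            self._timer = None
            self._lock = threading.Lock()

        def schedule(self, nb_cfg):
            with self._lock:
                if self._timer is not None:
                    self._timer.cancel()  # restart the window on each new event
                self._timer = threading.Timer(
                    self._delay, self._write_fn, args=(nb_cfg,))
                self._timer.start()

Each SbGlobalUpdateEvent would call schedule(row.nb_cfg); only the last value seen inside the window is written back.
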
Dec 05 10:11:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:25.419 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:11:25 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7aadb59304a91a5cec5ae4df36c1987861fba38af3201c3ac28345c51feeccc7-merged.mount: Deactivated successfully.
Dec 05 10:11:25 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d7018fb5e\x2d7de1\x2d428e\x2da192\x2db352829f2392.mount: Deactivated successfully.
Dec 05 10:11:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e152 e152: 6 total, 6 up, 6 in
Dec 05 10:11:26 np0005546420.localdomain podman[318651]: 2025-12-05 10:11:26.024626809 +0000 UTC m=+0.085101519 container create e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 05 10:11:26 np0005546420.localdomain systemd[1]: Started libpod-conmon-e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9.scope.
Dec 05 10:11:26 np0005546420.localdomain systemd[1]: tmp-crun.Jnl38P.mount: Deactivated successfully.
Dec 05 10:11:26 np0005546420.localdomain podman[318651]: 2025-12-05 10:11:25.977067911 +0000 UTC m=+0.037542671 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:11:26 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:11:26 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b66a6f5e4747ea5a24f1fef1e926ce79bdac896580f0d17026fa41e90c9c44/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:26 np0005546420.localdomain podman[318651]: 2025-12-05 10:11:26.096831417 +0000 UTC m=+0.157306127 container init e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:11:26 np0005546420.localdomain podman[318651]: 2025-12-05 10:11:26.103243526 +0000 UTC m=+0.163718246 container start e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:11:26 np0005546420.localdomain dnsmasq[318668]: started, version 2.85 cachesize 150
Dec 05 10:11:26 np0005546420.localdomain dnsmasq[318668]: DNS service limited to local subnets
Dec 05 10:11:26 np0005546420.localdomain dnsmasq[318668]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:11:26 np0005546420.localdomain dnsmasq[318668]: warning: no upstream servers configured
Dec 05 10:11:26 np0005546420.localdomain dnsmasq-dhcp[318668]: DHCP, static leases only on 10.100.0.16, lease time 1d
Dec 05 10:11:26 np0005546420.localdomain dnsmasq-dhcp[318668]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:11:26 np0005546420.localdomain dnsmasq-dhcp[318668]: DHCP, static leases only on 10.100.0.32, lease time 1d
Dec 05 10:11:26 np0005546420.localdomain dnsmasq[318668]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 1 addresses
Dec 05 10:11:26 np0005546420.localdomain dnsmasq-dhcp[318668]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:11:26 np0005546420.localdomain dnsmasq-dhcp[318668]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
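
The three files dnsmasq reads here are generated by the Neutron DHCP agent for this network: addn_hosts feeds extra DNS records, host holds the static leases (hence "static leases only" above), and opts carries tagged DHCP options. Illustrative line formats only; the MAC and address are taken from the DHCPRELEASE later in this log, while the hostnames and tag are hypothetical:

    # host: one static lease per line -> MAC,hostname,IP
    fa:16:3e:ba:a3:3a,host-10-100-0-5.openstacklocal,10.100.0.5
    # addn_hosts: extra DNS records -> IP hostname [aliases]
    10.100.0.5 host-10-100-0-5.openstacklocal host-10-100-0-5
    # opts: tagged DHCP options
    tag:subnet-1,option:dns-server,10.100.0.2
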
Dec 05 10:11:26 np0005546420.localdomain ceph-mon[298353]: pgmap v353: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 481 B/s rd, 13 KiB/s wr, 3 op/s
Dec 05 10:11:26 np0005546420.localdomain ceph-mon[298353]: osdmap e152: 6 total, 6 up, 6 in
Dec 05 10:11:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:26.416 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:26 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:26.434 262769 INFO neutron.agent.dhcp.agent [None req-64451c78-5b75-48fb-a6eb-03384c3df411 - - - - - -] DHCP configuration for ports {'02057f88-1a3f-4ab4-a0ca-05b7fb70a51d', '3e74704f-5b87-479b-a0f2-9cc31811fac6', 'd659c101-5137-4f24-88bb-c6d997252957', 'f76514dd-d742-4eac-9b43-a1be050fd678'} is completed
Dec 05 10:11:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:11:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:11:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:26.757 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:26 np0005546420.localdomain podman[318669]: 2025-12-05 10:11:26.758833185 +0000 UTC m=+0.089161554 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
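
These health checks are transient systemd units wrapping `podman healthcheck run <id>`; the container's configured test ('/openstack/healthcheck node_exporter' in the config_data above) determines the health_status. A small sketch of driving the same check from Python, using a hypothetical helper rather than anything from edpm_ansible:

    import subprocess

    def is_healthy(container: str) -> bool:
        # `podman healthcheck run` exits 0 when the configured test passes,
        # non-zero when the container is unhealthy.
        res = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return res.returncode == 0
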
Dec 05 10:11:26 np0005546420.localdomain podman[318669]: 2025-12-05 10:11:26.793299019 +0000 UTC m=+0.123627358 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:11:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e153 e153: 6 total, 6 up, 6 in
Dec 05 10:11:26 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:11:26 np0005546420.localdomain systemd[1]: tmp-crun.PBhpZK.mount: Deactivated successfully.
Dec 05 10:11:26 np0005546420.localdomain podman[318685]: 2025-12-05 10:11:26.861238146 +0000 UTC m=+0.098093890 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:11:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:26.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:11:26 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:26.890 2 INFO neutron.agent.securitygroups_rpc [None req-b4396528-8b42-40af-ac75-cd0a49228f19 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['b0972671-904b-4941-be17-6352223dc520', '5983ed6d-e0ce-45eb-b8c9-1f29f23a87de', 'b7cbd0ad-c7a0-4c31-b4b2-a706028a8f44']
Dec 05 10:11:26 np0005546420.localdomain podman[318685]: 2025-12-05 10:11:26.895340608 +0000 UTC m=+0.132196272 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:11:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:26.900 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:11:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:26.901 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:11:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:26.901 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:11:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:26.901 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:11:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:26.902 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
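
This is the periodic Ceph capacity probe behind nova's disk accounting during update_available_resource. A sketch that reruns the same command and pulls out the cluster totals; the JSON key names follow the usual `ceph df --format=json` schema and should be treated as an assumption for other Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024 ** 3,
          "of", stats["total_bytes"] / 1024 ** 3)
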
Dec 05 10:11:26 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:11:26 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:26.919 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:16Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e6d5e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e6d100>], id=d659c101-5137-4f24-88bb-c6d997252957, ip_allocation=immediate, mac_address=fa:16:3e:ba:a3:3a, name=tempest-PortsTestJSON-1213014757, network_id=0f11084e-99c9-47ba-aac5-b3f38c139d59, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['5983ed6d-e0ce-45eb-b8c9-1f29f23a87de', 'b7cbd0ad-c7a0-4c31-b4b2-a706028a8f44'], standard_attr_id=2183, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:11:26Z on network 0f11084e-99c9-47ba-aac5-b3f38c139d59
Dec 05 10:11:26 np0005546420.localdomain dnsmasq-dhcp[318668]: DHCPRELEASE(tapf76514dd-d7) 10.100.0.5 fa:16:3e:ba:a3:3a
Dec 05 10:11:26 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:26.989 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:11:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2299155455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:11:27 np0005546420.localdomain ceph-mon[298353]: osdmap e153: 6 total, 6 up, 6 in
Dec 05 10:11:27 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:11:27 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2812963864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:11:27 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:27.395 2 INFO neutron.agent.securitygroups_rpc [None req-2dcc9d40-3349-45e7-88d4-8dacaf7c5016 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['b7cbd0ad-c7a0-4c31-b4b2-a706028a8f44', '5983ed6d-e0ce-45eb-b8c9-1f29f23a87de']
Dec 05 10:11:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:27.407 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:11:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:27.608 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:11:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:27.609 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11594MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:11:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:27.609 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:11:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:27.610 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:11:27 np0005546420.localdomain podman[318750]: 2025-12-05 10:11:27.627067899 +0000 UTC m=+0.066389501 container kill e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:11:27 np0005546420.localdomain dnsmasq[318668]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 1 addresses
Dec 05 10:11:27 np0005546420.localdomain dnsmasq-dhcp[318668]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:11:27 np0005546420.localdomain dnsmasq-dhcp[318668]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:11:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:28.385 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
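
The DbSetCommand above stamps the processed nb_cfg back into Chassis_Private.external_ids, closing the loop on the 3-second delay logged earlier. The equivalent call through ovsdbapp's generic API, assuming `sb_api` is an already-connected Southbound Backend; record and values are copied from the log line, and the if_exists keyword is assumed to be accepted by the ovsdbapp release in use:

    sb_api.db_set(
        'Chassis_Private',
        'c2157608-8f70-44ef-883c-3db22f367c76',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),
        if_exists=True,  # matches the command's if_exists=True flag
    ).execute(check_error=True)
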
Dec 05 10:11:28 np0005546420.localdomain ceph-mon[298353]: pgmap v356: 177 pgs: 177 active+clean; 146 MiB data, 798 MiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 23 KiB/s wr, 5 op/s
Dec 05 10:11:28 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2812963864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:11:28 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2831382554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:11:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:28.774 262769 INFO neutron.agent.dhcp.agent [None req-2a140a2e-58bc-442f-8d67-bbad5c05429f - - - - - -] DHCP configuration for ports {'d659c101-5137-4f24-88bb-c6d997252957'} is completed
Dec 05 10:11:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:28.793 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:11:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:28.794 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:11:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:28.866 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:11:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:28.882 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:29 np0005546420.localdomain dnsmasq[318668]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 0 addresses
Dec 05 10:11:29 np0005546420.localdomain dnsmasq-dhcp[318668]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:11:29 np0005546420.localdomain podman[318807]: 2025-12-05 10:11:29.220085537 +0000 UTC m=+0.053247145 container kill e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:11:29 np0005546420.localdomain dnsmasq-dhcp[318668]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:11:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:11:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:11:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3698165665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:11:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:29.296 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:11:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:29.305 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:11:29 np0005546420.localdomain podman[318820]: 2025-12-05 10:11:29.330712903 +0000 UTC m=+0.085373417 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:11:29 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:29.341 2 INFO neutron.agent.securitygroups_rpc [None req-cc1df5d5-c8de-4978-bc3b-3ad0af9779bd 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:11:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:29.342 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
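
Placement judges "inventory has not changed" against capacity derived per resource class as (total - reserved) * allocation_ratio. Worked out from the inventory data above, as a quick check rather than nova code:

    inventory = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 1,   'allocation_ratio': 1.0},
    }
    for rc, v in inventory.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0
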
Dec 05 10:11:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:29.344 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:11:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:29.344 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:11:29 np0005546420.localdomain podman[318820]: 2025-12-05 10:11:29.35235126 +0000 UTC m=+0.107011714 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 10:11:29 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:11:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3698165665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:11:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:30 np0005546420.localdomain ceph-mon[298353]: pgmap v357: 177 pgs: 177 active+clean; 146 MiB data, 798 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 9.9 KiB/s wr, 2 op/s
Dec 05 10:11:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:31.448 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:31 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:31.532 2 INFO neutron.agent.securitygroups_rpc [None req-95d48023-8205-4ffe-b872-52d057520238 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:11:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:31.756 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:31 np0005546420.localdomain ceph-mon[298353]: pgmap v358: 177 pgs: 177 active+clean; 146 MiB data, 798 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 12 op/s
Dec 05 10:11:32 np0005546420.localdomain dnsmasq[318668]: exiting on receipt of SIGTERM
Dec 05 10:11:32 np0005546420.localdomain podman[318865]: 2025-12-05 10:11:32.319422769 +0000 UTC m=+0.067538987 container kill e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Dec 05 10:11:32 np0005546420.localdomain systemd[1]: tmp-crun.dKBXuM.mount: Deactivated successfully.
Dec 05 10:11:32 np0005546420.localdomain systemd[1]: libpod-e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9.scope: Deactivated successfully.
Dec 05 10:11:32 np0005546420.localdomain podman[318878]: 2025-12-05 10:11:32.392940248 +0000 UTC m=+0.056600778 container died e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:11:32 np0005546420.localdomain podman[318878]: 2025-12-05 10:11:32.424525113 +0000 UTC m=+0.088185613 container cleanup e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:11:32 np0005546420.localdomain systemd[1]: libpod-conmon-e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9.scope: Deactivated successfully.
Dec 05 10:11:32 np0005546420.localdomain podman[318879]: 2025-12-05 10:11:32.475725714 +0000 UTC m=+0.135003029 container remove e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 10:11:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e2b66a6f5e4747ea5a24f1fef1e926ce79bdac896580f0d17026fa41e90c9c44-merged.mount: Deactivated successfully.
Dec 05 10:11:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e9eb3d63db2ecbf2ffb90cc4ade5744e2e19dc65b257986c31d6d8f9e36cdae9-userdata-shm.mount: Deactivated successfully.
Dec 05 10:11:34 np0005546420.localdomain ceph-mon[298353]: pgmap v359: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 15 KiB/s wr, 12 op/s
Dec 05 10:11:34 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:34.466 2 INFO neutron.agent.securitygroups_rpc [None req-ed8e1157-f7f5-45ba-ad76-18d51438c9ec 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:34 np0005546420.localdomain podman[318955]: 2025-12-05 10:11:34.754245356 +0000 UTC m=+0.082335394 container create 6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:11:34 np0005546420.localdomain systemd[1]: Started libpod-conmon-6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d.scope.
Dec 05 10:11:34 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:11:34 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7391c565e0c0a8878d6d665b9179b2422fab59c833df72c3f5a8c2c2ab9bf9a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:34 np0005546420.localdomain podman[318955]: 2025-12-05 10:11:34.719881794 +0000 UTC m=+0.047971812 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:11:34 np0005546420.localdomain podman[318955]: 2025-12-05 10:11:34.82435142 +0000 UTC m=+0.152441418 container init 6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:11:34 np0005546420.localdomain podman[318955]: 2025-12-05 10:11:34.834489192 +0000 UTC m=+0.162579200 container start 6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:11:34 np0005546420.localdomain dnsmasq[318973]: started, version 2.85 cachesize 150
Dec 05 10:11:34 np0005546420.localdomain dnsmasq[318973]: DNS service limited to local subnets
Dec 05 10:11:34 np0005546420.localdomain dnsmasq[318973]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:11:34 np0005546420.localdomain dnsmasq[318973]: warning: no upstream servers configured
Dec 05 10:11:34 np0005546420.localdomain dnsmasq-dhcp[318973]: DHCP, static leases only on 10.100.0.16, lease time 1d
Dec 05 10:11:34 np0005546420.localdomain dnsmasq-dhcp[318973]: DHCP, static leases only on 10.100.0.32, lease time 1d
Dec 05 10:11:34 np0005546420.localdomain dnsmasq[318973]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/addn_hosts - 0 addresses
Dec 05 10:11:34 np0005546420.localdomain dnsmasq-dhcp[318973]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/host
Dec 05 10:11:34 np0005546420.localdomain dnsmasq-dhcp[318973]: read /var/lib/neutron/dhcp/0f11084e-99c9-47ba-aac5-b3f38c139d59/opts
Dec 05 10:11:34 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:34.937 2 INFO neutron.agent.securitygroups_rpc [None req-123b23be-a9bb-4a1f-9d98-ce4fe835d14f 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:35 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:35.162 262769 INFO neutron.agent.dhcp.agent [None req-3c508d6d-643a-4eb2-a5ee-6abf15ebf30d - - - - - -] DHCP configuration for ports {'02057f88-1a3f-4ab4-a0ca-05b7fb70a51d', '3e74704f-5b87-479b-a0f2-9cc31811fac6', 'f76514dd-d742-4eac-9b43-a1be050fd678'} is completed
Dec 05 10:11:35 np0005546420.localdomain dnsmasq[318973]: exiting on receipt of SIGTERM
Dec 05 10:11:35 np0005546420.localdomain podman[318991]: 2025-12-05 10:11:35.280096229 +0000 UTC m=+0.060834599 container kill 6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:11:35 np0005546420.localdomain systemd[1]: libpod-6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d.scope: Deactivated successfully.
Dec 05 10:11:35 np0005546420.localdomain podman[319002]: 2025-12-05 10:11:35.351846444 +0000 UTC m=+0.060539280 container died 6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:11:35 np0005546420.localdomain podman[319002]: 2025-12-05 10:11:35.38087042 +0000 UTC m=+0.089563226 container cleanup 6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:11:35 np0005546420.localdomain systemd[1]: libpod-conmon-6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d.scope: Deactivated successfully.
Dec 05 10:11:35 np0005546420.localdomain podman[319005]: 2025-12-05 10:11:35.440166801 +0000 UTC m=+0.143262084 container remove 6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f11084e-99c9-47ba-aac5-b3f38c139d59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:11:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:35.453 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:35 np0005546420.localdomain kernel: device tapf76514dd-d7 left promiscuous mode
Dec 05 10:11:35 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:35Z|00245|binding|INFO|Releasing lport f76514dd-d742-4eac-9b43-a1be050fd678 from this chassis (sb_readonly=0)
Dec 05 10:11:35 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:35Z|00246|binding|INFO|Setting lport f76514dd-d742-4eac-9b43-a1be050fd678 down in Southbound
Dec 05 10:11:35 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:35.464 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28 10.100.0.35/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:dhcp', 'neutron:host_id': '', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73051091-e669-449c-b6ce-0fa3950b46d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=f76514dd-d742-4eac-9b43-a1be050fd678) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:35 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:35.466 159503 INFO neutron.agent.ovn.metadata.agent [-] Port f76514dd-d742-4eac-9b43-a1be050fd678 in datapath 0f11084e-99c9-47ba-aac5-b3f38c139d59 unbound from our chassis
Dec 05 10:11:35 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:35.468 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f11084e-99c9-47ba-aac5-b3f38c139d59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:11:35 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:35.469 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[cadeac79-20cb-4c5f-a70d-72896fa0ad42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
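The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp's generic row-event machinery at work (hence the "matches .../ovsdbapp/backend/ovs_idl/event.py:43" suffix): an event object carries an (events, table, conditions) tuple, and the IDL's notify handler calls run() for each committed row change that matches. A minimal sketch of such an event, assuming ovsdbapp's RowEvent interface and a hypothetical class name rather than neutron's actual implementation:

    # Sketch only: mirrors the (events=('update',), table='Port_Binding',
    # conditions=None) tuple printed in the journal line above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingWatcher(row_event.RowEvent):  # hypothetical name
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries only the columns that changed (here: up, chassis),
            # which is how an agent can tell a bind from an unbind.
            print('lport %s chassis change' % row.logical_port)

An instance would be registered with the connection's notify handler (ovsdbapp's watch_event); the metadata agent does essentially this, and the old=Port_Binding(up=[True], chassis=[...]) above is what let it conclude the port was unbound from this chassis.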
Dec 05 10:11:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:35.477 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:35 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-7391c565e0c0a8878d6d665b9179b2422fab59c833df72c3f5a8c2c2ab9bf9a8-merged.mount: Deactivated successfully.
Dec 05 10:11:35 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f3fff81143713680e0b47199fb0e0f023e676c763d9cb55552c672996944f6d-userdata-shm.mount: Deactivated successfully.
Dec 05 10:11:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e154 e154: 6 total, 6 up, 6 in
Dec 05 10:11:36 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:36.135 262769 INFO neutron.agent.dhcp.agent [None req-6947d47e-ed97-43df-92f6-1d0ec56f77b5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:11:36 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d0f11084e\x2d99c9\x2d47ba\x2daac5\x2db3f38c139d59.mount: Deactivated successfully.
Dec 05 10:11:36 np0005546420.localdomain ceph-mon[298353]: pgmap v360: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 13 KiB/s wr, 10 op/s
Dec 05 10:11:36 np0005546420.localdomain ceph-mon[298353]: osdmap e154: 6 total, 6 up, 6 in
Dec 05 10:11:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:36.508 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:36 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:36.532 2 INFO neutron.agent.securitygroups_rpc [None req-55cddc01-31b2-4409-889c-7116c714aef0 3bf116b8d9ba48c49fc0ffd65e2e4fda 8cb09bff88504c818b275bb285c1a663 - - default default] Security group rule updated ['cbc56729-8ff3-450f-b83f-5c75940540af']
Dec 05 10:11:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:36.756 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:37 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:37.158 2 INFO neutron.agent.securitygroups_rpc [None req-965108e7-c0e8-4e67-9212-37e12b776ebc 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 05 10:11:37 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:37.187 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:11:37 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:37.482 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:11:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:37.738 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:38 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:38.394 2 INFO neutron.agent.securitygroups_rpc [None req-285387f2-eddd-474f-887d-e0e2d505f1be 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:11:38 np0005546420.localdomain ceph-mon[298353]: pgmap v362: 177 pgs: 177 active+clean; 171 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 926 KiB/s wr, 43 op/s
Dec 05 10:11:38 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:11:38 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e155 e155: 6 total, 6 up, 6 in
Dec 05 10:11:38 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:38.948 2 INFO neutron.agent.securitygroups_rpc [None req-aa155b96-5a12-44b5-9136-21c1b0d4baef 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:11:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b7c2d7d8-57b5-40a5-9e26-8515c67f1048", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:11:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b7c2d7d8-57b5-40a5-9e26-8515c67f1048", "format": "json"}]: dispatch
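The two "fs subvolume" dispatches above come from the CephFS driver of the shared-filesystem service (client.openstack; in this deployment most likely Manila) provisioning a 1 GiB subvolume. Roughly the same calls can be issued by hand through the ceph CLI; a sketch, assuming a reachable cluster and a client.openstack keyring (flag spelling can vary between Ceph releases, so check `ceph fs subvolume create -h` first):

    import subprocess

    SUB = "b7c2d7d8-57b5-40a5-9e26-8515c67f1048"  # sub_name from the lines above

    # fs subvolume create: 1073741824 bytes (1 GiB), isolated RADOS
    # namespace, mode 0755 -- the same arguments as the dispatched command.
    subprocess.run(["ceph", "--name", "client.openstack", "fs", "subvolume",
                    "create", "cephfs", SUB, "--size", "1073741824",
                    "--namespace-isolated", "--mode", "0755"], check=True)

    # fs subvolume getpath: prints the CephFS path backing the new subvolume.
    out = subprocess.run(["ceph", "--name", "client.openstack", "fs",
                          "subvolume", "getpath", "cephfs", SUB],
                         check=True, capture_output=True, text=True)
    print(out.stdout.strip())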
Dec 05 10:11:39 np0005546420.localdomain ceph-mon[298353]: osdmap e155: 6 total, 6 up, 6 in
Dec 05 10:11:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:11:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:11:39 np0005546420.localdomain podman[319034]: 2025-12-05 10:11:39.859661647 +0000 UTC m=+0.098545223 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Dec 05 10:11:39 np0005546420.localdomain podman[319034]: 2025-12-05 10:11:39.900618022 +0000 UTC m=+0.139501618 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64)
Dec 05 10:11:39 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:11:39 np0005546420.localdomain podman[319035]: 2025-12-05 10:11:39.956717914 +0000 UTC m=+0.192407972 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:11:39 np0005546420.localdomain podman[319035]: 2025-12-05 10:11:39.994430888 +0000 UTC m=+0.230120996 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:11:40 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
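Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair above is one systemd-driven health probe: a transient unit invokes the healthcheck command baked into the container (the 'healthcheck' key in config_data), and its exit status becomes the health_status=healthy seen in the container event. The same probe can be run by hand; a small sketch:

    import subprocess

    def container_healthy(ctr: str) -> bool:
        # `podman healthcheck run` executes the container's configured
        # healthcheck (here: /openstack/healthcheck ...) and exits 0 on success.
        return subprocess.run(["podman", "healthcheck", "run", ctr]).returncode == 0

    print(container_healthy("podman_exporter"))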
Dec 05 10:11:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:40 np0005546420.localdomain ceph-mon[298353]: pgmap v364: 177 pgs: 177 active+clean; 171 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 595 KiB/s rd, 1.1 MiB/s wr, 42 op/s
Dec 05 10:11:41 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:41.105 262769 INFO neutron.agent.linux.ip_lib [None req-030c48a4-d364-4e9a-b25c-3688bcb82744 - - - - - -] Device tap5cce7e57-65 cannot be used as it has no MAC address
Dec 05 10:11:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:41.126 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:41 np0005546420.localdomain kernel: device tap5cce7e57-65 entered promiscuous mode
Dec 05 10:11:41 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929501.1372] manager: (tap5cce7e57-65): new Generic device (/org/freedesktop/NetworkManager/Devices/45)
Dec 05 10:11:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:41.136 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:41 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:41Z|00247|binding|INFO|Claiming lport 5cce7e57-6563-4a1a-b84a-1378fbe7405c for this chassis.
Dec 05 10:11:41 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:41Z|00248|binding|INFO|5cce7e57-6563-4a1a-b84a-1378fbe7405c: Claiming unknown
Dec 05 10:11:41 np0005546420.localdomain systemd-udevd[319088]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:11:41 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5cce7e57-65: No such device
Dec 05 10:11:41 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5cce7e57-65: No such device
Dec 05 10:11:41 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:41Z|00249|binding|INFO|Setting lport 5cce7e57-6563-4a1a-b84a-1378fbe7405c ovn-installed in OVS
Dec 05 10:11:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:41.171 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:41.174 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:41 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5cce7e57-65: No such device
Dec 05 10:11:41 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5cce7e57-65: No such device
Dec 05 10:11:41 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5cce7e57-65: No such device
Dec 05 10:11:41 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5cce7e57-65: No such device
Dec 05 10:11:41 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5cce7e57-65: No such device
Dec 05 10:11:41 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5cce7e57-65: No such device
Dec 05 10:11:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:41.213 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:41.245 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:41 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:41.290 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-b556e8b7-35b7-4923-b2c9-552415bb1dda', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b556e8b7-35b7-4923-b2c9-552415bb1dda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dba761eb9482439aa79c2d9ffe5c0dfa', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa2ad848-f2e9-4900-9f38-972020b63aa9, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5cce7e57-6563-4a1a-b84a-1378fbe7405c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:41 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:41Z|00250|binding|INFO|Setting lport 5cce7e57-6563-4a1a-b84a-1378fbe7405c up in Southbound
Dec 05 10:11:41 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:41.292 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5cce7e57-6563-4a1a-b84a-1378fbe7405c in datapath b556e8b7-35b7-4923-b2c9-552415bb1dda bound to our chassis
Dec 05 10:11:41 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:41.293 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b556e8b7-35b7-4923-b2c9-552415bb1dda or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:11:41 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:41.294 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[c9896882-d234-490d-a832-902bff942c08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:11:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:41.549 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:41.759 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:41 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2527535918' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:11:41 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2527535918' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:11:41 np0005546420.localdomain ceph-mon[298353]: pgmap v365: 177 pgs: 177 active+clean; 193 MiB data, 884 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Dec 05 10:11:42 np0005546420.localdomain podman[319159]: 2025-12-05 10:11:42.158581148 +0000 UTC m=+0.110293416 container create ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b556e8b7-35b7-4923-b2c9-552415bb1dda, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:11:42 np0005546420.localdomain podman[319159]: 2025-12-05 10:11:42.097728449 +0000 UTC m=+0.049440727 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:11:42 np0005546420.localdomain systemd[1]: Started libpod-conmon-ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c.scope.
Dec 05 10:11:42 np0005546420.localdomain systemd[1]: tmp-crun.5F4HgI.mount: Deactivated successfully.
Dec 05 10:11:42 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:11:42 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13e6c6149839e5785901394f04786c50fc33b41abf00b53bc7eb4313108670f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:42 np0005546420.localdomain podman[319159]: 2025-12-05 10:11:42.242026625 +0000 UTC m=+0.193738893 container init ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b556e8b7-35b7-4923-b2c9-552415bb1dda, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:11:42 np0005546420.localdomain podman[319159]: 2025-12-05 10:11:42.253120937 +0000 UTC m=+0.204833215 container start ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b556e8b7-35b7-4923-b2c9-552415bb1dda, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 05 10:11:42 np0005546420.localdomain dnsmasq[319176]: started, version 2.85 cachesize 150
Dec 05 10:11:42 np0005546420.localdomain dnsmasq[319176]: DNS service limited to local subnets
Dec 05 10:11:42 np0005546420.localdomain dnsmasq[319176]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:11:42 np0005546420.localdomain dnsmasq[319176]: warning: no upstream servers configured
Dec 05 10:11:42 np0005546420.localdomain dnsmasq-dhcp[319176]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:11:42 np0005546420.localdomain dnsmasq[319176]: read /var/lib/neutron/dhcp/b556e8b7-35b7-4923-b2c9-552415bb1dda/addn_hosts - 0 addresses
Dec 05 10:11:42 np0005546420.localdomain dnsmasq-dhcp[319176]: read /var/lib/neutron/dhcp/b556e8b7-35b7-4923-b2c9-552415bb1dda/host
Dec 05 10:11:42 np0005546420.localdomain dnsmasq-dhcp[319176]: read /var/lib/neutron/dhcp/b556e8b7-35b7-4923-b2c9-552415bb1dda/opts
Dec 05 10:11:42 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:42.308 262769 INFO neutron.agent.dhcp.agent [None req-030c48a4-d364-4e9a-b25c-3688bcb82744 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:39Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a873c70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a873850>], id=151dbc55-8865-427a-af52-e1a4957d21ea, ip_allocation=immediate, mac_address=fa:16:3e:d6:d9:2a, name=tempest-PortsIpV6TestJSON-842128654, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:11:38Z, description=, dns_domain=, id=b556e8b7-35b7-4923-b2c9-552415bb1dda, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-263825973, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=63339, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2259, status=ACTIVE, subnets=['fecf16c8-d76d-4d28-917b-30bd0515c10a'], tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:11:39Z, vlan_transparent=None, network_id=b556e8b7-35b7-4923-b2c9-552415bb1dda, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2267, status=DOWN, tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:11:39Z on network b556e8b7-35b7-4923-b2c9-552415bb1dda
Dec 05 10:11:42 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:42.446 262769 INFO neutron.agent.dhcp.agent [None req-9d9f7f3a-7ff1-4ac5-9854-151810dd59f2 - - - - - -] DHCP configuration for ports {'3780cfdf-5ac6-44ba-98f7-e0236be19070'} is completed
Dec 05 10:11:42 np0005546420.localdomain dnsmasq[319176]: read /var/lib/neutron/dhcp/b556e8b7-35b7-4923-b2c9-552415bb1dda/addn_hosts - 1 addresses
Dec 05 10:11:42 np0005546420.localdomain dnsmasq-dhcp[319176]: read /var/lib/neutron/dhcp/b556e8b7-35b7-4923-b2c9-552415bb1dda/host
Dec 05 10:11:42 np0005546420.localdomain podman[319195]: 2025-12-05 10:11:42.513583748 +0000 UTC m=+0.064327087 container kill ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b556e8b7-35b7-4923-b2c9-552415bb1dda, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:11:42 np0005546420.localdomain dnsmasq-dhcp[319176]: read /var/lib/neutron/dhcp/b556e8b7-35b7-4923-b2c9-552415bb1dda/opts
Dec 05 10:11:42 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:42.767 262769 INFO neutron.agent.dhcp.agent [None req-b74d356a-79d2-4b6c-9905-7d3ad01f5ce4 - - - - - -] DHCP configuration for ports {'151dbc55-8865-427a-af52-e1a4957d21ea'} is completed
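The sequence above is one reload_allocations cycle: the DHCP agent rewrites the addn_hosts/host/opts files under /var/lib/neutron/dhcp/<network-id>/ and then signals the containerized dnsmasq (the `podman kill` event at 10:11:42.513 delivers the signal, most likely HUP, which makes dnsmasq re-read exactly those three files, as logged immediately after). A sketch of the same mechanism against a bare dnsmasq process, using a hypothetical helper and an illustrative dhcp-hostsfile line:

    import os
    import signal

    NET_DIR = "/var/lib/neutron/dhcp/b556e8b7-35b7-4923-b2c9-552415bb1dda"

    def reload_allocations(dnsmasq_pid: int, mac: str, name: str, addr: str):
        # Append a dhcp-hostsfile entry (dnsmasq identifies fields by shape;
        # IPv6 addresses are bracketed), then HUP dnsmasq so it re-reads
        # addn_hosts/host/opts.
        with open(os.path.join(NET_DIR, "host"), "a") as f:
            f.write(f"{mac},{name},[{addr}]\n")
        os.kill(dnsmasq_pid, signal.SIGHUP)

A TERM instead of a HUP tears the process down rather than reloading it, which is what happens at 10:11:43 ("exiting on receipt of SIGTERM") when the network is deleted.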
Dec 05 10:11:42 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2922772668' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:11:42 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2922772668' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:11:42 np0005546420.localdomain dnsmasq[319176]: read /var/lib/neutron/dhcp/b556e8b7-35b7-4923-b2c9-552415bb1dda/addn_hosts - 0 addresses
Dec 05 10:11:42 np0005546420.localdomain dnsmasq-dhcp[319176]: read /var/lib/neutron/dhcp/b556e8b7-35b7-4923-b2c9-552415bb1dda/host
Dec 05 10:11:42 np0005546420.localdomain dnsmasq-dhcp[319176]: read /var/lib/neutron/dhcp/b556e8b7-35b7-4923-b2c9-552415bb1dda/opts
Dec 05 10:11:42 np0005546420.localdomain podman[319232]: 2025-12-05 10:11:42.846464024 +0000 UTC m=+0.068875967 container kill ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b556e8b7-35b7-4923-b2c9-552415bb1dda, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:11:43 np0005546420.localdomain dnsmasq[319176]: exiting on receipt of SIGTERM
Dec 05 10:11:43 np0005546420.localdomain podman[319271]: 2025-12-05 10:11:43.249564308 +0000 UTC m=+0.060125847 container kill ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b556e8b7-35b7-4923-b2c9-552415bb1dda, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:11:43 np0005546420.localdomain systemd[1]: libpod-ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c.scope: Deactivated successfully.
Dec 05 10:11:43 np0005546420.localdomain podman[319283]: 2025-12-05 10:11:43.317515666 +0000 UTC m=+0.052664916 container died ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b556e8b7-35b7-4923-b2c9-552415bb1dda, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:11:43 np0005546420.localdomain podman[319283]: 2025-12-05 10:11:43.347318236 +0000 UTC m=+0.082467446 container cleanup ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b556e8b7-35b7-4923-b2c9-552415bb1dda, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:11:43 np0005546420.localdomain systemd[1]: libpod-conmon-ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c.scope: Deactivated successfully.
Dec 05 10:11:43 np0005546420.localdomain podman[319285]: 2025-12-05 10:11:43.40121144 +0000 UTC m=+0.129321214 container remove ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b556e8b7-35b7-4923-b2c9-552415bb1dda, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:11:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:43.415 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:43 np0005546420.localdomain kernel: device tap5cce7e57-65 left promiscuous mode
Dec 05 10:11:43 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:43Z|00251|binding|INFO|Releasing lport 5cce7e57-6563-4a1a-b84a-1378fbe7405c from this chassis (sb_readonly=0)
Dec 05 10:11:43 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:43Z|00252|binding|INFO|Setting lport 5cce7e57-6563-4a1a-b84a-1378fbe7405c down in Southbound
Dec 05 10:11:43 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:43.425 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-b556e8b7-35b7-4923-b2c9-552415bb1dda', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b556e8b7-35b7-4923-b2c9-552415bb1dda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dba761eb9482439aa79c2d9ffe5c0dfa', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa2ad848-f2e9-4900-9f38-972020b63aa9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5cce7e57-6563-4a1a-b84a-1378fbe7405c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:43 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:43.427 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5cce7e57-6563-4a1a-b84a-1378fbe7405c in datapath b556e8b7-35b7-4923-b2c9-552415bb1dda unbound from our chassis
Dec 05 10:11:43 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:43.429 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b556e8b7-35b7-4923-b2c9-552415bb1dda or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:11:43 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:43.430 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[6de9d396-cf60-4bf9-b501-87310988f6cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:11:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:43.435 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:43 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:43.495 262769 INFO neutron.agent.dhcp.agent [None req-9d9cf6f7-9165-4573-b381-f831fa700df6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:11:43 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:43.800 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:11:43 np0005546420.localdomain ceph-mon[298353]: pgmap v366: 177 pgs: 177 active+clean; 193 MiB data, 884 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Dec 05 10:11:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:44.125 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d13e6c6149839e5785901394f04786c50fc33b41abf00b53bc7eb4313108670f-merged.mount: Deactivated successfully.
Dec 05 10:11:44 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad0cb837e3074b0a4b7175da1ec7c90e88fd8725e66d3e9c3ff5d3fc0398d05c-userdata-shm.mount: Deactivated successfully.
Dec 05 10:11:44 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2db556e8b7\x2d35b7\x2d4923\x2db2c9\x2d552415bb1dda.mount: Deactivated successfully.
Dec 05 10:11:44 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:44.381 2 INFO neutron.agent.securitygroups_rpc [None req-ee3bec68-6531-4fdf-bfc7-09d2f8472e15 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:44 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:44.589 2 INFO neutron.agent.securitygroups_rpc [None req-35f96a62-bc9a-44a1-b1ac-958011ae1ee9 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:11:44 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:44.726 2 INFO neutron.agent.securitygroups_rpc [None req-4e3d1bd9-c627-4f4a-847b-61ac809c1b12 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:44 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b7c2d7d8-57b5-40a5-9e26-8515c67f1048", "format": "json"}]: dispatch
Dec 05 10:11:44 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b7c2d7d8-57b5-40a5-9e26-8515c67f1048", "force": true, "format": "json"}]: dispatch
Dec 05 10:11:44 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e156 e156: 6 total, 6 up, 6 in
Dec 05 10:11:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:11:45 np0005546420.localdomain podman[319313]: 2025-12-05 10:11:45.507271396 +0000 UTC m=+0.084797848 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:45 np0005546420.localdomain podman[319313]: 2025-12-05 10:11:45.60423066 +0000 UTC m=+0.181757092 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Dec 05 10:11:45 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:11:45 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:45.778 2 INFO neutron.agent.securitygroups_rpc [None req-ca5d9241-35ac-438d-91f3-a12934d4a96c 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:45 np0005546420.localdomain ceph-mon[298353]: osdmap e156: 6 total, 6 up, 6 in
Dec 05 10:11:45 np0005546420.localdomain ceph-mon[298353]: pgmap v368: 177 pgs: 177 active+clean; 193 MiB data, 884 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 1.5 MiB/s wr, 62 op/s
Dec 05 10:11:46 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:46.085 262769 INFO neutron.agent.linux.ip_lib [None req-ee818cd0-0e49-41bf-ae75-eb91a77197f6 - - - - - -] Device tapebe93665-20 cannot be used as it has no MAC address
Dec 05 10:11:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:46.115 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:46 np0005546420.localdomain kernel: device tapebe93665-20 entered promiscuous mode
Dec 05 10:11:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:46Z|00253|binding|INFO|Claiming lport ebe93665-2069-4c4c-8bc5-8728703bf144 for this chassis.
Dec 05 10:11:46 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929506.1230] manager: (tapebe93665-20): new Generic device (/org/freedesktop/NetworkManager/Devices/46)
Dec 05 10:11:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:46Z|00254|binding|INFO|ebe93665-2069-4c4c-8bc5-8728703bf144: Claiming unknown
Dec 05 10:11:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:46.123 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:46 np0005546420.localdomain systemd-udevd[319349]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:11:46 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:46.137 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-ea04d395-4bec-4f63-bd63-ab23162d2324', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea04d395-4bec-4f63-bd63-ab23162d2324', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69416a5f-f283-43d3-bb87-820ace18e12d, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=ebe93665-2069-4c4c-8bc5-8728703bf144) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:46 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:46.139 159503 INFO neutron.agent.ovn.metadata.agent [-] Port ebe93665-2069-4c4c-8bc5-8728703bf144 in datapath ea04d395-4bec-4f63-bd63-ab23162d2324 bound to our chassis
Dec 05 10:11:46 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:46.140 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ea04d395-4bec-4f63-bd63-ab23162d2324 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:11:46 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:46.141 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[1846674e-5bf1-47f5-8e5e-100bda1d1b15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:11:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:46Z|00255|binding|INFO|Setting lport ebe93665-2069-4c4c-8bc5-8728703bf144 ovn-installed in OVS
Dec 05 10:11:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:46Z|00256|binding|INFO|Setting lport ebe93665-2069-4c4c-8bc5-8728703bf144 up in Southbound
Dec 05 10:11:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapebe93665-20: No such device
Dec 05 10:11:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:46.154 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapebe93665-20: No such device
Dec 05 10:11:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapebe93665-20: No such device
Dec 05 10:11:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapebe93665-20: No such device
Dec 05 10:11:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapebe93665-20: No such device
Dec 05 10:11:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapebe93665-20: No such device
Dec 05 10:11:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapebe93665-20: No such device
Dec 05 10:11:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapebe93665-20: No such device
Dec 05 10:11:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:46.201 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:46.232 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:46.551 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:46 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:46.691 2 INFO neutron.agent.securitygroups_rpc [None req-2472a8ec-d91a-458d-b5a9-cca43a6faa0b 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:11:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:46.761 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:47 np0005546420.localdomain podman[319420]: 2025-12-05 10:11:47.097659495 +0000 UTC m=+0.074887483 container create 8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:11:47 np0005546420.localdomain systemd[1]: Started libpod-conmon-8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769.scope.
Dec 05 10:11:47 np0005546420.localdomain podman[319420]: 2025-12-05 10:11:47.056605437 +0000 UTC m=+0.033833445 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:11:47 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:11:47 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3de1191ea9ecf4fbb12d9b03c212ea65d8a6ab659ce79ea8fe810ebf9b867a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:47 np0005546420.localdomain podman[319420]: 2025-12-05 10:11:47.17524108 +0000 UTC m=+0.152469068 container init 8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:11:47 np0005546420.localdomain podman[319420]: 2025-12-05 10:11:47.183833455 +0000 UTC m=+0.161061433 container start 8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:11:47 np0005546420.localdomain dnsmasq[319439]: started, version 2.85 cachesize 150
Dec 05 10:11:47 np0005546420.localdomain dnsmasq[319439]: DNS service limited to local subnets
Dec 05 10:11:47 np0005546420.localdomain dnsmasq[319439]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:11:47 np0005546420.localdomain dnsmasq[319439]: warning: no upstream servers configured
Dec 05 10:11:47 np0005546420.localdomain dnsmasq-dhcp[319439]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Dec 05 10:11:47 np0005546420.localdomain dnsmasq[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/addn_hosts - 0 addresses
Dec 05 10:11:47 np0005546420.localdomain dnsmasq-dhcp[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/host
Dec 05 10:11:47 np0005546420.localdomain dnsmasq-dhcp[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/opts
Dec 05 10:11:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:11:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:11:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:11:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158551 "" "Go-http-client/1.1"
Dec 05 10:11:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:11:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19686 "" "Go-http-client/1.1"
Dec 05 10:11:48 np0005546420.localdomain ceph-mon[298353]: pgmap v369: 177 pgs: 177 active+clean; 193 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 1.4 MiB/s wr, 104 op/s
Dec 05 10:11:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:48.797 262769 INFO neutron.agent.dhcp.agent [None req-018c2442-4e18-4845-b8ee-79f6ce80a6c0 - - - - - -] DHCP configuration for ports {'595e0ba0-0fc2-41e0-9733-fa091d05028d'} is completed
Dec 05 10:11:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:11:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:11:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:11:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:11:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:11:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:11:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:11:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:11:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:11:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
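The appctl.go errors above mean the exporter could not locate the daemons' control sockets: ovs-appctl-style management connects through <rundir>/<daemon>.<pid>.ctl, and ovn-northd in particular would not normally run on a compute node, so that pair of failures is likely benign scrape noise. A quick way to see which sockets are actually present in the directories the exporter mounts:

    import glob

    # Control sockets are conventionally named <daemon>.<pid>.ctl.
    for rundir in ("/var/run/openvswitch", "/run/ovn"):
        print(rundir, glob.glob(f"{rundir}/*.ctl"))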
Dec 05 10:11:49 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:49.356 2 INFO neutron.agent.securitygroups_rpc [None req-f87f428a-cfa8-4117-93ac-ddf0a6843c9e 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:49 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/967009489' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:11:49 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/967009489' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
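The `df` and `osd pool get-quota` dispatches above are ordinary mon commands from the Cinder/Glance Ceph driver. A sketch reproducing them with the python-rados binding, assuming a reachable cluster and a keyring for the client.openstack identity seen in the log:

```python
# Sketch: replay the two mon commands via python-rados; mon_command()
# takes a JSON command buffer shaped like the ones in the journal.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                      name="client.openstack")   # assumed auth name
cluster.connect()

for cmd in ({"prefix": "df", "format": "json"},
            {"prefix": "osd pool get-quota", "pool": "volumes",
             "format": "json"}):
    ret, out, err = cluster.mon_command(json.dumps(cmd), b"")
    print(cmd["prefix"], "->", ret, out[:80])

cluster.shutdown()
```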
Dec 05 10:11:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:11:49 np0005546420.localdomain podman[319440]: 2025-12-05 10:11:49.509317437 +0000 UTC m=+0.084941175 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:49 np0005546420.localdomain podman[319440]: 2025-12-05 10:11:49.522481432 +0000 UTC m=+0.098105240 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 10:11:49 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
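The unit that just deactivated is the transient `podman healthcheck run` systemd started three lines up; the health_status=healthy label on the container event is the result it recorded. The same check by hand, using the container ID from the log:

```python
# Sketch: run the container's defined healthcheck once; exit code 0
# means healthy, non-zero means the check failed.
import subprocess

CID = "94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110"
res = subprocess.run(["podman", "healthcheck", "run", CID])
print("healthy" if res.returncode == 0 else f"unhealthy (rc={res.returncode})")
```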
Dec 05 10:11:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:50 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:50.150 2 INFO neutron.agent.securitygroups_rpc [None req-10ca296f-d221-4980-916d-de770cb006e3 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:50 np0005546420.localdomain ceph-mon[298353]: pgmap v370: 177 pgs: 177 active+clean; 193 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 1.2 MiB/s wr, 92 op/s
Dec 05 10:11:50 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4199753165' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:11:50 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4199753165' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:11:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:51.553 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:51.764 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:11:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "format": "json"}]: dispatch
Dec 05 10:11:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
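The subvolume create/getpath pair above is Manila provisioning a CephFS share. Both mon requests have direct CLI equivalents; a sketch with the arguments copied from the JSON payloads:

```python
# Sketch: CLI equivalents of the "fs subvolume create" / "fs subvolume
# getpath" dispatches, run through the ceph binary.
import subprocess

VOL, SUB = "cephfs", "9036423c-a4fb-4bd9-97cc-8e58d185d4d0"
subprocess.run(["ceph", "fs", "subvolume", "create", VOL, SUB,
                "--size", "1073741824", "--namespace-isolated",
                "--mode", "0755"], check=True)
path = subprocess.run(["ceph", "fs", "subvolume", "getpath", VOL, SUB],
                      capture_output=True, text=True, check=True).stdout.strip()
print("subvolume path:", path)
```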
Dec 05 10:11:51 np0005546420.localdomain ceph-mon[298353]: pgmap v371: 177 pgs: 177 active+clean; 193 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 20 KiB/s wr, 76 op/s
Dec 05 10:11:52 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:52.061 2 INFO neutron.agent.securitygroups_rpc [None req-c97b0d8d-542f-470c-a7c0-bdee005052ed 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:11:52 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:52.481 2 INFO neutron.agent.securitygroups_rpc [None req-ecdd3e4a-f10e-40b7-adec-da44faf1234f 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:52 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:52.821 2 INFO neutron.agent.securitygroups_rpc [None req-1ee97cf8-7398-44f0-bcbd-01950c106314 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:11:53 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:53.585 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:53Z, description=, device_id=9d5b5baf-fbfa-4251-a34d-ea34d3d785c1, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e6d1f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e6d970>], id=47045052-9bcb-4123-9f0d-101747643206, ip_allocation=immediate, mac_address=fa:16:3e:67:e5:7d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:11:42Z, description=, dns_domain=, id=ea04d395-4bec-4f63-bd63-ab23162d2324, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-530960671, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44901, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2294, status=ACTIVE, subnets=['c2d9666d-4ef8-4221-9137-7aaf5bc05d57'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:11:44Z, vlan_transparent=None, network_id=ea04d395-4bec-4f63-bd63-ab23162d2324, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2335, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:11:53Z on network ea04d395-4bec-4f63-bd63-ab23162d2324
Dec 05 10:11:53 np0005546420.localdomain dnsmasq[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/addn_hosts - 1 addresses
Dec 05 10:11:53 np0005546420.localdomain podman[319478]: 2025-12-05 10:11:53.792150864 +0000 UTC m=+0.059889761 container kill 8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:11:53 np0005546420.localdomain dnsmasq-dhcp[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/host
Dec 05 10:11:53 np0005546420.localdomain dnsmasq-dhcp[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/opts
Dec 05 10:11:54 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:54.075 262769 INFO neutron.agent.dhcp.agent [None req-02f92adc-977e-441e-951c-27e4ff1e26ed - - - - - -] DHCP configuration for ports {'47045052-9bcb-4123-9f0d-101747643206'} is completed
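The "container kill" event in the cycle above is not a stop: the DHCP agent signals the dnsmasq container with SIGHUP, dnsmasq rereads its addn_hosts/host/opts files, and the agent then logs the port as completed. The manual equivalent, with the container name from the log:

```python
# Sketch: SIGHUP the per-network dnsmasq container so it rereads its
# host/opts files (dnsmasq's documented reload behavior).
import subprocess

NAME = "neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324"
subprocess.run(["podman", "kill", "--signal", "HUP", NAME], check=True)
```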
Dec 05 10:11:54 np0005546420.localdomain ceph-mon[298353]: pgmap v372: 177 pgs: 177 active+clean; 193 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 20 KiB/s wr, 72 op/s
Dec 05 10:11:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:11:55 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:11:55 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "format": "json"}]: dispatch
Dec 05 10:11:55 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:11:56 np0005546420.localdomain ceph-mon[298353]: pgmap v373: 177 pgs: 177 active+clean; 193 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 19 KiB/s wr, 69 op/s
Dec 05 10:11:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:56.555 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:56.767 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:11:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:11:57 np0005546420.localdomain systemd[1]: tmp-crun.jgLwiU.mount: Deactivated successfully.
Dec 05 10:11:57 np0005546420.localdomain podman[319500]: 2025-12-05 10:11:57.519788781 +0000 UTC m=+0.092722553 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:11:57 np0005546420.localdomain podman[319500]: 2025-12-05 10:11:57.555095732 +0000 UTC m=+0.128029464 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:11:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:57.557 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:53Z, description=, device_id=9d5b5baf-fbfa-4251-a34d-ea34d3d785c1, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e8fac0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e8f100>], id=47045052-9bcb-4123-9f0d-101747643206, ip_allocation=immediate, mac_address=fa:16:3e:67:e5:7d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:11:42Z, description=, dns_domain=, id=ea04d395-4bec-4f63-bd63-ab23162d2324, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-530960671, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44901, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2294, status=ACTIVE, subnets=['c2d9666d-4ef8-4221-9137-7aaf5bc05d57'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:11:44Z, vlan_transparent=None, network_id=ea04d395-4bec-4f63-bd63-ab23162d2324, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2335, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:11:53Z on network ea04d395-4bec-4f63-bd63-ab23162d2324
Dec 05 10:11:57 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:11:57 np0005546420.localdomain podman[319501]: 2025-12-05 10:11:57.573121818 +0000 UTC m=+0.143178221 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:57 np0005546420.localdomain podman[319501]: 2025-12-05 10:11:57.608552602 +0000 UTC m=+0.178609005 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 10:11:57 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:11:57 np0005546420.localdomain dnsmasq[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/addn_hosts - 1 addresses
Dec 05 10:11:57 np0005546420.localdomain podman[319560]: 2025-12-05 10:11:57.74550067 +0000 UTC m=+0.055152354 container kill 8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:11:57 np0005546420.localdomain dnsmasq-dhcp[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/host
Dec 05 10:11:57 np0005546420.localdomain dnsmasq-dhcp[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/opts
Dec 05 10:11:58 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:58.046 262769 INFO neutron.agent.dhcp.agent [None req-cfcd249e-1467-4395-b5a2-525c60beffc8 - - - - - -] DHCP configuration for ports {'47045052-9bcb-4123-9f0d-101747643206'} is completed
Dec 05 10:11:58 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:58.087 262769 INFO neutron.agent.linux.ip_lib [None req-40e2d2c1-31f4-4c4c-92c9-966d71a797af - - - - - -] Device tap32445689-65 cannot be used as it has no MAC address
Dec 05 10:11:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:58.113 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:58 np0005546420.localdomain kernel: device tap32445689-65 entered promiscuous mode
Dec 05 10:11:58 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:58Z|00257|binding|INFO|Claiming lport 32445689-65d6-448e-9132-10d4d7f21b0d for this chassis.
Dec 05 10:11:58 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:58Z|00258|binding|INFO|32445689-65d6-448e-9132-10d4d7f21b0d: Claiming unknown
Dec 05 10:11:58 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929518.1235] manager: (tap32445689-65): new Generic device (/org/freedesktop/NetworkManager/Devices/47)
Dec 05 10:11:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:58.124 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:58 np0005546420.localdomain systemd-udevd[319590]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:11:58 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:58.138 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-fe3f8827-7b14-4763-9518-1eb6dc5f6704', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe3f8827-7b14-4763-9518-1eb6dc5f6704', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dba761eb9482439aa79c2d9ffe5c0dfa', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e712708-ede8-40e9-8bc0-ab9138f51b63, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=32445689-65d6-448e-9132-10d4d7f21b0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:58 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:58.140 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 32445689-65d6-448e-9132-10d4d7f21b0d in datapath fe3f8827-7b14-4763-9518-1eb6dc5f6704 bound to our chassis
Dec 05 10:11:58 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:58.142 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fe3f8827-7b14-4763-9518-1eb6dc5f6704 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:11:58 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:58.143 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[d0141088-2d02-45f2-bb03-d964c424f677]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
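The "Matched UPDATE" line above shows ovsdbapp's row-event machinery: the agent registers an event class keyed on (events, table, conditions) and its run() fires on matching Port_Binding updates. A stripped-down sketch of such a class, mirroring the constructor arguments printed in the log; the chassis test is a simplified assumption, not Neutron's exact logic:

```python
# Sketch: a minimal ovsdbapp row event in the style of the agent's
# PortBindingUpdatedEvent.
from ovsdbapp.backend.ovs_idl import event as row_event


class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # (events, table, conditions) as shown in the matched-event repr.
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def match_fn(self, event, row, old):
        # Fire only when the port has just been bound to a chassis;
        # `old` carries only the columns that changed.
        return bool(row.chassis) and not getattr(old, "chassis", None)

    def run(self, event, row, old):
        print(f"port {row.logical_port} bound; provision metadata here")
```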
Dec 05 10:11:58 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap32445689-65: No such device
Dec 05 10:11:58 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:58Z|00259|binding|INFO|Setting lport 32445689-65d6-448e-9132-10d4d7f21b0d ovn-installed in OVS
Dec 05 10:11:58 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:58Z|00260|binding|INFO|Setting lport 32445689-65d6-448e-9132-10d4d7f21b0d up in Southbound
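Claiming, setting ovn-installed, and setting the lport up in Southbound are the three binding steps ovn-controller logs for the new tap port. A sketch for verifying the resulting Port_Binding row, assuming local access to the southbound database:

```python
# Sketch: query the southbound Port_Binding row for the lport from the log.
import subprocess

LPORT = "32445689-65d6-448e-9132-10d4d7f21b0d"
out = subprocess.run(
    ["ovn-sbctl", "--columns=logical_port,chassis,up", "find",
     "Port_Binding", f"logical_port={LPORT}"],
    capture_output=True, text=True, check=True).stdout
print(out)
```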
Dec 05 10:11:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:58.162 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:58 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap32445689-65: No such device
Dec 05 10:11:58 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap32445689-65: No such device
Dec 05 10:11:58 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap32445689-65: No such device
Dec 05 10:11:58 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap32445689-65: No such device
Dec 05 10:11:58 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap32445689-65: No such device
Dec 05 10:11:58 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap32445689-65: No such device
Dec 05 10:11:58 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap32445689-65: No such device
Dec 05 10:11:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:58.202 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:58.231 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:58 np0005546420.localdomain ceph-mon[298353]: pgmap v374: 177 pgs: 177 active+clean; 193 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 23 KiB/s wr, 61 op/s
Dec 05 10:11:58 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 05 10:11:58 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/ca912719-4ded-4fbc-bc88-8b50dbf8a797", "osd", "allow rw pool=manila_data namespace=fsvolumens_f85fdc57-8808-499d-89b5-dab3ea53a537", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:11:58 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/ca912719-4ded-4fbc-bc88-8b50dbf8a797", "osd", "allow rw pool=manila_data namespace=fsvolumens_f85fdc57-8808-499d-89b5-dab3ea53a537", "mon", "allow r"], "format": "json"}]': finished
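The mgr's auth get-or-create above grants client.Joe caps scoped to a single subvolume path and its OSD namespace. As a plain ceph CLI call with the caps copied from the log (keyring handling omitted):

```python
# Sketch: the same scoped auth grant issued through the ceph binary.
import subprocess

caps = [
    "mds", ("allow rw path=/volumes/_nogroup/"
            "f85fdc57-8808-499d-89b5-dab3ea53a537/"
            "ca912719-4ded-4fbc-bc88-8b50dbf8a797"),
    "osd", ("allow rw pool=manila_data "
            "namespace=fsvolumens_f85fdc57-8808-499d-89b5-dab3ea53a537"),
    "mon", "allow r",
]
subprocess.run(["ceph", "auth", "get-or-create", "client.Joe", *caps],
               check=True)
```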
Dec 05 10:11:58 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:58.740 2 INFO neutron.agent.securitygroups_rpc [None req-0878b731-cddc-4396-af6e-f1551053cbd3 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:11:59 np0005546420.localdomain podman[319661]: 2025-12-05 10:11:59.141563158 +0000 UTC m=+0.125877547 container create bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:11:59 np0005546420.localdomain systemd[1]: Started libpod-conmon-bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc.scope.
Dec 05 10:11:59 np0005546420.localdomain podman[319661]: 2025-12-05 10:11:59.092053499 +0000 UTC m=+0.076367888 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:11:59 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:11:59 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20f1c2e2d511a3f7e950dff94943e279f1d5d4176fb5d4a83cbceb3c85467b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:11:59 np0005546420.localdomain podman[319661]: 2025-12-05 10:11:59.216172002 +0000 UTC m=+0.200486361 container init bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 05 10:11:59 np0005546420.localdomain podman[319661]: 2025-12-05 10:11:59.224268242 +0000 UTC m=+0.208582611 container start bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:11:59 np0005546420.localdomain dnsmasq[319679]: started, version 2.85 cachesize 150
Dec 05 10:11:59 np0005546420.localdomain dnsmasq[319679]: DNS service limited to local subnets
Dec 05 10:11:59 np0005546420.localdomain dnsmasq[319679]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:11:59 np0005546420.localdomain dnsmasq[319679]: warning: no upstream servers configured
Dec 05 10:11:59 np0005546420.localdomain dnsmasq-dhcp[319679]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:11:59 np0005546420.localdomain dnsmasq[319679]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/addn_hosts - 0 addresses
Dec 05 10:11:59 np0005546420.localdomain dnsmasq-dhcp[319679]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/host
Dec 05 10:11:59 np0005546420.localdomain dnsmasq-dhcp[319679]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/opts
Dec 05 10:11:59 np0005546420.localdomain podman[319697]: 2025-12-05 10:11:59.414953549 +0000 UTC m=+0.061472839 container kill 8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:59 np0005546420.localdomain dnsmasq[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/addn_hosts - 0 addresses
Dec 05 10:11:59 np0005546420.localdomain dnsmasq-dhcp[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/host
Dec 05 10:11:59 np0005546420.localdomain dnsmasq-dhcp[319439]: read /var/lib/neutron/dhcp/ea04d395-4bec-4f63-bd63-ab23162d2324/opts
Dec 05 10:11:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:11:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "auth_id": "Joe", "tenant_id": "f4c34f38ddb048808ef72391bdda40b5", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:11:59 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:11:59.510 262769 INFO neutron.agent.dhcp.agent [None req-1bf14375-2799-45b4-b669-d6eca3efc152 - - - - - -] DHCP configuration for ports {'06e4ccca-56d1-4e65-80d7-53008bd83223'} is completed
Dec 05 10:11:59 np0005546420.localdomain podman[319710]: 2025-12-05 10:11:59.515079859 +0000 UTC m=+0.088349868 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:59 np0005546420.localdomain podman[319710]: 2025-12-05 10:11:59.5539944 +0000 UTC m=+0.127264369 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:11:59 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:11:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:59.617 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:59 np0005546420.localdomain kernel: device tapebe93665-20 left promiscuous mode
Dec 05 10:11:59 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:59Z|00261|binding|INFO|Releasing lport ebe93665-2069-4c4c-8bc5-8728703bf144 from this chassis (sb_readonly=0)
Dec 05 10:11:59 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:11:59Z|00262|binding|INFO|Setting lport ebe93665-2069-4c4c-8bc5-8728703bf144 down in Southbound
Dec 05 10:11:59 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:59.627 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-ea04d395-4bec-4f63-bd63-ab23162d2324', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea04d395-4bec-4f63-bd63-ab23162d2324', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69416a5f-f283-43d3-bb87-820ace18e12d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=ebe93665-2069-4c4c-8bc5-8728703bf144) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:11:59 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:59.629 159503 INFO neutron.agent.ovn.metadata.agent [-] Port ebe93665-2069-4c4c-8bc5-8728703bf144 in datapath ea04d395-4bec-4f63-bd63-ab23162d2324 unbound from our chassis
Dec 05 10:11:59 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:59.630 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ea04d395-4bec-4f63-bd63-ab23162d2324 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:11:59 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:11:59.631 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[cea836a8-8538-42fa-941f-583a1e6c3b8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
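With the DHCP port unbound, the metadata agent considers tearing down that network's namespace. A quick existence check; the ovnmeta-<network_id> naming is the agent's usual convention but is an assumption worth verifying on your release:

```python
# Sketch: check whether the OVN metadata namespace for the network still exists.
import subprocess

NET = "ea04d395-4bec-4f63-bd63-ab23162d2324"
ns = subprocess.run(["ip", "netns", "list"],
                    capture_output=True, text=True).stdout
print("namespace present" if f"ovnmeta-{NET}" in ns else "namespace gone")
```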
Dec 05 10:11:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:11:59.644 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:11:59 np0005546420.localdomain dnsmasq[319679]: exiting on receipt of SIGTERM
Dec 05 10:11:59 np0005546420.localdomain podman[319756]: 2025-12-05 10:11:59.730075236 +0000 UTC m=+0.058785776 container kill bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:11:59 np0005546420.localdomain systemd[1]: libpod-bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc.scope: Deactivated successfully.
Dec 05 10:11:59 np0005546420.localdomain podman[319769]: 2025-12-05 10:11:59.799381996 +0000 UTC m=+0.054296427 container died bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:11:59 np0005546420.localdomain systemd[1]: tmp-crun.eDTccU.mount: Deactivated successfully.
Dec 05 10:11:59 np0005546420.localdomain podman[319769]: 2025-12-05 10:11:59.845766638 +0000 UTC m=+0.100681039 container cleanup bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:11:59 np0005546420.localdomain systemd[1]: libpod-conmon-bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc.scope: Deactivated successfully.
Dec 05 10:11:59 np0005546420.localdomain podman[319771]: 2025-12-05 10:11:59.868013574 +0000 UTC m=+0.114589057 container remove bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 10:11:59 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:11:59.947 2 INFO neutron.agent.securitygroups_rpc [None req-c90784b4-3fe0-420c-9674-eb8afca62c54 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:12:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:00 np0005546420.localdomain ceph-mon[298353]: pgmap v375: 177 pgs: 177 active+clean; 193 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 14 KiB/s wr, 25 op/s
Dec 05 10:12:00 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2570026049' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:00 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2570026049' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b20f1c2e2d511a3f7e950dff94943e279f1d5d4176fb5d4a83cbceb3c85467b6-merged.mount: Deactivated successfully.
Dec 05 10:12:00 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfd90fb3c8e64220cade788373291a0f428e4e9ee59a5e30b9a394456e780bbc-userdata-shm.mount: Deactivated successfully.
Dec 05 10:12:00 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:00.671 2 INFO neutron.agent.securitygroups_rpc [None req-73dd46a2-4352-469e-9b13-904ee05be5c1 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:12:01 np0005546420.localdomain podman[319850]: 2025-12-05 10:12:01.282228784 +0000 UTC m=+0.070802437 container create c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:12:01 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:01.306 2 INFO neutron.agent.securitygroups_rpc [None req-a333ab83-84ff-4adc-85ac-6248a883a1ec 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:12:01 np0005546420.localdomain systemd[1]: Started libpod-conmon-c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669.scope.
Dec 05 10:12:01 np0005546420.localdomain systemd[1]: tmp-crun.aEvjl5.mount: Deactivated successfully.
Dec 05 10:12:01 np0005546420.localdomain podman[319850]: 2025-12-05 10:12:01.254446866 +0000 UTC m=+0.043020499 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:12:01 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:12:01 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc068c420be1e2396c86571a9c3389e11c96a69396da0ee09192937494ed6a51/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:12:01 np0005546420.localdomain podman[319850]: 2025-12-05 10:12:01.377763613 +0000 UTC m=+0.166337326 container init c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:12:01 np0005546420.localdomain podman[319850]: 2025-12-05 10:12:01.387819234 +0000 UTC m=+0.176392907 container start c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:12:01 np0005546420.localdomain dnsmasq[319869]: started, version 2.85 cachesize 150
Dec 05 10:12:01 np0005546420.localdomain dnsmasq[319869]: DNS service limited to local subnets
Dec 05 10:12:01 np0005546420.localdomain dnsmasq[319869]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:12:01 np0005546420.localdomain dnsmasq[319869]: warning: no upstream servers configured
Dec 05 10:12:01 np0005546420.localdomain dnsmasq-dhcp[319869]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:12:01 np0005546420.localdomain dnsmasq-dhcp[319869]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Dec 05 10:12:01 np0005546420.localdomain dnsmasq[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/addn_hosts - 2 addresses
Dec 05 10:12:01 np0005546420.localdomain dnsmasq-dhcp[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/host
Dec 05 10:12:01 np0005546420.localdomain dnsmasq-dhcp[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/opts
Dec 05 10:12:01 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:01.487 262769 INFO neutron.agent.dhcp.agent [None req-c7ef480e-7d11-4731-b6f6-45544c1e6716 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:58Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e325b0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e32340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e32be0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e32850>], id=19c8457e-b7fe-4e1f-8db4-6a58c99f7dad, ip_allocation=immediate, mac_address=fa:16:3e:e1:8a:9f, name=tempest-PortsIpV6TestJSON-606128523, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:11:52Z, description=, dns_domain=, id=fe3f8827-7b14-4763-9518-1eb6dc5f6704, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1468358048, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53302, qos_policy_id=None, revision_number=3, router:external=False, shared=False, standard_attr_id=2333, status=ACTIVE, subnets=['7bd922e3-7bd9-4682-b3ed-7ec352b8b600', 'de88f7a5-c950-47b7-90b3-0c46b33e96d9'], tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:11:57Z, vlan_transparent=None, network_id=fe3f8827-7b14-4763-9518-1eb6dc5f6704, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9c5d500d-a686-46a9-8ad0-737ee529f53d'], standard_attr_id=2355, status=DOWN, tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:11:58Z on network fe3f8827-7b14-4763-9518-1eb6dc5f6704
Dec 05 10:12:01 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:01.584 2 INFO neutron.agent.securitygroups_rpc [None req-aba445b9-58e1-4b57-95a7-f56c26bc6c4b 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:12:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:01.587 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:01 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:01.594 262769 INFO neutron.agent.dhcp.agent [None req-02172649-75a9-4199-87b0-eeaaad084230 - - - - - -] DHCP configuration for ports {'19c8457e-b7fe-4e1f-8db4-6a58c99f7dad', '06e4ccca-56d1-4e65-80d7-53008bd83223', '32445689-65d6-448e-9132-10d4d7f21b0d'} is completed
Dec 05 10:12:01 np0005546420.localdomain podman[319887]: 2025-12-05 10:12:01.694851763 +0000 UTC m=+0.070636013 container kill c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:12:01 np0005546420.localdomain dnsmasq[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/addn_hosts - 2 addresses
Dec 05 10:12:01 np0005546420.localdomain dnsmasq-dhcp[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/host
Dec 05 10:12:01 np0005546420.localdomain dnsmasq-dhcp[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/opts
Dec 05 10:12:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:01.770 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:12:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "format": "json"}]: dispatch
Dec 05 10:12:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:01 np0005546420.localdomain ceph-mon[298353]: pgmap v376: 177 pgs: 177 active+clean; 193 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 29 KiB/s wr, 31 op/s
Dec 05 10:12:01 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:01.855 262769 INFO neutron.agent.dhcp.agent [None req-c7ef480e-7d11-4731-b6f6-45544c1e6716 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:58Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e8fa60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e8f190>], id=19c8457e-b7fe-4e1f-8db4-6a58c99f7dad, ip_allocation=immediate, mac_address=fa:16:3e:e1:8a:9f, name=tempest-PortsIpV6TestJSON-606128523, network_id=fe3f8827-7b14-4763-9518-1eb6dc5f6704, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['9c5d500d-a686-46a9-8ad0-737ee529f53d'], standard_attr_id=2355, status=DOWN, tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:11:59Z on network fe3f8827-7b14-4763-9518-1eb6dc5f6704
Dec 05 10:12:01 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:01.921 262769 INFO neutron.agent.dhcp.agent [None req-3bd305ec-bbae-4b72-a194-7d54b2218417 - - - - - -] DHCP configuration for ports {'19c8457e-b7fe-4e1f-8db4-6a58c99f7dad'} is completed
Dec 05 10:12:02 np0005546420.localdomain dnsmasq[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/addn_hosts - 1 addresses
Dec 05 10:12:02 np0005546420.localdomain dnsmasq-dhcp[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/host
Dec 05 10:12:02 np0005546420.localdomain dnsmasq-dhcp[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/opts
Dec 05 10:12:02 np0005546420.localdomain podman[319926]: 2025-12-05 10:12:02.061048467 +0000 UTC m=+0.063855633 container kill c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:12:02 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:02.161 2 INFO neutron.agent.securitygroups_rpc [None req-1b9b59bb-fd0c-4e00-8bab-7b331e80efc1 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:12:02 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:02.229 262769 INFO neutron.agent.dhcp.agent [None req-c7ef480e-7d11-4731-b6f6-45544c1e6716 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:58Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a03ad60>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a03a6a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e4c730>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e4c760>], id=19c8457e-b7fe-4e1f-8db4-6a58c99f7dad, ip_allocation=immediate, mac_address=fa:16:3e:e1:8a:9f, name=tempest-PortsIpV6TestJSON-606128523, network_id=fe3f8827-7b14-4763-9518-1eb6dc5f6704, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['9c5d500d-a686-46a9-8ad0-737ee529f53d'], standard_attr_id=2355, status=DOWN, tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:12:00Z on network fe3f8827-7b14-4763-9518-1eb6dc5f6704
Dec 05 10:12:02 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:02.420 262769 INFO neutron.agent.dhcp.agent [None req-0f3a62be-258a-4e82-b496-d002d8804fb5 - - - - - -] DHCP configuration for ports {'19c8457e-b7fe-4e1f-8db4-6a58c99f7dad'} is completed
Dec 05 10:12:02 np0005546420.localdomain podman[319965]: 2025-12-05 10:12:02.433860766 +0000 UTC m=+0.064511652 container kill c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:12:02 np0005546420.localdomain dnsmasq[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/addn_hosts - 2 addresses
Dec 05 10:12:02 np0005546420.localdomain dnsmasq-dhcp[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/host
Dec 05 10:12:02 np0005546420.localdomain dnsmasq-dhcp[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/opts
Dec 05 10:12:02 np0005546420.localdomain dnsmasq[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/addn_hosts - 0 addresses
Dec 05 10:12:02 np0005546420.localdomain dnsmasq-dhcp[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/host
Dec 05 10:12:02 np0005546420.localdomain dnsmasq-dhcp[319869]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/opts
Dec 05 10:12:02 np0005546420.localdomain podman[320003]: 2025-12-05 10:12:02.750616775 +0000 UTC m=+0.057940959 container kill c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:12:03 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:03.131 262769 INFO neutron.agent.dhcp.agent [None req-8864b8d9-aa6b-4c46-b2f6-7b7332b6b0af - - - - - -] DHCP configuration for ports {'19c8457e-b7fe-4e1f-8db4-6a58c99f7dad'} is completed
Dec 05 10:12:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/950483569' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/950483569' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:04.130 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:12:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:04.131 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:12:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:04.131 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:12:04 np0005546420.localdomain systemd[1]: tmp-crun.CRQa6e.mount: Deactivated successfully.
Dec 05 10:12:04 np0005546420.localdomain dnsmasq[319869]: exiting on receipt of SIGTERM
Dec 05 10:12:04 np0005546420.localdomain podman[320042]: 2025-12-05 10:12:04.301638318 +0000 UTC m=+0.078120063 container kill c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:12:04 np0005546420.localdomain systemd[1]: libpod-c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669.scope: Deactivated successfully.
Dec 05 10:12:04 np0005546420.localdomain podman[320069]: 2025-12-05 10:12:04.365003514 +0000 UTC m=+0.053574575 container died c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:12:04 np0005546420.localdomain ceph-mon[298353]: pgmap v377: 177 pgs: 177 active+clean; 193 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 21 KiB/s wr, 17 op/s
Dec 05 10:12:04 np0005546420.localdomain podman[320069]: 2025-12-05 10:12:04.452875827 +0000 UTC m=+0.141446818 container cleanup c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:12:04 np0005546420.localdomain systemd[1]: libpod-conmon-c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669.scope: Deactivated successfully.
Dec 05 10:12:04 np0005546420.localdomain podman[320076]: 2025-12-05 10:12:04.477542078 +0000 UTC m=+0.153197131 container remove c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:12:04 np0005546420.localdomain dnsmasq[319439]: exiting on receipt of SIGTERM
Dec 05 10:12:04 np0005546420.localdomain podman[320089]: 2025-12-05 10:12:04.503938153 +0000 UTC m=+0.156988328 container kill 8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:12:04 np0005546420.localdomain systemd[1]: libpod-8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769.scope: Deactivated successfully.
Dec 05 10:12:04 np0005546420.localdomain podman[320117]: 2025-12-05 10:12:04.587759911 +0000 UTC m=+0.066412681 container died 8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:12:04 np0005546420.localdomain podman[320117]: 2025-12-05 10:12:04.618948123 +0000 UTC m=+0.097600853 container cleanup 8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:12:04 np0005546420.localdomain systemd[1]: libpod-conmon-8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769.scope: Deactivated successfully.
Dec 05 10:12:04 np0005546420.localdomain podman[320118]: 2025-12-05 10:12:04.668485903 +0000 UTC m=+0.139796667 container remove 8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea04d395-4bec-4f63-bd63-ab23162d2324, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:12:04 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:04Z|00263|binding|INFO|Removing iface tap32445689-65 ovn-installed in OVS
Dec 05 10:12:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:04.858 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 13fed192-8447-4bf4-9df2-342e50733ce8 with type ""
Dec 05 10:12:04 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:04Z|00264|binding|INFO|Removing lport 32445689-65d6-448e-9132-10d4d7f21b0d ovn-installed in OVS
Dec 05 10:12:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:04.859 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-fe3f8827-7b14-4763-9518-1eb6dc5f6704', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe3f8827-7b14-4763-9518-1eb6dc5f6704', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dba761eb9482439aa79c2d9ffe5c0dfa', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e712708-ede8-40e9-8bc0-ab9138f51b63, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=32445689-65d6-448e-9132-10d4d7f21b0d) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:04.860 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:04.862 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 32445689-65d6-448e-9132-10d4d7f21b0d in datapath fe3f8827-7b14-4763-9518-1eb6dc5f6704 unbound from our chassis
Dec 05 10:12:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:04.863 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fe3f8827-7b14-4763-9518-1eb6dc5f6704 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:04.865 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[3987fc03-8815-4e76-b134-6aa0b9161544]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:04.867 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:05 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:05.016 262769 INFO neutron.agent.dhcp.agent [None req-6198699c-0893-4a2d-b1bb-b5ff981150e4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:05 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:05.059 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:05 np0005546420.localdomain sudo[320178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:12:05 np0005546420.localdomain sudo[320178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:12:05 np0005546420.localdomain sudo[320178]: pam_unix(sudo:session): session closed for user root
Dec 05 10:12:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-bc068c420be1e2396c86571a9c3389e11c96a69396da0ee09192937494ed6a51-merged.mount: Deactivated successfully.
Dec 05 10:12:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c46d44f848dcab8c5b35ba22ea7d83a38a2b68d5ee97e43d61963f1cc88bc669-userdata-shm.mount: Deactivated successfully.
Dec 05 10:12:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d3de1191ea9ecf4fbb12d9b03c212ea65d8a6ab659ce79ea8fe810ebf9b867a0-merged.mount: Deactivated successfully.
Dec 05 10:12:05 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c5e336b59f43ec90eaaee7f71e2a60c29a989bb6b0be8f8d8baa7539ed4e769-userdata-shm.mount: Deactivated successfully.
Dec 05 10:12:05 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2dea04d395\x2d4bec\x2d4f63\x2dbd63\x2dab23162d2324.mount: Deactivated successfully.
Dec 05 10:12:05 np0005546420.localdomain sudo[320207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 10:12:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:05.392 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:05 np0005546420.localdomain sudo[320207]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:12:05 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:05.395 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "Joe", "tenant_id": "a1984fed702d4461879e97dd7c6fc401", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:12:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 05 10:12:05 np0005546420.localdomain podman[320218]: 2025-12-05 10:12:05.469620465 +0000 UTC m=+0.156924915 container create bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:12:05 np0005546420.localdomain systemd[1]: Started libpod-conmon-bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685.scope.
Dec 05 10:12:05 np0005546420.localdomain podman[320218]: 2025-12-05 10:12:05.422456609 +0000 UTC m=+0.109761089 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:12:05 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:12:05 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2025529bdff7b5e1afc577d27645ba114e149a99ab2fcfdb2ee00bbc011801a3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:12:05 np0005546420.localdomain podman[320218]: 2025-12-05 10:12:05.551054879 +0000 UTC m=+0.238359329 container init bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:12:05 np0005546420.localdomain podman[320218]: 2025-12-05 10:12:05.560379877 +0000 UTC m=+0.247684327 container start bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:12:05 np0005546420.localdomain dnsmasq[320248]: started, version 2.85 cachesize 150
Dec 05 10:12:05 np0005546420.localdomain dnsmasq[320248]: DNS service limited to local subnets
Dec 05 10:12:05 np0005546420.localdomain dnsmasq[320248]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:12:05 np0005546420.localdomain dnsmasq[320248]: warning: no upstream servers configured
Dec 05 10:12:05 np0005546420.localdomain dnsmasq-dhcp[320248]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:12:05 np0005546420.localdomain dnsmasq[320248]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/addn_hosts - 0 addresses
Dec 05 10:12:05 np0005546420.localdomain dnsmasq-dhcp[320248]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/host
Dec 05 10:12:05 np0005546420.localdomain dnsmasq-dhcp[320248]: read /var/lib/neutron/dhcp/fe3f8827-7b14-4763-9518-1eb6dc5f6704/opts
Dec 05 10:12:05 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:05.666 262769 INFO neutron.agent.dhcp.agent [None req-f9c4baec-8875-41af-9039-24e11807f0a3 - - - - - -] DHCP configuration for ports {'06e4ccca-56d1-4e65-80d7-53008bd83223', '32445689-65d6-448e-9132-10d4d7f21b0d'} is completed
Dec 05 10:12:05 np0005546420.localdomain dnsmasq[320248]: exiting on receipt of SIGTERM
Dec 05 10:12:05 np0005546420.localdomain podman[320278]: 2025-12-05 10:12:05.818310449 +0000 UTC m=+0.077000017 container kill bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:12:05 np0005546420.localdomain systemd[1]: libpod-bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685.scope: Deactivated successfully.
Dec 05 10:12:05 np0005546420.localdomain podman[320286]: 2025-12-05 10:12:05.898417383 +0000 UTC m=+0.130019985 container died bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:12:06 np0005546420.localdomain podman[320302]: 2025-12-05 10:12:06.040631423 +0000 UTC m=+0.207067633 container cleanup bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:12:06 np0005546420.localdomain systemd[1]: libpod-conmon-bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685.scope: Deactivated successfully.
Dec 05 10:12:06 np0005546420.localdomain podman[320307]: 2025-12-05 10:12:06.116549967 +0000 UTC m=+0.271692429 container remove bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe3f8827-7b14-4763-9518-1eb6dc5f6704, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:12:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:06.132 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:06 np0005546420.localdomain kernel: device tap32445689-65 left promiscuous mode
Dec 05 10:12:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:12:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.1 total, 600.0 interval
                                                          Cumulative writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s
                                                          Cumulative WAL: 12K writes, 3398 syncs, 3.54 writes per sync, written: 0.04 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 6182 writes, 19K keys, 6182 commit groups, 1.0 writes per commit group, ingest: 17.37 MB, 0.03 MB/s
                                                          Interval WAL: 6182 writes, 2614 syncs, 2.36 writes per sync, written: 0.02 GB, 0.03 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:12:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:06.157 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:06.188 262769 INFO neutron.agent.dhcp.agent [None req-efa3b5dd-925e-45e7-8a81-47665edc0f5b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:06.189 262769 INFO neutron.agent.dhcp.agent [None req-efa3b5dd-925e-45e7-8a81-47665edc0f5b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-2025529bdff7b5e1afc577d27645ba114e149a99ab2fcfdb2ee00bbc011801a3-merged.mount: Deactivated successfully.
Dec 05 10:12:06 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdd24c09eabb2b703e974fbf27102a8404589071981c420825c48fb9c03fa685-userdata-shm.mount: Deactivated successfully.
Dec 05 10:12:06 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2dfe3f8827\x2d7b14\x2d4763\x2d9518\x2d1eb6dc5f6704.mount: Deactivated successfully.
Dec 05 10:12:06 np0005546420.localdomain systemd[1]: tmp-crun.UFrxTq.mount: Deactivated successfully.
Dec 05 10:12:06 np0005546420.localdomain podman[320375]: 2025-12-05 10:12:06.324158646 +0000 UTC m=+0.087678448 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, version=7, name=rhceph, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:12:06 np0005546420.localdomain podman[320375]: 2025-12-05 10:12:06.431451088 +0000 UTC m=+0.194970900 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=1763362218, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., distribution-scope=public)
Dec 05 10:12:06 np0005546420.localdomain ceph-mon[298353]: pgmap v378: 177 pgs: 177 active+clean; 193 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 21 KiB/s wr, 17 op/s
Dec 05 10:12:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:06.635 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:06.772 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:07 np0005546420.localdomain sudo[320207]: pam_unix(sudo:session): session closed for user root
Dec 05 10:12:07 np0005546420.localdomain sudo[320496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:12:07 np0005546420.localdomain sudo[320496]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:12:07 np0005546420.localdomain sudo[320496]: pam_unix(sudo:session): session closed for user root
Dec 05 10:12:07 np0005546420.localdomain sudo[320514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:12:07 np0005546420.localdomain sudo[320514]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:12:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "de805df9-2757-4e33-8bfe-65fc6ef40510", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:12:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "de805df9-2757-4e33-8bfe-65fc6ef40510", "format": "json"}]: dispatch
Dec 05 10:12:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:12:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:12:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:12:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:12:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:12:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:12:07 np0005546420.localdomain sudo[320514]: pam_unix(sudo:session): session closed for user root
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e157 e157: 6 total, 6 up, 6 in
Dec 05 10:12:08 np0005546420.localdomain sudo[320564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:12:08 np0005546420.localdomain sudo[320564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:12:08 np0005546420.localdomain sudo[320564]: pam_unix(sudo:session): session closed for user root
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: pgmap v379: 177 pgs: 177 active+clean; 193 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 32 KiB/s wr, 19 op/s
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1227377066", "format": "json"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1227377066", "caps": ["mds", "allow rw path=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33", "osd", "allow rw pool=manila_data namespace=fsvolumens_7ec11635-5c27-465d-8a70-06bc2f1e99f2", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1227377066", "caps": ["mds", "allow rw path=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33", "osd", "allow rw pool=manila_data namespace=fsvolumens_7ec11635-5c27-465d-8a70-06bc2f1e99f2", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:12:08 np0005546420.localdomain ceph-mon[298353]: osdmap e157: 6 total, 6 up, 6 in
Dec 05 10:12:09 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:09.103 2 INFO neutron.agent.securitygroups_rpc [None req-3f77c6ad-acb0-48a9-aac0-03c96698ac64 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:12:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "tempest-cephx-id-1227377066", "tenant_id": "a1984fed702d4461879e97dd7c6fc401", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:12:09 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 05 10:12:09 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 05 10:12:09 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 10:12:09 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 10:12:09 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 05 10:12:09 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 10:12:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:12:10 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:10.087 262769 INFO neutron.agent.linux.ip_lib [None req-0dd95ccd-8e31-464c-bbe8-7796d7e4b411 - - - - - -] Device tap5734520e-04 cannot be used as it has no MAC address
Dec 05 10:12:10 np0005546420.localdomain systemd[1]: tmp-crun.uvOytR.mount: Deactivated successfully.
Dec 05 10:12:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:12:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:10.111 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:10 np0005546420.localdomain podman[320584]: 2025-12-05 10:12:10.118030688 +0000 UTC m=+0.103183775 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.expose-services=, distribution-scope=public, version=9.6, managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm)
Dec 05 10:12:10 np0005546420.localdomain kernel: device tap5734520e-04 entered promiscuous mode
Dec 05 10:12:10 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929530.1220] manager: (tap5734520e-04): new Generic device (/org/freedesktop/NetworkManager/Devices/48)
Dec 05 10:12:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:10.124 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:10 np0005546420.localdomain systemd-udevd[320618]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:12:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:10Z|00265|binding|INFO|Claiming lport 5734520e-04e6-4061-ab92-3eec7d16326f for this chassis.
Dec 05 10:12:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:10Z|00266|binding|INFO|5734520e-04e6-4061-ab92-3eec7d16326f: Claiming unknown
Dec 05 10:12:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:10 np0005546420.localdomain podman[320584]: 2025-12-05 10:12:10.139572173 +0000 UTC m=+0.124725320 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, container_name=openstack_network_exporter, config_id=edpm, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 10:12:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:10.138 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-4ce8fc66-ab78-477c-93e4-2501481b7154', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ce8fc66-ab78-477c-93e4-2501481b7154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dba761eb9482439aa79c2d9ffe5c0dfa', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=804b65d6-2013-4b34-84fa-9c3def9cac4b, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5734520e-04e6-4061-ab92-3eec7d16326f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:10.140 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5734520e-04e6-4061-ab92-3eec7d16326f in datapath 4ce8fc66-ab78-477c-93e4-2501481b7154 bound to our chassis
Dec 05 10:12:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:10.143 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4ce8fc66-ab78-477c-93e4-2501481b7154 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:10.144 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[52d7167a-4c9d-4e72-8dae-10026335efa5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:10 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:12:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5734520e-04: No such device
Dec 05 10:12:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5734520e-04: No such device
Dec 05 10:12:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:10Z|00267|binding|INFO|Setting lport 5734520e-04e6-4061-ab92-3eec7d16326f ovn-installed in OVS
Dec 05 10:12:10 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:10Z|00268|binding|INFO|Setting lport 5734520e-04e6-4061-ab92-3eec7d16326f up in Southbound
Dec 05 10:12:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:10.168 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:10.170 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5734520e-04: No such device
Dec 05 10:12:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5734520e-04: No such device
Dec 05 10:12:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5734520e-04: No such device
Dec 05 10:12:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5734520e-04: No such device
Dec 05 10:12:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5734520e-04: No such device
Dec 05 10:12:10 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap5734520e-04: No such device
Dec 05 10:12:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:10.231 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:10 np0005546420.localdomain podman[320607]: 2025-12-05 10:12:10.232813682 +0000 UTC m=+0.113876126 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:12:10 np0005546420.localdomain podman[320607]: 2025-12-05 10:12:10.239351294 +0000 UTC m=+0.120413758 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:12:10 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:12:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:10.255 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:10 np0005546420.localdomain ceph-mon[298353]: pgmap v381: 177 pgs: 177 active+clean; 193 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 32 KiB/s wr, 22 op/s
Dec 05 10:12:10 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:12:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e158 e158: 6 total, 6 up, 6 in
Dec 05 10:12:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:12:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.2 total, 600.0 interval
                                                          Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s
                                                          Cumulative WAL: 10K writes, 3073 syncs, 3.41 writes per sync, written: 0.04 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 5529 writes, 18K keys, 5529 commit groups, 1.0 writes per commit group, ingest: 16.77 MB, 0.03 MB/s
                                                          Interval WAL: 5529 writes, 2368 syncs, 2.33 writes per sync, written: 0.02 GB, 0.03 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:12:10 np0005546420.localdomain systemd[1]: tmp-crun.Blsiag.mount: Deactivated successfully.
Dec 05 10:12:11 np0005546420.localdomain podman[320706]: 
Dec 05 10:12:11 np0005546420.localdomain podman[320706]: 2025-12-05 10:12:11.099119017 +0000 UTC m=+0.083751867 container create bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:12:11 np0005546420.localdomain systemd[1]: Started libpod-conmon-bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81.scope.
Dec 05 10:12:11 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:12:11 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25083dbb8436951d49497e25b187912a3bb8ae1cf40ff5b851b357c51566201/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:12:11 np0005546420.localdomain podman[320706]: 2025-12-05 10:12:11.058699638 +0000 UTC m=+0.043332528 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:12:11 np0005546420.localdomain podman[320706]: 2025-12-05 10:12:11.1669484 +0000 UTC m=+0.151581260 container init bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 05 10:12:11 np0005546420.localdomain podman[320706]: 2025-12-05 10:12:11.176520225 +0000 UTC m=+0.161153075 container start bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 05 10:12:11 np0005546420.localdomain dnsmasq[320725]: started, version 2.85 cachesize 150
Dec 05 10:12:11 np0005546420.localdomain dnsmasq[320725]: DNS service limited to local subnets
Dec 05 10:12:11 np0005546420.localdomain dnsmasq[320725]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:12:11 np0005546420.localdomain dnsmasq[320725]: warning: no upstream servers configured
Dec 05 10:12:11 np0005546420.localdomain dnsmasq-dhcp[320725]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:12:11 np0005546420.localdomain dnsmasq[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/addn_hosts - 0 addresses
Dec 05 10:12:11 np0005546420.localdomain dnsmasq-dhcp[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/host
Dec 05 10:12:11 np0005546420.localdomain dnsmasq-dhcp[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/opts
Dec 05 10:12:11 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33],prefix=session evict} (starting...)
Dec 05 10:12:11 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:11.190 2 INFO neutron.agent.securitygroups_rpc [None req-07ea4a38-e98a-42b4-8586-d7ec7fae7988 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:12:11 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:11.198 2 INFO neutron.agent.securitygroups_rpc [None req-74116218-c595-488c-8172-3668fd713ccc 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:12:11 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:11.265 262769 INFO neutron.agent.dhcp.agent [None req-0dd95ccd-8e31-464c-bbe8-7796d7e4b411 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:08Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ee2220>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ee2a00>], id=2310b6f2-07b9-466d-8cea-bfb9cc05579f, ip_allocation=immediate, mac_address=fa:16:3e:bf:3d:78, name=tempest-PortsIpV6TestJSON-761630670, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:05Z, description=, dns_domain=, id=4ce8fc66-ab78-477c-93e4-2501481b7154, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1254891715, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15478, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2385, status=ACTIVE, subnets=['c8826b38-7e28-421d-a5e7-88b34af69156'], tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:12:07Z, vlan_transparent=None, network_id=4ce8fc66-ab78-477c-93e4-2501481b7154, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9c5d500d-a686-46a9-8ad0-737ee529f53d'], standard_attr_id=2394, status=DOWN, tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:12:08Z on network 4ce8fc66-ab78-477c-93e4-2501481b7154
Dec 05 10:12:11 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:11.392 262769 INFO neutron.agent.dhcp.agent [None req-b45126f7-4f61-48cf-ab5d-bdaf9720979b - - - - - -] DHCP configuration for ports {'732e9f83-8e99-423a-bf4d-431e80d75639'} is completed
Dec 05 10:12:11 np0005546420.localdomain dnsmasq[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/addn_hosts - 1 addresses
Dec 05 10:12:11 np0005546420.localdomain dnsmasq-dhcp[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/host
Dec 05 10:12:11 np0005546420.localdomain podman[320741]: 2025-12-05 10:12:11.473511034 +0000 UTC m=+0.065141742 container kill bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:12:11 np0005546420.localdomain dnsmasq-dhcp[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/opts
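The "container kill" events interleaved with the dnsmasq re-reads are the DHCP agent's reload path: rather than restarting the container, it delivers a signal to the containerized dnsmasq (presumably SIGHUP, which dnsmasq documents as triggering a re-read of addn_hosts, dhcp-hostsfile and dhcp-optsfile, exactly the three files re-read above). A hand-run reproduction of that step, reusing the container name from the log, would look roughly like:

    podman kill --signal HUP neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154

The final kill near the end of this sequence uses the default SIGTERM instead, which is why dnsmasq later logs "exiting on receipt of SIGTERM" and the container is cleaned up and removed.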
Dec 05 10:12:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "56639cdb-9f87-40b6-91d1-c10cab7d966b", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:12:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "56639cdb-9f87-40b6-91d1-c10cab7d966b", "format": "json"}]: dispatch
Dec 05 10:12:11 np0005546420.localdomain ceph-mon[298353]: osdmap e158: 6 total, 6 up, 6 in
Dec 05 10:12:11 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:11.622 262769 INFO neutron.agent.dhcp.agent [None req-0dd95ccd-8e31-464c-bbe8-7796d7e4b411 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:10Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ec7b20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ec7430>], id=ecff6a3b-13bc-43d9-aeae-180cb778761b, ip_allocation=immediate, mac_address=fa:16:3e:ae:2e:4e, name=tempest-PortsIpV6TestJSON-1317096896, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:05Z, description=, dns_domain=, id=4ce8fc66-ab78-477c-93e4-2501481b7154, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1254891715, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15478, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2385, status=ACTIVE, subnets=['c8826b38-7e28-421d-a5e7-88b34af69156'], tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:12:07Z, vlan_transparent=None, network_id=4ce8fc66-ab78-477c-93e4-2501481b7154, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9c5d500d-a686-46a9-8ad0-737ee529f53d'], standard_attr_id=2398, status=DOWN, tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:12:10Z on network 4ce8fc66-ab78-477c-93e4-2501481b7154
Dec 05 10:12:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:11.671 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:11 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:11.672 2 INFO neutron.agent.securitygroups_rpc [None req-8fce75c8-7ad5-4ffa-9ef1-d6fce74a08c3 859f234eba4c442983333d06bc12b112 0d15dccf4c864d558d055b0c7cd1cccc - - default default] Security group member updated ['6262f27b-ae7f-4862-a034-43ed1f313c2e']
Dec 05 10:12:11 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:11.757 262769 INFO neutron.agent.dhcp.agent [None req-8cb0e43c-1322-45fd-b7a0-c60414e6e64c - - - - - -] DHCP configuration for ports {'2310b6f2-07b9-466d-8cea-bfb9cc05579f'} is completed
Dec 05 10:12:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:11.774 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:11 np0005546420.localdomain dnsmasq[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/addn_hosts - 2 addresses
Dec 05 10:12:11 np0005546420.localdomain dnsmasq-dhcp[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/host
Dec 05 10:12:11 np0005546420.localdomain podman[320780]: 2025-12-05 10:12:11.809304541 +0000 UTC m=+0.055597288 container kill bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:12:11 np0005546420.localdomain dnsmasq-dhcp[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/opts
Dec 05 10:12:11 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e159 e159: 6 total, 6 up, 6 in
Dec 05 10:12:12 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:12.118 262769 INFO neutron.agent.dhcp.agent [None req-9e942046-06f6-43fa-a415-498ef8cc5d1f - - - - - -] DHCP configuration for ports {'ecff6a3b-13bc-43d9-aeae-180cb778761b'} is completed
Dec 05 10:12:12 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:12.250 2 INFO neutron.agent.securitygroups_rpc [None req-4b26bab8-a657-4bed-8b9b-bee7300fbeb3 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:12:12 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:12.253 2 INFO neutron.agent.securitygroups_rpc [None req-ccc77b1e-6cab-481a-912a-ed42ab93e56d 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:12:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 05 10:12:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 05 10:12:12 np0005546420.localdomain ceph-mon[298353]: pgmap v383: 177 pgs: 177 active+clean; 193 MiB data, 888 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 41 KiB/s wr, 45 op/s
Dec 05 10:12:12 np0005546420.localdomain ceph-mon[298353]: osdmap e159: 6 total, 6 up, 6 in
Dec 05 10:12:12 np0005546420.localdomain dnsmasq[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/addn_hosts - 1 addresses
Dec 05 10:12:12 np0005546420.localdomain dnsmasq-dhcp[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/host
Dec 05 10:12:12 np0005546420.localdomain podman[320820]: 2025-12-05 10:12:12.724157044 +0000 UTC m=+0.055737052 container kill bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:12:12 np0005546420.localdomain dnsmasq-dhcp[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/opts
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:12:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:12:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e160 e160: 6 total, 6 up, 6 in
Dec 05 10:12:13 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:13.529 2 INFO neutron.agent.securitygroups_rpc [None req-573803be-a4fe-451f-b3d4-e9e26fafbcb6 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:12:13 np0005546420.localdomain dnsmasq[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/addn_hosts - 0 addresses
Dec 05 10:12:13 np0005546420.localdomain dnsmasq-dhcp[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/host
Dec 05 10:12:13 np0005546420.localdomain dnsmasq-dhcp[320725]: read /var/lib/neutron/dhcp/4ce8fc66-ab78-477c-93e4-2501481b7154/opts
Dec 05 10:12:13 np0005546420.localdomain podman[320860]: 2025-12-05 10:12:13.732940706 +0000 UTC m=+0.058111975 container kill bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:12:13 np0005546420.localdomain ceph-mon[298353]: osdmap e160: 6 total, 6 up, 6 in
Dec 05 10:12:13 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "de805df9-2757-4e33-8bfe-65fc6ef40510", "format": "json"}]: dispatch
Dec 05 10:12:13 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "de805df9-2757-4e33-8bfe-65fc6ef40510", "force": true, "format": "json"}]: dispatch
Dec 05 10:12:13 np0005546420.localdomain ceph-mon[298353]: pgmap v386: 177 pgs: 177 active+clean; 193 MiB data, 888 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 38 KiB/s wr, 68 op/s
Dec 05 10:12:14 np0005546420.localdomain dnsmasq[320725]: exiting on receipt of SIGTERM
Dec 05 10:12:14 np0005546420.localdomain systemd[1]: libpod-bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81.scope: Deactivated successfully.
Dec 05 10:12:14 np0005546420.localdomain podman[320895]: 2025-12-05 10:12:14.546642666 +0000 UTC m=+0.075543484 container kill bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:12:14 np0005546420.localdomain podman[320909]: 2025-12-05 10:12:14.62354044 +0000 UTC m=+0.061218821 container died bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:12:14 np0005546420.localdomain podman[320909]: 2025-12-05 10:12:14.655169767 +0000 UTC m=+0.092848088 container cleanup bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:12:14 np0005546420.localdomain systemd[1]: libpod-conmon-bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81.scope: Deactivated successfully.
Dec 05 10:12:14 np0005546420.localdomain podman[320911]: 2025-12-05 10:12:14.701809287 +0000 UTC m=+0.129111758 container remove bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ce8fc66-ab78-477c-93e4-2501481b7154, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:12:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:14.716 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:14 np0005546420.localdomain kernel: device tap5734520e-04 left promiscuous mode
Dec 05 10:12:14 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:14Z|00269|binding|INFO|Releasing lport 5734520e-04e6-4061-ab92-3eec7d16326f from this chassis (sb_readonly=0)
Dec 05 10:12:14 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:14Z|00270|binding|INFO|Setting lport 5734520e-04e6-4061-ab92-3eec7d16326f down in Southbound
Dec 05 10:12:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-b25083dbb8436951d49497e25b187912a3bb8ae1cf40ff5b851b357c51566201-merged.mount: Deactivated successfully.
Dec 05 10:12:14 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb4cfa1e56801e7ba7d9125489a20c6e0cff96e3499d0747e2c52c9bb14c5c81-userdata-shm.mount: Deactivated successfully.
Dec 05 10:12:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:14.735 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:14 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:14.823 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-4ce8fc66-ab78-477c-93e4-2501481b7154', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ce8fc66-ab78-477c-93e4-2501481b7154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dba761eb9482439aa79c2d9ffe5c0dfa', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=804b65d6-2013-4b34-84fa-9c3def9cac4b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5734520e-04e6-4061-ab92-3eec7d16326f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:14 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:14.826 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5734520e-04e6-4061-ab92-3eec7d16326f in datapath 4ce8fc66-ab78-477c-93e4-2501481b7154 unbound from our chassis
Dec 05 10:12:14 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:14.827 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4ce8fc66-ab78-477c-93e4-2501481b7154 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:14 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:14.829 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[29da915c-7b92-421c-8c1c-e0ffaba7cf4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:14 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=tempest-cephx-id-1227377066,client_metadata.root=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33],prefix=session evict} (starting...)
Dec 05 10:12:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e161 e161: 6 total, 6 up, 6 in
Dec 05 10:12:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1227377066", "format": "json"} : dispatch
Dec 05 10:12:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1227377066"} : dispatch
Dec 05 10:12:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1227377066"}]': finished
Dec 05 10:12:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1985756166' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1985756166' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:15 np0005546420.localdomain ceph-mon[298353]: osdmap e161: 6 total, 6 up, 6 in
Dec 05 10:12:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:15 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d4ce8fc66\x2dab78\x2d477c\x2d93e4\x2d2501481b7154.mount: Deactivated successfully.
Dec 05 10:12:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:15.182 262769 INFO neutron.agent.dhcp.agent [None req-0cbfbc54-c7f8-4bcd-bfbc-4abed06ce520 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:15.480 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "tempest-cephx-id-1227377066", "format": "json"}]: dispatch
Dec 05 10:12:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "tempest-cephx-id-1227377066", "format": "json"}]: dispatch
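The paired deauthorize/evict dispatches are the usual CephFS access-revocation sequence: deauthorize removes the client's cephx caps for the subvolume, and evict kicks any live MDS sessions it still holds (the mds "session evict" asok_command above is the MDS-side effect). Assuming the mgr volumes plugin CLI with positional arguments, the equivalent hand-run commands would be roughly:

    ceph fs subvolume deauthorize cephfs 7ec11635-5c27-465d-8a70-06bc2f1e99f2 tempest-cephx-id-1227377066
    ceph fs subvolume evict cephfs 7ec11635-5c27-465d-8a70-06bc2f1e99f2 tempest-cephx-id-1227377066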
Dec 05 10:12:16 np0005546420.localdomain ceph-mon[298353]: pgmap v388: 177 pgs: 177 active+clean; 193 MiB data, 888 MiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 40 KiB/s wr, 71 op/s
Dec 05 10:12:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:16.021 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:12:16 np0005546420.localdomain dnsmasq[317206]: exiting on receipt of SIGTERM
Dec 05 10:12:16 np0005546420.localdomain podman[320956]: 2025-12-05 10:12:16.492156248 +0000 UTC m=+0.083205120 container kill 3d91e15524faec2002d617f7b8e9f6a12949c36b07052d4de91d84bc29656a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:12:16 np0005546420.localdomain systemd[1]: tmp-crun.cgjeZP.mount: Deactivated successfully.
Dec 05 10:12:16 np0005546420.localdomain systemd[1]: libpod-3d91e15524faec2002d617f7b8e9f6a12949c36b07052d4de91d84bc29656a2a.scope: Deactivated successfully.
Dec 05 10:12:16 np0005546420.localdomain podman[320962]: 2025-12-05 10:12:16.533454292 +0000 UTC m=+0.106585102 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:12:16 np0005546420.localdomain podman[320981]: 2025-12-05 10:12:16.567515024 +0000 UTC m=+0.052340197 container died 3d91e15524faec2002d617f7b8e9f6a12949c36b07052d4de91d84bc29656a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 05 10:12:16 np0005546420.localdomain podman[320981]: 2025-12-05 10:12:16.647192663 +0000 UTC m=+0.132017826 container remove 3d91e15524faec2002d617f7b8e9f6a12949c36b07052d4de91d84bc29656a2a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:12:16 np0005546420.localdomain podman[320962]: 2025-12-05 10:12:16.653469807 +0000 UTC m=+0.226600597 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 05 10:12:16 np0005546420.localdomain systemd[1]: libpod-conmon-3d91e15524faec2002d617f7b8e9f6a12949c36b07052d4de91d84bc29656a2a.scope: Deactivated successfully.
Dec 05 10:12:16 np0005546420.localdomain kernel: device tap5dab9d54-e6 left promiscuous mode
Dec 05 10:12:16 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:12:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:16Z|00271|binding|INFO|Releasing lport 5dab9d54-e6ae-4a75-b908-163b44a64d04 from this chassis (sb_readonly=0)
Dec 05 10:12:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:16Z|00272|binding|INFO|Setting lport 5dab9d54-e6ae-4a75-b908-163b44a64d04 down in Southbound
Dec 05 10:12:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:16.663 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:16.675 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebc61650-8998-47fd-a8ee-981cc6780af7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=5dab9d54-e6ae-4a75-b908-163b44a64d04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:16.677 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 5dab9d54-e6ae-4a75-b908-163b44a64d04 in datapath 0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6 unbound from our chassis
Dec 05 10:12:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:16.678 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0b7a4f8a-0c6d-4e3d-9da8-1936977e24a6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:16.679 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[c77f1f45-19c8-4a66-a3ca-daac34fcac90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:16.681 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:16.689 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:16.691 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:16.696 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:16.777 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:16.893 262769 INFO neutron.agent.dhcp.agent [None req-d57fd982-be47-496a-88fb-413632b5f7a1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:16.894 262769 INFO neutron.agent.dhcp.agent [None req-d57fd982-be47-496a-88fb-413632b5f7a1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e162 e162: 6 total, 6 up, 6 in
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.071770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537071825, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1421, "num_deletes": 256, "total_data_size": 1793648, "memory_usage": 1819296, "flush_reason": "Manual Compaction"}
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "56639cdb-9f87-40b6-91d1-c10cab7d966b", "format": "json"}]: dispatch
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "56639cdb-9f87-40b6-91d1-c10cab7d966b", "force": true, "format": "json"}]: dispatch
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537084740, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1173458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26649, "largest_seqno": 28065, "table_properties": {"data_size": 1167345, "index_size": 3327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14809, "raw_average_key_size": 21, "raw_value_size": 1154518, "raw_average_value_size": 1702, "num_data_blocks": 140, "num_entries": 678, "num_filter_entries": 678, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929470, "oldest_key_time": 1764929470, "file_creation_time": 1764929537, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 13017 microseconds, and 4372 cpu microseconds.
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.084787) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1173458 bytes OK
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.084811) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.087173) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.087197) EVENT_LOG_v1 {"time_micros": 1764929537087190, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.087219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1786661, prev total WAL file size 1786661, number of live WAL files 2.
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.088034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end)
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1145KB)], [45(16MB)]
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537088102, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 18984088, "oldest_snapshot_seqno": -1}
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 12749 keys, 17763330 bytes, temperature: kUnknown
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537188497, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 17763330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17689380, "index_size": 40953, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 341619, "raw_average_key_size": 26, "raw_value_size": 17471245, "raw_average_value_size": 1370, "num_data_blocks": 1550, "num_entries": 12749, "num_filter_entries": 12749, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929537, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.188889) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 17763330 bytes
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.191856) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.9 rd, 176.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 17.0 +0.0 blob) out(16.9 +0.0 blob), read-write-amplify(31.3) write-amplify(15.1) OK, records in: 13288, records dropped: 539 output_compression: NoCompression
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.191884) EVENT_LOG_v1 {"time_micros": 1764929537191872, "job": 26, "event": "compaction_finished", "compaction_time_micros": 100487, "compaction_time_cpu_micros": 53971, "output_level": 6, "num_output_files": 1, "total_output_size": 17763330, "num_input_records": 13288, "num_output_records": 12749, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537192199, "job": 26, "event": "table_file_deletion", "file_number": 47}
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537195482, "job": 26, "event": "table_file_deletion", "file_number": 45}
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.087888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.195687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.195695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.195698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:12:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:12:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.195702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:12:17 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:12:17.195705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:12:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:12:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:12:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:12:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18742 "" "Go-http-client/1.1"
Dec 05 10:12:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:17.344 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:17.345 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:12:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-3c9467e263cc6170ea4d059583bed1473617c2940f5606e392a7f0c6dd1904b5-merged.mount: Deactivated successfully.
Dec 05 10:12:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d91e15524faec2002d617f7b8e9f6a12949c36b07052d4de91d84bc29656a2a-userdata-shm.mount: Deactivated successfully.
Dec 05 10:12:17 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d0b7a4f8a\x2d0c6d\x2d4e3d\x2d9da8\x2d1936977e24a6.mount: Deactivated successfully.
Dec 05 10:12:17 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:17.519 2 INFO neutron.agent.securitygroups_rpc [None req-cd9bae7f-808b-456c-bf8e-f31a82155a0f 859f234eba4c442983333d06bc12b112 0d15dccf4c864d558d055b0c7cd1cccc - - default default] Security group member updated ['6262f27b-ae7f-4862-a034-43ed1f313c2e']
Dec 05 10:12:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:17.638 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:17.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:17.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:12:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:17.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:12:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:18.019 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:12:18 np0005546420.localdomain ceph-mon[298353]: osdmap e162: 6 total, 6 up, 6 in
Dec 05 10:12:18 np0005546420.localdomain ceph-mon[298353]: pgmap v390: 177 pgs: 177 active+clean; 193 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 45 KiB/s wr, 111 op/s
Dec 05 10:12:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2431482962' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:12:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:18.319 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:18 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/ca912719-4ded-4fbc-bc88-8b50dbf8a797],prefix=session evict} (starting...)
Dec 05 10:12:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:12:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:12:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:12:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:12:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:12:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:12:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:12:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:12:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:12:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:12:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:18.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:18 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:18.878 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8:0:1:f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:18 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:18.880 159503 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 05 10:12:18 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:18.883 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:12:18 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:18.884 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3544f3-7082-4f51-8d92-dea85dd15b11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 05 10:12:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 05 10:12:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec 05 10:12:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Dec 05 10:12:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 05 10:12:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1601322822' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:12:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e163 e163: 6 total, 6 up, 6 in
Dec 05 10:12:19 np0005546420.localdomain systemd[1]: tmp-crun.YsEJXq.mount: Deactivated successfully.
Dec 05 10:12:19 np0005546420.localdomain dnsmasq[316352]: exiting on receipt of SIGTERM
Dec 05 10:12:19 np0005546420.localdomain podman[321034]: 2025-12-05 10:12:19.473077183 +0000 UTC m=+0.062915334 container kill eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8adbc708-ce3a-4885-a714-2e1429dac54a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:12:19 np0005546420.localdomain systemd[1]: libpod-eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0.scope: Deactivated successfully.
Dec 05 10:12:19 np0005546420.localdomain podman[321046]: 2025-12-05 10:12:19.524727927 +0000 UTC m=+0.038464939 container died eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8adbc708-ce3a-4885-a714-2e1429dac54a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:12:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:12:19 np0005546420.localdomain podman[321046]: 2025-12-05 10:12:19.569016475 +0000 UTC m=+0.082753507 container cleanup eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8adbc708-ce3a-4885-a714-2e1429dac54a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:12:19 np0005546420.localdomain systemd[1]: libpod-conmon-eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0.scope: Deactivated successfully.
Dec 05 10:12:19 np0005546420.localdomain podman[321047]: 2025-12-05 10:12:19.588389953 +0000 UTC m=+0.093493977 container remove eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8adbc708-ce3a-4885-a714-2e1429dac54a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:12:19 np0005546420.localdomain kernel: device tapfa974c5a-64 left promiscuous mode
Dec 05 10:12:19 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:19Z|00273|binding|INFO|Releasing lport fa974c5a-64bf-4671-a776-ad34f0f9de1f from this chassis (sb_readonly=0)
Dec 05 10:12:19 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:19Z|00274|binding|INFO|Setting lport fa974c5a-64bf-4671-a776-ad34f0f9de1f down in Southbound
Dec 05 10:12:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:19.598 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:19.615 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:19 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:19.639 2 INFO neutron.agent.securitygroups_rpc [None req-20e10a9c-3ee9-4c67-b462-5bcd13a7c5f8 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:12:19 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:19.636 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-8adbc708-ce3a-4885-a714-2e1429dac54a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8adbc708-ce3a-4885-a714-2e1429dac54a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21ac3546-112a-41e6-a0c1-08447db40291, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=fa974c5a-64bf-4671-a776-ad34f0f9de1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:19 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:19.638 159503 INFO neutron.agent.ovn.metadata.agent [-] Port fa974c5a-64bf-4671-a776-ad34f0f9de1f in datapath 8adbc708-ce3a-4885-a714-2e1429dac54a unbound from our chassis
Dec 05 10:12:19 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:19.639 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8adbc708-ce3a-4885-a714-2e1429dac54a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:19 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:19.640 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[e319287c-32e5-457f-9c5b-936392004126]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:19 np0005546420.localdomain podman[321069]: 2025-12-05 10:12:19.653336888 +0000 UTC m=+0.085480330 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 10:12:19 np0005546420.localdomain podman[321069]: 2025-12-05 10:12:19.690703751 +0000 UTC m=+0.122847193 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 05 10:12:19 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:12:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:19.866 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:20.241 262769 INFO neutron.agent.dhcp.agent [None req-8c1ff9da-190a-4cd6-8a0c-8f75990ee91e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:20.280 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:20 np0005546420.localdomain ceph-mon[298353]: pgmap v391: 177 pgs: 177 active+clean; 193 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 38 KiB/s wr, 94 op/s
Dec 05 10:12:20 np0005546420.localdomain ceph-mon[298353]: osdmap e163: 6 total, 6 up, 6 in
Dec 05 10:12:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:20.426 262769 INFO neutron.agent.linux.ip_lib [None req-3b1c1803-5c54-45fb-9678-96fb778807f0 - - - - - -] Device tap87b79a6b-3a cannot be used as it has no MAC address
Dec 05 10:12:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-cd8589040cd66bb18a73c0b82d0bfb1a82d5ebf6c2d57b5f8f4dfcc3db952819-merged.mount: Deactivated successfully.
Dec 05 10:12:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb59ae13596ebae8864c300d3988eef99fa7034a00c637b25b55ee5c930c92e0-userdata-shm.mount: Deactivated successfully.
Dec 05 10:12:20 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d8adbc708\x2dce3a\x2d4885\x2da714\x2d2e1429dac54a.mount: Deactivated successfully.
Dec 05 10:12:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:20.551 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:20 np0005546420.localdomain kernel: device tap87b79a6b-3a entered promiscuous mode
Dec 05 10:12:20 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929540.5595] manager: (tap87b79a6b-3a): new Generic device (/org/freedesktop/NetworkManager/Devices/49)
Dec 05 10:12:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:20.560 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:20 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:20Z|00275|binding|INFO|Claiming lport 87b79a6b-3af4-4a0e-b30f-99cb763b9f03 for this chassis.
Dec 05 10:12:20 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:20Z|00276|binding|INFO|87b79a6b-3af4-4a0e-b30f-99cb763b9f03: Claiming unknown
Dec 05 10:12:20 np0005546420.localdomain systemd-udevd[321100]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:12:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:20.592 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-cdb43d27-4d46-49e9-864e-b80799fcfdab', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdb43d27-4d46-49e9-864e-b80799fcfdab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dba761eb9482439aa79c2d9ffe5c0dfa', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94d2bf61-fca5-4f59-a40d-200faae99a15, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=87b79a6b-3af4-4a0e-b30f-99cb763b9f03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:20.594 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 87b79a6b-3af4-4a0e-b30f-99cb763b9f03 in datapath cdb43d27-4d46-49e9-864e-b80799fcfdab bound to our chassis
Dec 05 10:12:20 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap87b79a6b-3a: No such device
Dec 05 10:12:20 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:20Z|00277|binding|INFO|Setting lport 87b79a6b-3af4-4a0e-b30f-99cb763b9f03 ovn-installed in OVS
Dec 05 10:12:20 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:20Z|00278|binding|INFO|Setting lport 87b79a6b-3af4-4a0e-b30f-99cb763b9f03 up in Southbound
Dec 05 10:12:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:20.595 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network cdb43d27-4d46-49e9-864e-b80799fcfdab or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:20.595 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:20 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:20.596 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb6e387-3703-47dc-8ff5-43deae3de7b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:20 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap87b79a6b-3a: No such device
Dec 05 10:12:20 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap87b79a6b-3a: No such device
Dec 05 10:12:20 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap87b79a6b-3a: No such device
Dec 05 10:12:20 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap87b79a6b-3a: No such device
Dec 05 10:12:20 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap87b79a6b-3a: No such device
Dec 05 10:12:20 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap87b79a6b-3a: No such device
Dec 05 10:12:20 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap87b79a6b-3a: No such device
Dec 05 10:12:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:20.637 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:20.664 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:20.725 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e164 e164: 6 total, 6 up, 6 in
Dec 05 10:12:20 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:20.909 2 INFO neutron.agent.securitygroups_rpc [None req-25e7fcf2-d6a4-4a73-97e0-30de885c71b5 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:12:21 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:21.009 2 INFO neutron.agent.securitygroups_rpc [None req-6f7c4e29-23fb-4261-b44a-d1c166858ed2 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:12:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:12:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3619747897' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:12:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3619747897' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:21.213 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:21 np0005546420.localdomain podman[321171]: 2025-12-05 10:12:21.582668529 +0000 UTC m=+0.089340689 container create 085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:12:21 np0005546420.localdomain systemd[1]: Started libpod-conmon-085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8.scope.
Dec 05 10:12:21 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:12:21 np0005546420.localdomain podman[321171]: 2025-12-05 10:12:21.538835236 +0000 UTC m=+0.045507406 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:12:21 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70473bcb6474f71172ecdf79dadd4e3cc190fce56c4f036607db7af990e03d4d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:12:21 np0005546420.localdomain podman[321171]: 2025-12-05 10:12:21.653002 +0000 UTC m=+0.159674150 container init 085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:12:21 np0005546420.localdomain podman[321171]: 2025-12-05 10:12:21.661822483 +0000 UTC m=+0.168494643 container start 085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:12:21 np0005546420.localdomain dnsmasq[321189]: started, version 2.85 cachesize 150
Dec 05 10:12:21 np0005546420.localdomain dnsmasq[321189]: DNS service limited to local subnets
Dec 05 10:12:21 np0005546420.localdomain dnsmasq[321189]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:12:21 np0005546420.localdomain dnsmasq[321189]: warning: no upstream servers configured
Dec 05 10:12:21 np0005546420.localdomain dnsmasq-dhcp[321189]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:12:21 np0005546420.localdomain dnsmasq[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/addn_hosts - 0 addresses
Dec 05 10:12:21 np0005546420.localdomain dnsmasq-dhcp[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/host
Dec 05 10:12:21 np0005546420.localdomain dnsmasq-dhcp[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/opts
Dec 05 10:12:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:21.718 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:21.724 262769 INFO neutron.agent.dhcp.agent [None req-3b1c1803-5c54-45fb-9678-96fb778807f0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e4c550>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e4c1c0>], id=eb731f22-c47c-4df6-9712-ec65db350d9a, ip_allocation=immediate, mac_address=fa:16:3e:f9:6c:f2, name=tempest-PortsIpV6TestJSON-581896740, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:16Z, description=, dns_domain=, id=cdb43d27-4d46-49e9-864e-b80799fcfdab, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1102785500, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=37228, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2404, status=ACTIVE, subnets=['a73713ea-b6c1-4bca-963d-89df375553ff'], tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:12:18Z, vlan_transparent=None, network_id=cdb43d27-4d46-49e9-864e-b80799fcfdab, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9c5d500d-a686-46a9-8ad0-737ee529f53d'], standard_attr_id=2421, status=DOWN, tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:12:20Z on network cdb43d27-4d46-49e9-864e-b80799fcfdab
Dec 05 10:12:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:21.778 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:21 np0005546420.localdomain ceph-mon[298353]: osdmap e164: 6 total, 6 up, 6 in
Dec 05 10:12:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3619747897' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3619747897' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:21 np0005546420.localdomain ceph-mon[298353]: pgmap v394: 177 pgs: 177 active+clean; 194 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 107 KiB/s rd, 79 KiB/s wr, 153 op/s
Dec 05 10:12:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:21.904 262769 INFO neutron.agent.dhcp.agent [None req-18c73fa3-016e-46ca-b77b-d0c2898bb182 - - - - - -] DHCP configuration for ports {'a59f8785-f516-46ed-a653-10fd8a06411e'} is completed
Dec 05 10:12:21 np0005546420.localdomain dnsmasq[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/addn_hosts - 1 addresses
Dec 05 10:12:21 np0005546420.localdomain dnsmasq-dhcp[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/host
Dec 05 10:12:21 np0005546420.localdomain dnsmasq-dhcp[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/opts
Dec 05 10:12:21 np0005546420.localdomain podman[321208]: 2025-12-05 10:12:21.941063064 +0000 UTC m=+0.066284608 container kill 085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:12:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:22.250 262769 INFO neutron.agent.dhcp.agent [None req-996a39c2-c713-485a-acb5-1ee707156844 - - - - - -] DHCP configuration for ports {'eb731f22-c47c-4df6-9712-ec65db350d9a'} is completed
Dec 05 10:12:22 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "auth_id": "admin", "tenant_id": "f4c34f38ddb048808ef72391bdda40b5", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:12:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Dec 05 10:12:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:22.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:23.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:23.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:23.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:23 np0005546420.localdomain ceph-mon[298353]: pgmap v395: 177 pgs: 177 active+clean; 194 MiB data, 929 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 38 KiB/s wr, 52 op/s
Dec 05 10:12:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:25.131 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:20Z, description=, device_id=42e4e1da-39f2-4a93-b628-c40aba184e4c, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e31c70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a8d4640>], id=eb731f22-c47c-4df6-9712-ec65db350d9a, ip_allocation=immediate, mac_address=fa:16:3e:f9:6c:f2, name=tempest-PortsIpV6TestJSON-581896740, network_id=cdb43d27-4d46-49e9-864e-b80799fcfdab, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['9c5d500d-a686-46a9-8ad0-737ee529f53d'], standard_attr_id=2421, status=ACTIVE, tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:12:23Z on network cdb43d27-4d46-49e9-864e-b80799fcfdab
Dec 05 10:12:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 05 10:12:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/8c0406bc-b10e-4b16-b5be-dfcb9c87425a", "osd", "allow rw pool=manila_data namespace=fsvolumens_9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:12:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:25 np0005546420.localdomain dnsmasq[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/addn_hosts - 1 addresses
Dec 05 10:12:25 np0005546420.localdomain dnsmasq-dhcp[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/host
Dec 05 10:12:25 np0005546420.localdomain podman[321246]: 2025-12-05 10:12:25.334766602 +0000 UTC m=+0.059535299 container kill 085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 10:12:25 np0005546420.localdomain dnsmasq-dhcp[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/opts
Dec 05 10:12:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:25.628 262769 INFO neutron.agent.dhcp.agent [None req-c09b6fca-2d0f-4f4d-8590-e8e15ae21fd3 - - - - - -] DHCP configuration for ports {'eb731f22-c47c-4df6-9712-ec65db350d9a'} is completed
Dec 05 10:12:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e165 e165: 6 total, 6 up, 6 in
Dec 05 10:12:25 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:25.959 2 INFO neutron.agent.securitygroups_rpc [None req-f44d7ee3-716d-4a7c-a97c-712507a87563 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:12:26 np0005546420.localdomain dnsmasq[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/addn_hosts - 0 addresses
Dec 05 10:12:26 np0005546420.localdomain dnsmasq-dhcp[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/host
Dec 05 10:12:26 np0005546420.localdomain dnsmasq-dhcp[321189]: read /var/lib/neutron/dhcp/cdb43d27-4d46-49e9-864e-b80799fcfdab/opts
Dec 05 10:12:26 np0005546420.localdomain podman[321283]: 2025-12-05 10:12:26.148989708 +0000 UTC m=+0.065309577 container kill 085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:12:26 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "auth_id": "david", "tenant_id": "f4c34f38ddb048808ef72391bdda40b5", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:12:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/8c0406bc-b10e-4b16-b5be-dfcb9c87425a", "osd", "allow rw pool=manila_data namespace=fsvolumens_9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:12:26 np0005546420.localdomain ceph-mon[298353]: pgmap v396: 177 pgs: 177 active+clean; 194 MiB data, 929 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 29 KiB/s wr, 41 op/s
Dec 05 10:12:26 np0005546420.localdomain ceph-mon[298353]: osdmap e165: 6 total, 6 up, 6 in
Dec 05 10:12:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:26Z|00279|binding|INFO|Releasing lport 87b79a6b-3af4-4a0e-b30f-99cb763b9f03 from this chassis (sb_readonly=0)
Dec 05 10:12:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:26Z|00280|binding|INFO|Setting lport 87b79a6b-3af4-4a0e-b30f-99cb763b9f03 down in Southbound
Dec 05 10:12:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:26.368 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:26 np0005546420.localdomain kernel: device tap87b79a6b-3a left promiscuous mode
Dec 05 10:12:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:26.379 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-cdb43d27-4d46-49e9-864e-b80799fcfdab', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdb43d27-4d46-49e9-864e-b80799fcfdab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dba761eb9482439aa79c2d9ffe5c0dfa', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94d2bf61-fca5-4f59-a40d-200faae99a15, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=87b79a6b-3af4-4a0e-b30f-99cb763b9f03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:26.381 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 87b79a6b-3af4-4a0e-b30f-99cb763b9f03 in datapath cdb43d27-4d46-49e9-864e-b80799fcfdab unbound from our chassis
Dec 05 10:12:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:26.382 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network cdb43d27-4d46-49e9-864e-b80799fcfdab or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:26.383 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[32c11f4d-1a7b-442e-ac17-cce87684b3a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:26.392 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:26.771 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:26.780 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:27.119 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:27.120 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:27.121 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
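nb_cfg moved from 14 to 15 in SB_Global, and rather than writing the chassis table on every bump the agent coalesces updates behind a 4-second delay (the matching Chassis_Private write lands at 10:12:31 below). A sketch of that debounce pattern, illustrative rather than the agent's actual code:

import threading

DELAY = 4.0          # the "Delaying updating chassis table for 4 seconds" above
_timer = None

def update_chassis(nb_cfg):
    print(f"writing neutron:ovn-metadata-sb-cfg = {nb_cfg}")

def on_sb_global_update(nb_cfg):
    global _timer
    if _timer:
        _timer.cancel()               # restart the delay on every new update
    _timer = threading.Timer(DELAY, update_chassis, args=(nb_cfg,))
    _timer.start()

on_sb_global_update(15)               # fires once, 4 s later (cf. the txn at 10:12:31)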
Dec 05 10:12:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/483473754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:12:27 np0005546420.localdomain dnsmasq[321189]: exiting on receipt of SIGTERM
Dec 05 10:12:27 np0005546420.localdomain podman[321323]: 2025-12-05 10:12:27.741262414 +0000 UTC m=+0.070949791 container kill 085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 10:12:27 np0005546420.localdomain systemd[1]: tmp-crun.0WVxfX.mount: Deactivated successfully.
Dec 05 10:12:27 np0005546420.localdomain systemd[1]: libpod-085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8.scope: Deactivated successfully.
Dec 05 10:12:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:12:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:12:27 np0005546420.localdomain podman[321337]: 2025-12-05 10:12:27.822913214 +0000 UTC m=+0.066125432 container died 085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:12:27 np0005546420.localdomain systemd[1]: tmp-crun.uHQI3l.mount: Deactivated successfully.
Dec 05 10:12:27 np0005546420.localdomain podman[321351]: 2025-12-05 10:12:27.861278249 +0000 UTC m=+0.079877857 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Dec 05 10:12:27 np0005546420.localdomain podman[321351]: 2025-12-05 10:12:27.894192365 +0000 UTC m=+0.112791963 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:12:27 np0005546420.localdomain podman[321337]: 2025-12-05 10:12:27.907535347 +0000 UTC m=+0.150747585 container cleanup 085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:12:27 np0005546420.localdomain systemd[1]: libpod-conmon-085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8.scope: Deactivated successfully.
Dec 05 10:12:27 np0005546420.localdomain podman[321339]: 2025-12-05 10:12:27.93226238 +0000 UTC m=+0.169904256 container remove 085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:12:27 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
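The kill -> died -> cleanup -> remove chain above is the normal teardown of the qdhcp dnsmasq side-car once its network goes away; force-removing the container produces the same event sequence. A sketch, assuming the container still exists on the host:

import subprocess

# Container name copied from the log; this only succeeds on a host where
# that container is actually present.
name = "neutron-dnsmasq-qdhcp-cdb43d27-4d46-49e9-864e-b80799fcfdab"
subprocess.run(["podman", "rm", "-f", name], check=False)

# The individual lifecycle records can be watched with, e.g.:
#   podman events --filter container=<name> --since 1m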
Dec 05 10:12:28 np0005546420.localdomain podman[321350]: 2025-12-05 10:12:28.019755722 +0000 UTC m=+0.243021934 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:12:28 np0005546420.localdomain podman[321350]: 2025-12-05 10:12:28.053087131 +0000 UTC m=+0.276353333 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:12:28 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:12:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:28.210 262769 INFO neutron.agent.dhcp.agent [None req-7e415b97-cd69-4793-ac95-f7393b439212 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:28.211 262769 INFO neutron.agent.dhcp.agent [None req-7e415b97-cd69-4793-ac95-f7393b439212 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:28.365 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:28 np0005546420.localdomain ceph-mon[298353]: pgmap v398: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 50 KiB/s wr, 46 op/s
Dec 05 10:12:28 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2469474859' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:12:28 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:28.584 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-70473bcb6474f71172ecdf79dadd4e3cc190fce56c4f036607db7af990e03d4d-merged.mount: Deactivated successfully.
Dec 05 10:12:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-085783522b19c2f6f3b8d623f3358fa137d87b9dc8fefa4e765138f8734364e8-userdata-shm.mount: Deactivated successfully.
Dec 05 10:12:28 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2dcdb43d27\x2d4d46\x2d49e9\x2d864e\x2db80799fcfdab.mount: Deactivated successfully.
Dec 05 10:12:28 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:28.809 2 INFO neutron.agent.securitygroups_rpc [None req-f95cf61e-ac32-455c-9429-d352c925f648 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:12:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:28.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:12:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:28.894 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:12:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:28.894 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:12:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:28.895 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:12:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:28.895 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:12:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:28.895 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:12:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:12:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1389641252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:12:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:29.343 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
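update_available_resource shells out to the same ceph df call an operator would run; the 0.448s above is the command round trip itself. Re-issuing it by hand and pulling free space out of the JSON (the stats/total_avail_bytes keys are an assumption about the ceph df layout):

import json
import subprocess

# Requires /etc/ceph/ceph.conf and a readable keyring for client.openstack.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out)["stats"]
print("avail GiB:", stats["total_avail_bytes"] / 2**30)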
Dec 05 10:12:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:12:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "format": "json"}]: dispatch
Dec 05 10:12:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2282225310' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1389641252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:12:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e166 e166: 6 total, 6 up, 6 in
Dec 05 10:12:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:29.538 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:12:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:29.539 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11597MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:12:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:29.540 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:12:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:29.540 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:12:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:29.604 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:12:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:29.604 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:12:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:29.623 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:12:29 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:29.676 2 INFO neutron.agent.securitygroups_rpc [None req-0ed3eef7-102a-4eda-8931-e0e743290eeb 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 05 10:12:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:12:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2580453998' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:12:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:30.042 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:12:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:30.050 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:12:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:30.129 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:12:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:30.132 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:12:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:30.132 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
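The whole audit ran under the compute_resources lock: acquired at 10:12:29.540, released 0.592s later. An illustrative reimplementation of the waited/held bookkeeping that oslo_concurrency.lockutils logs here, not the oslo code itself:

import threading
import time
from functools import wraps

_locks = {}

def synchronized(name):
    lock = _locks.setdefault(name, threading.Lock())
    def wrap(fn):
        @wraps(fn)
        def inner(*args, **kwargs):
            t0 = time.monotonic()
            with lock:
                waited = time.monotonic() - t0
                t1 = time.monotonic()
                try:
                    return fn(*args, **kwargs)
                finally:
                    print(f'Lock "{name}" held {time.monotonic() - t1:.3f}s '
                          f'(waited {waited:.3f}s)')
        return inner
    return wrap

@synchronized("compute_resources")
def update_available_resource():
    time.sleep(0.1)                   # stands in for the audit work

update_available_resource()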
Dec 05 10:12:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:12:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e167 e167: 6 total, 6 up, 6 in
Dec 05 10:12:30 np0005546420.localdomain ceph-mon[298353]: pgmap v399: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 47 KiB/s wr, 43 op/s
Dec 05 10:12:30 np0005546420.localdomain ceph-mon[298353]: osdmap e166: 6 total, 6 up, 6 in
Dec 05 10:12:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2580453998' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:12:30 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:30.494 2 INFO neutron.agent.securitygroups_rpc [None req-f71c19cb-1c49-4f2f-a17f-755852cba13d 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['12716e84-7d0a-49ef-b4ae-c42660f35fe6']
Dec 05 10:12:30 np0005546420.localdomain systemd[1]: tmp-crun.bGH47s.mount: Deactivated successfully.
Dec 05 10:12:30 np0005546420.localdomain podman[321450]: 2025-12-05 10:12:30.525154977 +0000 UTC m=+0.106148098 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 10:12:30 np0005546420.localdomain podman[321450]: 2025-12-05 10:12:30.54048558 +0000 UTC m=+0.121478721 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:12:30 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:12:31 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:31.123 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:12:31 np0005546420.localdomain ceph-mon[298353]: osdmap e167: 6 total, 6 up, 6 in
Dec 05 10:12:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e168 e168: 6 total, 6 up, 6 in
Dec 05 10:12:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:31.773 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:31.781 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:32 np0005546420.localdomain ceph-mon[298353]: pgmap v402: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 42 KiB/s wr, 63 op/s
Dec 05 10:12:32 np0005546420.localdomain ceph-mon[298353]: osdmap e168: 6 total, 6 up, 6 in
Dec 05 10:12:32 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "auth_id": "david", "tenant_id": "a1984fed702d4461879e97dd7c6fc401", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:12:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 05 10:12:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2488484770' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2488484770' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e169 e169: 6 total, 6 up, 6 in
Dec 05 10:12:33 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:33.228 2 INFO neutron.agent.securitygroups_rpc [None req-0beb9e78-8188-4d82-8b36-33bf8ed2e36d 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['12716e84-7d0a-49ef-b4ae-c42660f35fe6', 'd55adec3-95c0-449e-a33d-049a875e32be']
Dec 05 10:12:33 np0005546420.localdomain ceph-mon[298353]: osdmap e169: 6 total, 6 up, 6 in
Dec 05 10:12:34 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:34.336 2 INFO neutron.agent.securitygroups_rpc [None req-78d46b76-9a12-401f-a336-42fa87108c51 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['d55adec3-95c0-449e-a33d-049a875e32be']
Dec 05 10:12:34 np0005546420.localdomain ceph-mon[298353]: pgmap v405: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 23 KiB/s wr, 89 op/s
Dec 05 10:12:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:35 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/183dfc32-49d7-4c92-9c61-4b9f674605ac/50784130-a6ed-458b-a113-5ad377ba5a4b],prefix=session evict} (starting...)
Dec 05 10:12:35 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e170 e170: 6 total, 6 up, 6 in
Dec 05 10:12:36 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d8bc5d0f-8805-4ca6-8ce8-76816531c4d6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:12:36 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d8bc5d0f-8805-4ca6-8ce8-76816531c4d6", "format": "json"}]: dispatch
Dec 05 10:12:36 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "auth_id": "david", "format": "json"}]: dispatch
Dec 05 10:12:36 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "auth_id": "david", "format": "json"}]: dispatch
Dec 05 10:12:36 np0005546420.localdomain ceph-mon[298353]: pgmap v406: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 15 KiB/s wr, 60 op/s
Dec 05 10:12:36 np0005546420.localdomain ceph-mon[298353]: osdmap e170: 6 total, 6 up, 6 in
Dec 05 10:12:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:36.776 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:36.783 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:37 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e171 e171: 6 total, 6 up, 6 in
Dec 05 10:12:38 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:38.443 2 INFO neutron.agent.securitygroups_rpc [None req-204baa73-ce5c-45b4-97d9-f3e220eb0f4a 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['72125fb1-732c-46f4-bbef-3f4bc55bdbb5']
Dec 05 10:12:38 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/8c0406bc-b10e-4b16-b5be-dfcb9c87425a],prefix=session evict} (starting...)
Dec 05 10:12:38 np0005546420.localdomain ceph-mon[298353]: pgmap v408: 177 pgs: 177 active+clean; 194 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 16 KiB/s wr, 40 op/s
Dec 05 10:12:38 np0005546420.localdomain ceph-mon[298353]: osdmap e171: 6 total, 6 up, 6 in
Dec 05 10:12:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1868971788' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 05 10:12:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec 05 10:12:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Dec 05 10:12:39 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e172 e172: 6 total, 6 up, 6 in
Dec 05 10:12:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "auth_id": "david", "format": "json"}]: dispatch
Dec 05 10:12:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "auth_id": "david", "format": "json"}]: dispatch
Dec 05 10:12:39 np0005546420.localdomain ceph-mon[298353]: osdmap e172: 6 total, 6 up, 6 in
Dec 05 10:12:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:12:40 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:12:40 np0005546420.localdomain podman[321468]: 2025-12-05 10:12:40.512767519 +0000 UTC m=+0.088880185 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9)
Dec 05 10:12:40 np0005546420.localdomain podman[321468]: 2025-12-05 10:12:40.524176781 +0000 UTC m=+0.100289447 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Dec 05 10:12:40 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:12:40 np0005546420.localdomain systemd[1]: tmp-crun.co6DU0.mount: Deactivated successfully.
Dec 05 10:12:40 np0005546420.localdomain podman[321469]: 2025-12-05 10:12:40.6125503 +0000 UTC m=+0.184964891 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:12:40 np0005546420.localdomain podman[321469]: 2025-12-05 10:12:40.619742261 +0000 UTC m=+0.192156832 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:12:40 np0005546420.localdomain ceph-mon[298353]: pgmap v410: 177 pgs: 177 active+clean; 194 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 14 KiB/s wr, 33 op/s
Dec 05 10:12:40 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:12:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:41.778 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:41.785 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:41 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:41.793 262769 INFO neutron.agent.linux.ip_lib [None req-49f5b48e-3d94-4c9a-bdba-58ff7eb2ed09 - - - - - -] Device tapc298b5db-a0 cannot be used as it has no MAC address
Dec 05 10:12:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:41.816 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:41 np0005546420.localdomain kernel: device tapc298b5db-a0 entered promiscuous mode
Dec 05 10:12:41 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:41Z|00281|binding|INFO|Claiming lport c298b5db-a053-4bf3-8b81-5bbf98f9690e for this chassis.
Dec 05 10:12:41 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:41Z|00282|binding|INFO|c298b5db-a053-4bf3-8b81-5bbf98f9690e: Claiming unknown
Dec 05 10:12:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:41.825 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:41 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929561.8273] manager: (tapc298b5db-a0): new Generic device (/org/freedesktop/NetworkManager/Devices/50)
Dec 05 10:12:41 np0005546420.localdomain systemd-udevd[321518]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:12:41 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:41Z|00283|binding|INFO|Setting lport c298b5db-a053-4bf3-8b81-5bbf98f9690e ovn-installed in OVS
Dec 05 10:12:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:41.867 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:41 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e173 e173: 6 total, 6 up, 6 in
Dec 05 10:12:41 np0005546420.localdomain ceph-mon[298353]: pgmap v412: 177 pgs: 177 active+clean; 194 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 106 KiB/s rd, 45 KiB/s wr, 149 op/s
Dec 05 10:12:41 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/133303656' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:41 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/133303656' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:41.905 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:41.939 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:42 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:42Z|00284|binding|INFO|Setting lport c298b5db-a053-4bf3-8b81-5bbf98f9690e up in Southbound
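The claim sequence for this lport completes here: claimed by the chassis, ovn-installed set in OVS, then up in the Southbound DB. One way to confirm the binding from the chassis itself (needs access to the SB database):

import subprocess

# Queries the Southbound Port_Binding row for the lport named in the log.
subprocess.run([
    "ovn-sbctl", "find", "Port_Binding",
    "logical_port=c298b5db-a053-4bf3-8b81-5bbf98f9690e",
])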
Dec 05 10:12:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:42.474 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-e5e79495-7110-4a22-a68c-ca87ef48bb59', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5e79495-7110-4a22-a68c-ca87ef48bb59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b09b27fd-84e8-4e0b-a6db-e8b36ae96720, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=c298b5db-a053-4bf3-8b81-5bbf98f9690e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:42.476 159503 INFO neutron.agent.ovn.metadata.agent [-] Port c298b5db-a053-4bf3-8b81-5bbf98f9690e in datapath e5e79495-7110-4a22-a68c-ca87ef48bb59 bound to our chassis
Dec 05 10:12:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:42.477 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e5e79495-7110-4a22-a68c-ca87ef48bb59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:42.479 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[1d531100-024d-4812-b367-7081cdf47b93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:42 np0005546420.localdomain podman[321573]: 2025-12-05 10:12:42.7957964 +0000 UTC m=+0.098651766 container create faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5e79495-7110-4a22-a68c-ca87ef48bb59, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 10:12:42 np0005546420.localdomain systemd[1]: Started libpod-conmon-faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a.scope.
Dec 05 10:12:42 np0005546420.localdomain podman[321573]: 2025-12-05 10:12:42.747874191 +0000 UTC m=+0.050729537 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:12:42 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:12:42 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62215d530f09c6840332537e06321b0311e797d59f8e37ab590df650794bb10e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:12:42 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:42.870 2 INFO neutron.agent.securitygroups_rpc [None req-bf47f0b2-e468-4ec8-b957-b0b2af4ee729 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['6d484cfc-d88e-489c-af08-dd8717f1f0ef', '72125fb1-732c-46f4-bbef-3f4bc55bdbb5', 'a44de420-0955-49a8-bcc6-65991cbbb4d6']
Dec 05 10:12:42 np0005546420.localdomain podman[321573]: 2025-12-05 10:12:42.880203325 +0000 UTC m=+0.183058711 container init faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5e79495-7110-4a22-a68c-ca87ef48bb59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 05 10:12:42 np0005546420.localdomain ceph-mon[298353]: osdmap e173: 6 total, 6 up, 6 in
Dec 05 10:12:42 np0005546420.localdomain podman[321573]: 2025-12-05 10:12:42.89135419 +0000 UTC m=+0.194209536 container start faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5e79495-7110-4a22-a68c-ca87ef48bb59, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:12:42 np0005546420.localdomain dnsmasq[321591]: started, version 2.85 cachesize 150
Dec 05 10:12:42 np0005546420.localdomain dnsmasq[321591]: DNS service limited to local subnets
Dec 05 10:12:42 np0005546420.localdomain dnsmasq[321591]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:12:42 np0005546420.localdomain dnsmasq[321591]: warning: no upstream servers configured
Dec 05 10:12:42 np0005546420.localdomain dnsmasq-dhcp[321591]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:12:42 np0005546420.localdomain dnsmasq[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/addn_hosts - 0 addresses
Dec 05 10:12:42 np0005546420.localdomain dnsmasq-dhcp[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/host
Dec 05 10:12:42 np0005546420.localdomain dnsmasq-dhcp[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/opts
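dnsmasq's three "read ..." lines correspond to the per-network files the DHCP agent maintains under /var/lib/neutron/dhcp/<network_id>/: host feeds --dhcp-hostsfile, opts feeds --dhcp-optsfile, and addn_hosts feeds --addn-hosts for DNS. A small sketch, assuming those default paths, that approximates the "- N addresses" count dnsmasq reports for addn_hosts:

    from pathlib import Path

    NET = 'e5e79495-7110-4a22-a68c-ca87ef48bb59'  # network id from the log
    base = Path('/var/lib/neutron/dhcp') / NET

    def count_addn_hosts(path=base / 'addn_hosts'):
        """Approximate dnsmasq's 'read ... - N addresses': one non-blank line per entry."""
        try:
            return sum(1 for line in path.read_text().splitlines() if line.strip())
        except FileNotFoundError:
            return 0

    print(count_addn_hosts())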
Dec 05 10:12:42 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:42.951 262769 INFO neutron.agent.dhcp.agent [None req-49f5b48e-3d94-4c9a-bdba-58ff7eb2ed09 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:41Z, description=, device_id=3e093e23-d62b-4445-a81a-730ed0bec4c1, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f0b130>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f0b880>], id=92e1d9d2-d7e0-4dff-9be4-fb9e2af51275, ip_allocation=immediate, mac_address=fa:16:3e:fa:f9:72, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:37Z, description=, dns_domain=, id=e5e79495-7110-4a22-a68c-ca87ef48bb59, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-342178120, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=46856, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2480, status=ACTIVE, subnets=['8e0ac34e-f7dc-4c0e-bbe7-68e8bf6ac274'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:38Z, vlan_transparent=None, network_id=e5e79495-7110-4a22-a68c-ca87ef48bb59, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2488, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:41Z on network e5e79495-7110-4a22-a68c-ca87ef48bb59
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1621841522' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1621841522' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
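The df / "osd pool get-quota" pair dispatched for client.openstack is an ordinary monitor command, and the same requests can be issued from Python through librados. A sketch assuming a readable /etc/ceph/ceph.conf and the client.openstack keyring on this host:

    import json
    import rados  # python3-rados binding

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    for prefix, extra in (('df', {}), ('osd pool get-quota', {'pool': 'volumes'})):
        cmd = {'prefix': prefix, 'format': 'json'}
        cmd.update(extra)
        # mon_command takes the JSON-encoded command and an input buffer,
        # returning (retcode, output bytes, error string)
        ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b'')
        print(prefix, '->', ret, outbuf[:80])
    cluster.shutdown()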
Dec 05 10:12:43 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:43.116 262769 INFO neutron.agent.dhcp.agent [None req-3e7e7a4d-dbbb-40c5-b3b2-085dc1bd41ea - - - - - -] DHCP configuration for ports {'8f8b64a3-a412-4905-b488-3fe26e00f7d8'} is completed
Dec 05 10:12:43 np0005546420.localdomain podman[321610]: 2025-12-05 10:12:43.183627173 +0000 UTC m=+0.068552798 container kill faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5e79495-7110-4a22-a68c-ca87ef48bb59, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 10:12:43 np0005546420.localdomain dnsmasq[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/addn_hosts - 1 addresses
Dec 05 10:12:43 np0005546420.localdomain dnsmasq-dhcp[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/host
Dec 05 10:12:43 np0005546420.localdomain dnsmasq-dhcp[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/opts
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/499706816' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/499706816' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:43 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:43.447 2 INFO neutron.agent.securitygroups_rpc [None req-23ad9585-8dde-473a-8067-24fa3b53a337 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['6d484cfc-d88e-489c-af08-dd8717f1f0ef', 'a44de420-0955-49a8-bcc6-65991cbbb4d6']
Dec 05 10:12:43 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:43.480 262769 INFO neutron.agent.dhcp.agent [None req-dd1ab389-3566-4ba5-a7ca-33503cb213fe - - - - - -] DHCP configuration for ports {'92e1d9d2-d7e0-4dff-9be4-fb9e2af51275'} is completed
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e174 e174: 6 total, 6 up, 6 in
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1621841522' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1621841522' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "format": "json"}]: dispatch
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "force": true, "format": "json"}]: dispatch
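The "fs clone status" / "fs subvolume rm" sequence is the CephFS subvolume cleanup path (the share-deletion flow of a CephFS-backed service). The CLI equivalent of the dispatched JSON, with the volume and subvolume names taken from the log, is:

    import subprocess

    VOL, SUB = 'cephfs', '183dfc32-49d7-4c92-9c61-4b9f674605ac'  # values from the log
    # Check for a pending clone first, then remove the subvolume; --force matches
    # the "force": true flag in the dispatched command.
    subprocess.run(['ceph', 'fs', 'clone', 'status', VOL, SUB], check=False)
    subprocess.run(['ceph', 'fs', 'subvolume', 'rm', VOL, SUB, '--force'], check=True)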
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/499706816' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/499706816' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:43 np0005546420.localdomain ceph-mon[298353]: pgmap v414: 177 pgs: 177 active+clean; 194 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 29 KiB/s wr, 115 op/s
Dec 05 10:12:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:44.005 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:41Z, description=, device_id=3e093e23-d62b-4445-a81a-730ed0bec4c1, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e53ac0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e53940>], id=92e1d9d2-d7e0-4dff-9be4-fb9e2af51275, ip_allocation=immediate, mac_address=fa:16:3e:fa:f9:72, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:37Z, description=, dns_domain=, id=e5e79495-7110-4a22-a68c-ca87ef48bb59, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-342178120, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=46856, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2480, status=ACTIVE, subnets=['8e0ac34e-f7dc-4c0e-bbe7-68e8bf6ac274'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:38Z, vlan_transparent=None, network_id=e5e79495-7110-4a22-a68c-ca87ef48bb59, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2488, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:41Z on network e5e79495-7110-4a22-a68c-ca87ef48bb59
Dec 05 10:12:44 np0005546420.localdomain dnsmasq[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/addn_hosts - 1 addresses
Dec 05 10:12:44 np0005546420.localdomain dnsmasq-dhcp[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/host
Dec 05 10:12:44 np0005546420.localdomain dnsmasq-dhcp[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/opts
Dec 05 10:12:44 np0005546420.localdomain podman[321650]: 2025-12-05 10:12:44.218909534 +0000 UTC m=+0.068259058 container kill faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5e79495-7110-4a22-a68c-ca87ef48bb59, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:12:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:44.535 262769 INFO neutron.agent.dhcp.agent [None req-0f2fc03a-9af8-4603-bf47-c9bc7c2ae125 - - - - - -] DHCP configuration for ports {'92e1d9d2-d7e0-4dff-9be4-fb9e2af51275'} is completed
Dec 05 10:12:44 np0005546420.localdomain ceph-mon[298353]: osdmap e174: 6 total, 6 up, 6 in
Dec 05 10:12:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:45 np0005546420.localdomain ceph-mon[298353]: pgmap v416: 177 pgs: 177 active+clean; 194 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 29 KiB/s wr, 115 op/s
Dec 05 10:12:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:46.817 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:46.818 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:12:46 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:46.952 262769 INFO neutron.agent.linux.ip_lib [None req-1e79e749-342c-404a-af2f-5749cf8a0556 - - - - - -] Device tap29aca406-9f cannot be used as it has no MAC address
Dec 05 10:12:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "format": "json"}]: dispatch
Dec 05 10:12:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "force": true, "format": "json"}]: dispatch
Dec 05 10:12:46 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e175 e175: 6 total, 6 up, 6 in
Dec 05 10:12:46 np0005546420.localdomain podman[321672]: 2025-12-05 10:12:46.975551816 +0000 UTC m=+0.081922700 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
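The transient unit started at 10:12:46 is systemd invoking podman's built-in health check; the health_status=healthy label in the following event is its result. The manual equivalent, using the container ID from the log:

    import subprocess

    CID = 'd6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0'
    # "podman healthcheck run" executes the container's configured healthcheck
    # command (here: /openstack/healthcheck) and exits 0 when healthy.
    result = subprocess.run(['podman', 'healthcheck', 'run', CID])
    print('healthy' if result.returncode == 0 else 'unhealthy')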
Dec 05 10:12:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:46.981 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:46 np0005546420.localdomain kernel: device tap29aca406-9f entered promiscuous mode
Dec 05 10:12:46 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929566.9899] manager: (tap29aca406-9f): new Generic device (/org/freedesktop/NetworkManager/Devices/51)
Dec 05 10:12:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:46.991 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:46Z|00285|binding|INFO|Claiming lport 29aca406-9f78-4620-b9ac-6d5eab4a1529 for this chassis.
Dec 05 10:12:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:46Z|00286|binding|INFO|29aca406-9f78-4620-b9ac-6d5eab4a1529: Claiming unknown
Dec 05 10:12:46 np0005546420.localdomain systemd-udevd[321699]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:12:46 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:46.996 2 INFO neutron.agent.securitygroups_rpc [None req-3ec9d7a5-d1c6-4e3e-bf7f-f71f0310d08a 14a89074968e40cbb69c8c73a9492d34 66efd68a4ed34b1a976a072e82fd9b38 - - default default] Security group member updated ['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24']
Dec 05 10:12:47 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:47.004 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66efd68a4ed34b1a976a072e82fd9b38', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40ea348f-a118-4b3d-8c8a-add0b0f34cc7, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=29aca406-9f78-4620-b9ac-6d5eab4a1529) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:47 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:47.006 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 29aca406-9f78-4620-b9ac-6d5eab4a1529 in datapath a5bd19d0-46af-4f26-bc95-af1d1d9d5f10 bound to our chassis
Dec 05 10:12:47 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:47.008 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a5bd19d0-46af-4f26-bc95-af1d1d9d5f10 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:47 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:47.009 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[38a76d67-f823-4c45-ad7e-672e6ba317b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:47.019 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:47 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:47Z|00287|binding|INFO|Setting lport 29aca406-9f78-4620-b9ac-6d5eab4a1529 ovn-installed in OVS
Dec 05 10:12:47 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:47Z|00288|binding|INFO|Setting lport 29aca406-9f78-4620-b9ac-6d5eab4a1529 up in Southbound
Dec 05 10:12:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:47.022 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:47.031 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:47 np0005546420.localdomain podman[321672]: 2025-12-05 10:12:47.053545293 +0000 UTC m=+0.159916097 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:12:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:47.067 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:47 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:12:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:47.081 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:12:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:12:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:12:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154908 "" "Go-http-client/1.1"
Dec 05 10:12:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:12:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18733 "" "Go-http-client/1.1"
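The two GET lines are another process querying podman's libpod REST API over its unix socket (the "@" is the socket peer in the access-log format). A minimal client sketch; the socket path is an assumption for a rootful API service, while the endpoint is the one from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    # Assumed default socket for a rootful podman API service.
    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')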
Dec 05 10:12:47 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:12:47 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/568109930' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:47 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:12:47 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/568109930' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:47 np0005546420.localdomain podman[321760]: 2025-12-05 10:12:47.88041288 +0000 UTC m=+0.092980751 container create 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 10:12:47 np0005546420.localdomain systemd[1]: Started libpod-conmon-9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb.scope.
Dec 05 10:12:47 np0005546420.localdomain podman[321760]: 2025-12-05 10:12:47.830323614 +0000 UTC m=+0.042891515 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:12:47 np0005546420.localdomain systemd[1]: tmp-crun.TwuayK.mount: Deactivated successfully.
Dec 05 10:12:47 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:12:47 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c64c0c2d630d29e2d435a92e0692a2f8de137e6f8a84262bd6dbab5d12ce544d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:12:47 np0005546420.localdomain podman[321760]: 2025-12-05 10:12:47.959201892 +0000 UTC m=+0.171769763 container init 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:12:47 np0005546420.localdomain podman[321760]: 2025-12-05 10:12:47.968748897 +0000 UTC m=+0.181316768 container start 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:12:47 np0005546420.localdomain dnsmasq[321778]: started, version 2.85 cachesize 150
Dec 05 10:12:47 np0005546420.localdomain dnsmasq[321778]: DNS service limited to local subnets
Dec 05 10:12:47 np0005546420.localdomain dnsmasq[321778]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:12:47 np0005546420.localdomain dnsmasq[321778]: warning: no upstream servers configured
Dec 05 10:12:47 np0005546420.localdomain dnsmasq-dhcp[321778]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 05 10:12:47 np0005546420.localdomain dnsmasq[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/addn_hosts - 0 addresses
Dec 05 10:12:47 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/host
Dec 05 10:12:47 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/opts
Dec 05 10:12:47 np0005546420.localdomain ceph-mon[298353]: osdmap e175: 6 total, 6 up, 6 in
Dec 05 10:12:47 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d8bc5d0f-8805-4ca6-8ce8-76816531c4d6", "format": "json"}]: dispatch
Dec 05 10:12:47 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d8bc5d0f-8805-4ca6-8ce8-76816531c4d6", "force": true, "format": "json"}]: dispatch
Dec 05 10:12:47 np0005546420.localdomain ceph-mon[298353]: pgmap v418: 177 pgs: 177 active+clean; 194 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 22 KiB/s wr, 102 op/s
Dec 05 10:12:47 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/568109930' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:47 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/568109930' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:48.039 262769 INFO neutron.agent.dhcp.agent [None req-1e79e749-342c-404a-af2f-5749cf8a0556 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:46Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ed40a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ed44c0>], id=29823d42-6149-47ce-8f83-dd9891902c71, ip_allocation=immediate, mac_address=fa:16:3e:65:bd:09, name=tempest-ExtraDHCPOptionsIpV6TestJSON-180903060, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:42Z, description=, dns_domain=, id=a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1388680650, port_security_enabled=True, project_id=66efd68a4ed34b1a976a072e82fd9b38, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32706, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2494, status=ACTIVE, subnets=['cab1711d-d2e1-4a75-897f-ffcdf79046b3'], tags=[], tenant_id=66efd68a4ed34b1a976a072e82fd9b38, updated_at=2025-12-05T10:12:44Z, vlan_transparent=None, network_id=a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, port_security_enabled=True, project_id=66efd68a4ed34b1a976a072e82fd9b38, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24'], standard_attr_id=2509, status=DOWN, tags=[], tenant_id=66efd68a4ed34b1a976a072e82fd9b38, updated_at=2025-12-05T10:12:46Z on network a5bd19d0-46af-4f26-bc95-af1d1d9d5f10
Dec 05 10:12:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:48.106 262769 INFO neutron.agent.dhcp.agent [None req-27444036-3372-472a-85e7-1790dff0cb58 - - - - - -] DHCP configuration for ports {'d250066c-e6eb-4422-b54e-9669583afacf'} is completed
Dec 05 10:12:48 np0005546420.localdomain dnsmasq[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/addn_hosts - 1 addresses
Dec 05 10:12:48 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/host
Dec 05 10:12:48 np0005546420.localdomain podman[321795]: 2025-12-05 10:12:48.244276983 +0000 UTC m=+0.061571222 container kill 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 10:12:48 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/opts
Dec 05 10:12:48 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:48.341 2 INFO neutron.agent.securitygroups_rpc [None req-23e8adc4-f858-48c0-a5ac-96732445e499 14a89074968e40cbb69c8c73a9492d34 66efd68a4ed34b1a976a072e82fd9b38 - - default default] Security group member updated ['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24']
Dec 05 10:12:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:48.702 262769 INFO neutron.agent.dhcp.agent [None req-61d8db45-d0c9-4fc7-af3c-2782d2f5413f - - - - - -] DHCP configuration for ports {'29823d42-6149-47ce-8f83-dd9891902c71'} is completed
Dec 05 10:12:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:48.722 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:47Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e61df0>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e985e0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e98c40>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e984f0>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e986d0>], id=76595615-0813-44f4-985f-501f8047836e, ip_allocation=immediate, mac_address=fa:16:3e:b5:33:bb, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1074692016, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:42Z, description=, dns_domain=, id=a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1388680650, port_security_enabled=True, project_id=66efd68a4ed34b1a976a072e82fd9b38, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32706, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2494, status=ACTIVE, subnets=['cab1711d-d2e1-4a75-897f-ffcdf79046b3'], tags=[], tenant_id=66efd68a4ed34b1a976a072e82fd9b38, updated_at=2025-12-05T10:12:44Z, vlan_transparent=None, network_id=a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, port_security_enabled=True, project_id=66efd68a4ed34b1a976a072e82fd9b38, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24'], standard_attr_id=2512, status=DOWN, tags=[], tenant_id=66efd68a4ed34b1a976a072e82fd9b38, updated_at=2025-12-05T10:12:47Z on network a5bd19d0-46af-4f26-bc95-af1d1d9d5f10
Dec 05 10:12:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:48.745 262769 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option bootfile-name because its ip_version 4 is not in the port's address IP versions
Dec 05 10:12:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:48.746 262769 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option tftp-server because its ip_version 4 is not in the port's address IP versions
Dec 05 10:12:48 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:48.747 262769 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option server-ip-address because its ip_version 4 is not in the port's address IP versions
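These three INFO lines mean the port carries only an IPv6 fixed IP (on the 2001:db8:: subnet) while its extra_dhcp_opts were created with ip_version 4, so the agent drops them. A sketch of that filter with illustrative names, not Neutron's internals:

    # An option tagged ip_version=4 is skipped when the port only holds
    # IPv6 addresses.
    def usable_opts(extra_dhcp_opts, port_ip_versions):
        for opt in extra_dhcp_opts:
            if opt.get('ip_version') in port_ip_versions:
                yield opt
            else:
                print('skip %s: ip_version %s not in %s'
                      % (opt['opt_name'], opt.get('ip_version'),
                         sorted(port_ip_versions)))

    opts = [{'opt_name': 'bootfile-name', 'opt_value': 'pxelinux.0', 'ip_version': 4}]
    list(usable_opts(opts, {6}))  # the port in the log has only an IPv6 address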
Dec 05 10:12:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:12:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:12:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:12:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:12:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:12:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:12:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:12:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:12:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:12:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
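openstack_network_exporter probes ovs-vswitchd and ovn-northd through their control sockets and fails here because no *.ctl files exist (and dpif-netdev/pmd-perf-show additionally requires a userspace datapath, hence "please specify an existing datapath"). A manual probe under the usual default socket directory, which is an assumption rather than this host's configured path:

    import glob
    import subprocess

    # ovs-appctl targets a daemon's control socket directly with -t.
    ctl = glob.glob('/var/run/openvswitch/ovs-vswitchd.*.ctl')
    if ctl:
        subprocess.run(['ovs-appctl', '-t', ctl[0], 'dpif-netdev/pmd-perf-show'])
    else:
        # Same condition the exporter reports above.
        print('no control socket files found for the ovs db server')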
Dec 05 10:12:48 np0005546420.localdomain dnsmasq[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/addn_hosts - 2 addresses
Dec 05 10:12:48 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/host
Dec 05 10:12:48 np0005546420.localdomain podman[321834]: 2025-12-05 10:12:48.935303565 +0000 UTC m=+0.059786597 container kill 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:12:48 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/opts
Dec 05 10:12:49 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:49.264 262769 INFO neutron.agent.dhcp.agent [None req-928d7dcb-1a32-435b-bb78-99279cb87f3c - - - - - -] DHCP configuration for ports {'76595615-0813-44f4-985f-501f8047836e'} is completed
Dec 05 10:12:49 np0005546420.localdomain dnsmasq[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/addn_hosts - 0 addresses
Dec 05 10:12:49 np0005546420.localdomain podman[321873]: 2025-12-05 10:12:49.278250123 +0000 UTC m=+0.064523563 container kill faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5e79495-7110-4a22-a68c-ca87ef48bb59, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:12:49 np0005546420.localdomain dnsmasq-dhcp[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/host
Dec 05 10:12:49 np0005546420.localdomain dnsmasq-dhcp[321591]: read /var/lib/neutron/dhcp/e5e79495-7110-4a22-a68c-ca87ef48bb59/opts
Dec 05 10:12:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:49.481 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:49Z|00289|binding|INFO|Releasing lport c298b5db-a053-4bf3-8b81-5bbf98f9690e from this chassis (sb_readonly=0)
Dec 05 10:12:49 np0005546420.localdomain kernel: device tapc298b5db-a0 left promiscuous mode
Dec 05 10:12:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:49Z|00290|binding|INFO|Setting lport c298b5db-a053-4bf3-8b81-5bbf98f9690e down in Southbound
Dec 05 10:12:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:49.493 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-e5e79495-7110-4a22-a68c-ca87ef48bb59', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5e79495-7110-4a22-a68c-ca87ef48bb59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b09b27fd-84e8-4e0b-a6db-e8b36ae96720, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=c298b5db-a053-4bf3-8b81-5bbf98f9690e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:49.495 159503 INFO neutron.agent.ovn.metadata.agent [-] Port c298b5db-a053-4bf3-8b81-5bbf98f9690e in datapath e5e79495-7110-4a22-a68c-ca87ef48bb59 unbound from our chassis
Dec 05 10:12:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:49.497 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e5e79495-7110-4a22-a68c-ca87ef48bb59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:49.497 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[637bda29-aee3-451f-9bec-4735752b874a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
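Releasing the lport flips its Port_Binding row back to chassis=[] and up=[false], which is exactly what the metadata agent's "unbound from our chassis" message reacts to. One way to confirm the state in the southbound DB, assuming ovn-sbctl on this host can reach it:

    import subprocess

    LPORT = 'c298b5db-a053-4bf3-8b81-5bbf98f9690e'  # logical_port from the log
    # Empty "chassis" output corresponds to the chassis=[] column after the release.
    subprocess.run(['ovn-sbctl', '--columns=up,chassis', 'find',
                    'Port_Binding', 'logical_port=%s' % LPORT])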
Dec 05 10:12:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:49.508 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:49 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:49.719 2 INFO neutron.agent.securitygroups_rpc [None req-31943e55-a76e-4108-a6e8-23e89683ccac 14a89074968e40cbb69c8c73a9492d34 66efd68a4ed34b1a976a072e82fd9b38 - - default default] Security group member updated ['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24']
Dec 05 10:12:49 np0005546420.localdomain systemd[1]: tmp-crun.97OKfq.mount: Deactivated successfully.
Dec 05 10:12:49 np0005546420.localdomain dnsmasq[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/addn_hosts - 1 addresses
Dec 05 10:12:49 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/host
Dec 05 10:12:49 np0005546420.localdomain podman[321911]: 2025-12-05 10:12:49.958400451 +0000 UTC m=+0.066858725 container kill 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:12:49 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/opts
Dec 05 10:12:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:12:50 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:50.020 2 INFO neutron.agent.securitygroups_rpc [None req-6d62a797-239e-4719-a132-43bb7e7d705c 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 05 10:12:50 np0005546420.localdomain podman[321924]: 2025-12-05 10:12:50.077204728 +0000 UTC m=+0.098874213 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:12:50 np0005546420.localdomain podman[321924]: 2025-12-05 10:12:50.119525995 +0000 UTC m=+0.141195500 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:12:50 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:12:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:50.152 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:46Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99dea3d0>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99dea070>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99dea490>, <neutron.agent.linux.dhcp.DictModel object at 0x7f6d99deab20>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99dea4f0>], id=29823d42-6149-47ce-8f83-dd9891902c71, ip_allocation=immediate, mac_address=fa:16:3e:65:bd:09, name=tempest-new-port-name-1093087226, network_id=a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, port_security_enabled=True, project_id=66efd68a4ed34b1a976a072e82fd9b38, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24'], standard_attr_id=2509, status=DOWN, tags=[], tenant_id=66efd68a4ed34b1a976a072e82fd9b38, updated_at=2025-12-05T10:12:49Z on network a5bd19d0-46af-4f26-bc95-af1d1d9d5f10
Dec 05 10:12:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:50.172 262769 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option tftp-server because its ip_version 4 is not in the port's address IP versions
Dec 05 10:12:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:50.173 262769 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option server-ip-address because its ip_version 4 is not in the port's address IP versions
Dec 05 10:12:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:50.174 262769 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option bootfile-name because its ip_version 4 is not in the port's address IP versions
Dec 05 10:12:50 np0005546420.localdomain dnsmasq[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/addn_hosts - 1 addresses
Dec 05 10:12:50 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/host
Dec 05 10:12:50 np0005546420.localdomain podman[321967]: 2025-12-05 10:12:50.359303857 +0000 UTC m=+0.061966104 container kill 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 10:12:50 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/opts
Dec 05 10:12:50 np0005546420.localdomain ceph-mon[298353]: pgmap v419: 177 pgs: 177 active+clean; 194 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 17 KiB/s wr, 79 op/s
Dec 05 10:12:50 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "format": "json"}]: dispatch
Dec 05 10:12:50 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "force": true, "format": "json"}]: dispatch
Dec 05 10:12:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:12:50 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3316939438' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:50.633 262769 INFO neutron.agent.dhcp.agent [None req-09beaf63-3455-4e0c-bb74-a7d6ecb487dc - - - - - -] DHCP configuration for ports {'29823d42-6149-47ce-8f83-dd9891902c71'} is completed
Dec 05 10:12:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e176 e176: 6 total, 6 up, 6 in
Dec 05 10:12:51 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:12:51.090 2 INFO neutron.agent.securitygroups_rpc [None req-fc354d76-bc0c-445a-824b-9fa964ac8ee3 14a89074968e40cbb69c8c73a9492d34 66efd68a4ed34b1a976a072e82fd9b38 - - default default] Security group member updated ['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24']
Dec 05 10:12:51 np0005546420.localdomain dnsmasq[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/addn_hosts - 0 addresses
Dec 05 10:12:51 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/host
Dec 05 10:12:51 np0005546420.localdomain podman[322016]: 2025-12-05 10:12:51.352309883 +0000 UTC m=+0.063398638 container kill 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 10:12:51 np0005546420.localdomain dnsmasq-dhcp[321778]: read /var/lib/neutron/dhcp/a5bd19d0-46af-4f26-bc95-af1d1d9d5f10/opts
Dec 05 10:12:51 np0005546420.localdomain dnsmasq[321591]: exiting on receipt of SIGTERM
Dec 05 10:12:51 np0005546420.localdomain podman[322029]: 2025-12-05 10:12:51.400424639 +0000 UTC m=+0.063132291 container kill faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5e79495-7110-4a22-a68c-ca87ef48bb59, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:12:51 np0005546420.localdomain systemd[1]: libpod-faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a.scope: Deactivated successfully.
Dec 05 10:12:51 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3316939438' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:51 np0005546420.localdomain ceph-mon[298353]: osdmap e176: 6 total, 6 up, 6 in
Dec 05 10:12:51 np0005546420.localdomain podman[322047]: 2025-12-05 10:12:51.489154457 +0000 UTC m=+0.063096938 container died faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5e79495-7110-4a22-a68c-ca87ef48bb59, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:12:51 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a-userdata-shm.mount: Deactivated successfully.
Dec 05 10:12:51 np0005546420.localdomain podman[322047]: 2025-12-05 10:12:51.594380466 +0000 UTC m=+0.168322927 container remove faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5e79495-7110-4a22-a68c-ca87ef48bb59, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:12:51 np0005546420.localdomain systemd[1]: libpod-conmon-faf0b97ea6dba89efd5a25c6d698015d8e6daa76ef233ac967b1e4b9fe07b22a.scope: Deactivated successfully.
Dec 05 10:12:51 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:51.626 262769 INFO neutron.agent.dhcp.agent [None req-c667fbb2-6858-4c40-a99c-01361e502fe5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:51 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:51.739 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:51.857 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:51 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e177 e177: 6 total, 6 up, 6 in
Dec 05 10:12:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:51.945 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:52 np0005546420.localdomain dnsmasq[321778]: exiting on receipt of SIGTERM
Dec 05 10:12:52 np0005546420.localdomain podman[322093]: 2025-12-05 10:12:52.056706339 +0000 UTC m=+0.059935991 container kill 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:12:52 np0005546420.localdomain systemd[1]: libpod-9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb.scope: Deactivated successfully.
Dec 05 10:12:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:52Z|00291|binding|INFO|Removing iface tap29aca406-9f ovn-installed in OVS
Dec 05 10:12:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:12:52Z|00292|binding|INFO|Removing lport 29aca406-9f78-4620-b9ac-6d5eab4a1529 ovn-installed in OVS
Dec 05 10:12:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:52.098 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:52.100 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port f122b192-738f-45aa-826a-25b740c39488 with type ""
Dec 05 10:12:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:52.101 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66efd68a4ed34b1a976a072e82fd9b38', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40ea348f-a118-4b3d-8c8a-add0b0f34cc7, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=29aca406-9f78-4620-b9ac-6d5eab4a1529) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:12:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:52.105 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:52.106 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 29aca406-9f78-4620-b9ac-6d5eab4a1529 in datapath a5bd19d0-46af-4f26-bc95-af1d1d9d5f10 unbound from our chassis
Dec 05 10:12:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:52.107 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a5bd19d0-46af-4f26-bc95-af1d1d9d5f10 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:12:52 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:12:52.109 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6346f9-f7b0-4d9c-adcb-fed6d918c204]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:12:52 np0005546420.localdomain podman[322109]: 2025-12-05 10:12:52.14841558 +0000 UTC m=+0.067289988 container died 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 05 10:12:52 np0005546420.localdomain podman[322109]: 2025-12-05 10:12:52.19216796 +0000 UTC m=+0.111042328 container remove 9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a5bd19d0-46af-4f26-bc95-af1d1d9d5f10, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:12:52 np0005546420.localdomain systemd[1]: libpod-conmon-9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb.scope: Deactivated successfully.
Dec 05 10:12:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:52.206 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:52 np0005546420.localdomain kernel: device tap29aca406-9f left promiscuous mode
Dec 05 10:12:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:52.220 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:52 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:52.246 262769 INFO neutron.agent.dhcp.agent [None req-ce22b576-d5d8-4386-8e9b-e8974ccb2438 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:52 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:12:52.320 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:12:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c64c0c2d630d29e2d435a92e0692a2f8de137e6f8a84262bd6dbab5d12ce544d-merged.mount: Deactivated successfully.
Dec 05 10:12:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9bb79089afefc68d7285d4dfe1617551d284227e0d35a1d0e1c9df9b0e52d4cb-userdata-shm.mount: Deactivated successfully.
Dec 05 10:12:52 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2da5bd19d0\x2d46af\x2d4f26\x2dbc95\x2daf1d1d9d5f10.mount: Deactivated successfully.
Dec 05 10:12:52 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-62215d530f09c6840332537e06321b0311e797d59f8e37ab590df650794bb10e-merged.mount: Deactivated successfully.
Dec 05 10:12:52 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2de5e79495\x2d7110\x2d4a22\x2da68c\x2dca87ef48bb59.mount: Deactivated successfully.
Dec 05 10:12:52 np0005546420.localdomain ceph-mon[298353]: pgmap v421: 177 pgs: 177 active+clean; 194 MiB data, 957 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 48 KiB/s wr, 118 op/s
Dec 05 10:12:52 np0005546420.localdomain ceph-mon[298353]: osdmap e177: 6 total, 6 up, 6 in
Dec 05 10:12:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:52.573 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:52 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e178 e178: 6 total, 6 up, 6 in
Dec 05 10:12:53 np0005546420.localdomain ceph-mon[298353]: osdmap e178: 6 total, 6 up, 6 in
Dec 05 10:12:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "auth_id": "admin", "format": "json"}]: dispatch
Dec 05 10:12:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "format": "json"}]: dispatch
Dec 05 10:12:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "force": true, "format": "json"}]: dispatch
Dec 05 10:12:53 np0005546420.localdomain ceph-mon[298353]: pgmap v424: 177 pgs: 177 active+clean; 194 MiB data, 958 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 38 KiB/s wr, 51 op/s
Dec 05 10:12:54 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:54 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e179 e179: 6 total, 6 up, 6 in
Dec 05 10:12:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 10:12:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e180 e180: 6 total, 6 up, 6 in
Dec 05 10:12:55 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1c9dbba2-5de3-4dd4-833c-092810bc5276", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:12:55 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1c9dbba2-5de3-4dd4-833c-092810bc5276", "format": "json"}]: dispatch
Dec 05 10:12:55 np0005546420.localdomain ceph-mon[298353]: osdmap e179: 6 total, 6 up, 6 in
Dec 05 10:12:55 np0005546420.localdomain ceph-mon[298353]: pgmap v426: 177 pgs: 177 active+clean; 194 MiB data, 958 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 51 KiB/s wr, 68 op/s
Dec 05 10:12:55 np0005546420.localdomain ceph-mon[298353]: osdmap e180: 6 total, 6 up, 6 in
Dec 05 10:12:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:56.858 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:12:56.861 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:12:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:12:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:12:58 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e181 e181: 6 total, 6 up, 6 in
Dec 05 10:12:58 np0005546420.localdomain ceph-mon[298353]: pgmap v428: 177 pgs: 177 active+clean; 194 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 30 KiB/s wr, 70 op/s
Dec 05 10:12:58 np0005546420.localdomain podman[322134]: 2025-12-05 10:12:58.506783702 +0000 UTC m=+0.083096236 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:12:58 np0005546420.localdomain podman[322134]: 2025-12-05 10:12:58.523402895 +0000 UTC m=+0.099715429 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:12:58 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:12:58 np0005546420.localdomain podman[322135]: 2025-12-05 10:12:58.577377252 +0000 UTC m=+0.152242512 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 10:12:58 np0005546420.localdomain podman[322135]: 2025-12-05 10:12:58.586412341 +0000 UTC m=+0.161277591 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:12:58 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:12:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:12:59 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1265255253' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:12:59 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1265255253' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:12:59 np0005546420.localdomain ceph-mon[298353]: osdmap e181: 6 total, 6 up, 6 in
Dec 05 10:12:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:12:59 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1265255253' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:12:59 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1265255253' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:00 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:13:00 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "format": "json"}]: dispatch
Dec 05 10:13:00 np0005546420.localdomain ceph-mon[298353]: pgmap v430: 177 pgs: 177 active+clean; 194 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 27 KiB/s wr, 63 op/s
Dec 05 10:13:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e182 e182: 6 total, 6 up, 6 in
Dec 05 10:13:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:13:01 np0005546420.localdomain podman[322174]: 2025-12-05 10:13:01.493623381 +0000 UTC m=+0.066719091 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 10:13:01 np0005546420.localdomain podman[322174]: 2025-12-05 10:13:01.51141089 +0000 UTC m=+0.084506610 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:13:01 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:13:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:01.861 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:13:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:01.864 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:13:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:01.864 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:13:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:01.864 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:13:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:01.897 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:01.898 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:13:01 np0005546420.localdomain ceph-mon[298353]: osdmap e182: 6 total, 6 up, 6 in
Dec 05 10:13:01 np0005546420.localdomain ceph-mon[298353]: pgmap v432: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 39 KiB/s wr, 96 op/s
Dec 05 10:13:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:13:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3063930161' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:13:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3063930161' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3063930161' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3063930161' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:04.131 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:13:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:04.132 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:13:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:04.132 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:13:04 np0005546420.localdomain ceph-mon[298353]: pgmap v433: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 31 KiB/s wr, 81 op/s
Dec 05 10:13:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e183 e183: 6 total, 6 up, 6 in
Dec 05 10:13:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "snap_name": "c5eca2b7-83a5-4542-85a4-aa8cf97f1b78", "format": "json"}]: dispatch
Dec 05 10:13:06 np0005546420.localdomain ceph-mon[298353]: pgmap v434: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 8.9 KiB/s wr, 28 op/s
Dec 05 10:13:06 np0005546420.localdomain ceph-mon[298353]: osdmap e183: 6 total, 6 up, 6 in
Dec 05 10:13:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:06.899 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:13:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:06.900 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:13:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:06.900 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:13:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:06.900 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:13:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:06.920 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:06.921 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:13:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:06.930 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:07 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:07.582 262769 INFO neutron.agent.linux.ip_lib [None req-28dd31e1-435e-4f7b-a4ed-e30d0091222a - - - - - -] Device tap91cef627-6f cannot be used as it has no MAC address
Dec 05 10:13:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:07.609 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:07 np0005546420.localdomain kernel: device tap91cef627-6f entered promiscuous mode
Dec 05 10:13:07 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929587.6174] manager: (tap91cef627-6f): new Generic device (/org/freedesktop/NetworkManager/Devices/52)
Dec 05 10:13:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:07Z|00293|binding|INFO|Claiming lport 91cef627-6f36-4510-8f3e-94dfe8246919 for this chassis.
Dec 05 10:13:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:07Z|00294|binding|INFO|91cef627-6f36-4510-8f3e-94dfe8246919: Claiming unknown
Dec 05 10:13:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:07.622 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:07 np0005546420.localdomain systemd-udevd[322204]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:13:07 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap91cef627-6f: No such device
Dec 05 10:13:07 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap91cef627-6f: No such device
Dec 05 10:13:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:07Z|00295|binding|INFO|Setting lport 91cef627-6f36-4510-8f3e-94dfe8246919 ovn-installed in OVS
Dec 05 10:13:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:07.664 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:07 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap91cef627-6f: No such device
Dec 05 10:13:07 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap91cef627-6f: No such device
Dec 05 10:13:07 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap91cef627-6f: No such device
Dec 05 10:13:07 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap91cef627-6f: No such device
Dec 05 10:13:07 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap91cef627-6f: No such device
Dec 05 10:13:07 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap91cef627-6f: No such device
Dec 05 10:13:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:07.702 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:07Z|00296|binding|INFO|Setting lport 91cef627-6f36-4510-8f3e-94dfe8246919 up in Southbound
Dec 05 10:13:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:07.735 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-1181d73f-762c-444f-9858-21942bdf30bb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1181d73f-762c-444f-9858-21942bdf30bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9312be8c-d96b-4517-98bf-48bf949ebff8, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=91cef627-6f36-4510-8f3e-94dfe8246919) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:07.735 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:07.737 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 91cef627-6f36-4510-8f3e-94dfe8246919 in datapath 1181d73f-762c-444f-9858-21942bdf30bb bound to our chassis
Dec 05 10:13:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:07.739 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1181d73f-762c-444f-9858-21942bdf30bb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:13:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:07.745 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[8b270a59-c528-4565-8318-f719cb7e7920]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:13:08 np0005546420.localdomain ceph-mon[298353]: pgmap v436: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 29 op/s
Dec 05 10:13:08 np0005546420.localdomain sudo[322250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:13:08 np0005546420.localdomain sudo[322250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:13:08 np0005546420.localdomain sudo[322250]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:08 np0005546420.localdomain sudo[322268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:13:08 np0005546420.localdomain sudo[322268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:13:09 np0005546420.localdomain podman[322325]: 2025-12-05 10:13:09.14048356 +0000 UTC m=+0.072203789 container create 52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:13:09 np0005546420.localdomain systemd[1]: Started libpod-conmon-52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5.scope.
Dec 05 10:13:09 np0005546420.localdomain podman[322325]: 2025-12-05 10:13:09.097911216 +0000 UTC m=+0.029631475 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:13:09 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:13:09 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04720658ea2cbc860b0138311f75a42acc8e64fb6f23544a14d2d231acc91c6b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:09 np0005546420.localdomain podman[322325]: 2025-12-05 10:13:09.222672138 +0000 UTC m=+0.154392377 container init 52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 10:13:09 np0005546420.localdomain podman[322325]: 2025-12-05 10:13:09.23279594 +0000 UTC m=+0.164516169 container start 52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:13:09 np0005546420.localdomain dnsmasq[322348]: started, version 2.85 cachesize 150
Dec 05 10:13:09 np0005546420.localdomain dnsmasq[322348]: DNS service limited to local subnets
Dec 05 10:13:09 np0005546420.localdomain dnsmasq[322348]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:13:09 np0005546420.localdomain dnsmasq[322348]: warning: no upstream servers configured
Dec 05 10:13:09 np0005546420.localdomain dnsmasq-dhcp[322348]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:13:09 np0005546420.localdomain dnsmasq[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/addn_hosts - 0 addresses
Dec 05 10:13:09 np0005546420.localdomain dnsmasq-dhcp[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/host
Dec 05 10:13:09 np0005546420.localdomain dnsmasq-dhcp[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/opts
Dec 05 10:13:09 np0005546420.localdomain sudo[322268]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:09.412 262769 INFO neutron.agent.dhcp.agent [None req-bbe94e00-c341-4c27-b1eb-be5e437654ab - - - - - -] DHCP configuration for ports {'a197c55a-953c-4e00-a266-f3edd8ba390c'} is completed
Dec 05 10:13:09 np0005546420.localdomain sudo[322361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:13:09 np0005546420.localdomain sudo[322361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:13:09 np0005546420.localdomain sudo[322361]: pam_unix(sudo:session): session closed for user root
Dec 05 10:13:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:10 np0005546420.localdomain ceph-mon[298353]: pgmap v437: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 28 op/s
Dec 05 10:13:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:13:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:13:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:13:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:13:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:13:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:13:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:13:11 np0005546420.localdomain podman[322379]: 2025-12-05 10:13:11.575205814 +0000 UTC m=+0.142619034 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, release=1755695350, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:13:11 np0005546420.localdomain podman[322379]: 2025-12-05 10:13:11.59161113 +0000 UTC m=+0.159024410 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41)
Dec 05 10:13:11 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:13:11 np0005546420.localdomain podman[322380]: 2025-12-05 10:13:11.543420503 +0000 UTC m=+0.110414219 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:13:11 np0005546420.localdomain podman[322380]: 2025-12-05 10:13:11.674028885 +0000 UTC m=+0.241022541 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:13:11 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:13:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:11.922 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:12 np0005546420.localdomain ceph-mon[298353]: pgmap v438: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s rd, 11 KiB/s wr, 4 op/s
Dec 05 10:13:13 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:13.028 262769 INFO neutron.agent.linux.ip_lib [None req-f00e8355-8e9a-4408-b02b-965fde7c4b96 - - - - - -] Device tapd74fa9e0-f8 cannot be used as it has no MAC address
Dec 05 10:13:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:13.045 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:13 np0005546420.localdomain kernel: device tapd74fa9e0-f8 entered promiscuous mode
Dec 05 10:13:13 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929593.0560] manager: (tapd74fa9e0-f8): new Generic device (/org/freedesktop/NetworkManager/Devices/53)
Dec 05 10:13:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:13Z|00297|binding|INFO|Claiming lport d74fa9e0-f805-49ac-823b-75cbc5c5fe00 for this chassis.
Dec 05 10:13:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:13Z|00298|binding|INFO|d74fa9e0-f805-49ac-823b-75cbc5c5fe00: Claiming unknown
Dec 05 10:13:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:13.057 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:13 np0005546420.localdomain systemd-udevd[322434]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:13:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:13.077 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e067bcb3-9bbb-49fd-bbd5-2ac7e7ce6906, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=d74fa9e0-f805-49ac-823b-75cbc5c5fe00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:13.079 159503 INFO neutron.agent.ovn.metadata.agent [-] Port d74fa9e0-f805-49ac-823b-75cbc5c5fe00 in datapath 8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2 bound to our chassis
Dec 05 10:13:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:13.082 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:13:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:13.084 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[7b39b77d-32aa-4204-bf51-12451d8cd64f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:13:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd74fa9e0-f8: No such device
Dec 05 10:13:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:13Z|00299|binding|INFO|Setting lport d74fa9e0-f805-49ac-823b-75cbc5c5fe00 ovn-installed in OVS
Dec 05 10:13:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:13Z|00300|binding|INFO|Setting lport d74fa9e0-f805-49ac-823b-75cbc5c5fe00 up in Southbound
Dec 05 10:13:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:13.093 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd74fa9e0-f8: No such device
Dec 05 10:13:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd74fa9e0-f8: No such device
Dec 05 10:13:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd74fa9e0-f8: No such device
Dec 05 10:13:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd74fa9e0-f8: No such device
Dec 05 10:13:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd74fa9e0-f8: No such device
Dec 05 10:13:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd74fa9e0-f8: No such device
Dec 05 10:13:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd74fa9e0-f8: No such device
Dec 05 10:13:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:13.131 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:13.161 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:14 np0005546420.localdomain podman[322505]: 2025-12-05 10:13:14.035828798 +0000 UTC m=+0.033410043 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:13:14 np0005546420.localdomain podman[322505]: 2025-12-05 10:13:14.218992071 +0000 UTC m=+0.216573296 container create 9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:13:14 np0005546420.localdomain systemd[1]: Started libpod-conmon-9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e.scope.
Dec 05 10:13:14 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:13:14 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2483c80ab2e398411661cba4d35ba467b6e4e823de6b04dc47c73af4b0bf6da6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:14 np0005546420.localdomain podman[322505]: 2025-12-05 10:13:14.276383964 +0000 UTC m=+0.273965149 container init 9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:13:14 np0005546420.localdomain podman[322505]: 2025-12-05 10:13:14.289819698 +0000 UTC m=+0.287400923 container start 9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:13:14 np0005546420.localdomain dnsmasq[322523]: started, version 2.85 cachesize 150
Dec 05 10:13:14 np0005546420.localdomain dnsmasq[322523]: DNS service limited to local subnets
Dec 05 10:13:14 np0005546420.localdomain dnsmasq[322523]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:13:14 np0005546420.localdomain dnsmasq[322523]: warning: no upstream servers configured
Dec 05 10:13:14 np0005546420.localdomain dnsmasq-dhcp[322523]: DHCP, static leases only on 10.101.0.0, lease time 1d
Dec 05 10:13:14 np0005546420.localdomain dnsmasq[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/addn_hosts - 0 addresses
Dec 05 10:13:14 np0005546420.localdomain dnsmasq-dhcp[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/host
Dec 05 10:13:14 np0005546420.localdomain dnsmasq-dhcp[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/opts
Dec 05 10:13:14 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:14.520 262769 INFO neutron.agent.dhcp.agent [None req-403eefaa-2c9c-40a3-9c84-0a9eedd13586 - - - - - -] DHCP configuration for ports {'bb9a84bd-1a10-47de-abf4-c62dea4e2c41'} is completed
Dec 05 10:13:14 np0005546420.localdomain ceph-mon[298353]: pgmap v439: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s wr, 1 op/s
Dec 05 10:13:14 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:14.895 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:14Z, description=, device_id=873d8367-ebb3-4110-ad1e-ea201738792f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ef9b50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ef9700>], id=37497e9e-f12f-4c28-86e1-d1a2ed04c6c1, ip_allocation=immediate, mac_address=fa:16:3e:79:9e:d2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:58Z, description=, dns_domain=, id=1181d73f-762c-444f-9858-21942bdf30bb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-126299358, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54018, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2543, status=ACTIVE, subnets=['45f69743-30e1-4d77-960c-c5ad56ba5ba1'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:05Z, vlan_transparent=None, network_id=1181d73f-762c-444f-9858-21942bdf30bb, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2615, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:14Z on network 1181d73f-762c-444f-9858-21942bdf30bb
Dec 05 10:13:15 np0005546420.localdomain dnsmasq[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/addn_hosts - 1 addresses
Dec 05 10:13:15 np0005546420.localdomain dnsmasq-dhcp[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/host
Dec 05 10:13:15 np0005546420.localdomain podman[322541]: 2025-12-05 10:13:15.085300625 +0000 UTC m=+0.051094768 container kill 52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:13:15 np0005546420.localdomain dnsmasq-dhcp[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/opts
Dec 05 10:13:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:15.381 262769 INFO neutron.agent.dhcp.agent [None req-101a75c6-07ff-45ca-a2ed-68efeeef4b37 - - - - - -] DHCP configuration for ports {'37497e9e-f12f-4c28-86e1-d1a2ed04c6c1'} is completed
Dec 05 10:13:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "snap_name": "c5eca2b7-83a5-4542-85a4-aa8cf97f1b78_0261bf7d-9639-41c3-83b1-85aebad662f3", "force": true, "format": "json"}]: dispatch
Dec 05 10:13:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "snap_name": "c5eca2b7-83a5-4542-85a4-aa8cf97f1b78", "force": true, "format": "json"}]: dispatch
Dec 05 10:13:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:16.276 262769 INFO neutron.agent.linux.ip_lib [None req-26d5e915-2448-46d5-b4e2-f8a1c0d49255 - - - - - -] Device tap52df937b-4b cannot be used as it has no MAC address
Dec 05 10:13:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:16.305 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:16 np0005546420.localdomain kernel: device tap52df937b-4b entered promiscuous mode
Dec 05 10:13:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:16Z|00301|binding|INFO|Claiming lport 52df937b-4bf6-46bc-8fc6-59028035a754 for this chassis.
Dec 05 10:13:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:16Z|00302|binding|INFO|52df937b-4bf6-46bc-8fc6-59028035a754: Claiming unknown
Dec 05 10:13:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:16.312 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:16 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929596.3138] manager: (tap52df937b-4b): new Generic device (/org/freedesktop/NetworkManager/Devices/54)
Dec 05 10:13:16 np0005546420.localdomain systemd-udevd[322571]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:13:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:16.321 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-3cd71a96-eeeb-4267-afc4-bc58e221457e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cd71a96-eeeb-4267-afc4-bc58e221457e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66727bc4-90ea-4e37-b240-9e0ba26ad0bd, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=52df937b-4bf6-46bc-8fc6-59028035a754) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:16.323 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 52df937b-4bf6-46bc-8fc6-59028035a754 in datapath 3cd71a96-eeeb-4267-afc4-bc58e221457e bound to our chassis
Dec 05 10:13:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:16.325 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3cd71a96-eeeb-4267-afc4-bc58e221457e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:13:16 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:16.326 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4dc693-00fb-4f0f-8eb0-8e0e8e110f35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:13:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:16Z|00303|binding|INFO|Setting lport 52df937b-4bf6-46bc-8fc6-59028035a754 ovn-installed in OVS
Dec 05 10:13:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:16Z|00304|binding|INFO|Setting lport 52df937b-4bf6-46bc-8fc6-59028035a754 up in Southbound
Dec 05 10:13:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap52df937b-4b: No such device
Dec 05 10:13:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:16.346 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap52df937b-4b: No such device
Dec 05 10:13:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:16.349 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap52df937b-4b: No such device
Dec 05 10:13:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap52df937b-4b: No such device
Dec 05 10:13:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap52df937b-4b: No such device
Dec 05 10:13:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap52df937b-4b: No such device
Dec 05 10:13:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap52df937b-4b: No such device
Dec 05 10:13:16 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap52df937b-4b: No such device
Dec 05 10:13:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:16.391 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:16.419 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:16 np0005546420.localdomain ceph-mon[298353]: pgmap v440: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s wr, 1 op/s
Dec 05 10:13:16 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:13:16.687 2 INFO neutron.agent.securitygroups_rpc [None req-ecee759d-30fd-48d6-8d6a-f8b6c3142cb3 9d972c557bac460788722cbb72a5063b 1a3d7fc340f84c5699757971056327c6 - - default default] Security group member updated ['8a2cc3b9-107e-4895-9989-8e73163dac8e']
Dec 05 10:13:16 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:13:16.863 2 INFO neutron.agent.securitygroups_rpc [None req-ecee759d-30fd-48d6-8d6a-f8b6c3142cb3 9d972c557bac460788722cbb72a5063b 1a3d7fc340f84c5699757971056327c6 - - default default] Security group member updated ['8a2cc3b9-107e-4895-9989-8e73163dac8e']
Dec 05 10:13:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:16.926 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:16.964 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:14Z, description=, device_id=873d8367-ebb3-4110-ad1e-ea201738792f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0de3d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0dedc0>], id=37497e9e-f12f-4c28-86e1-d1a2ed04c6c1, ip_allocation=immediate, mac_address=fa:16:3e:79:9e:d2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:58Z, description=, dns_domain=, id=1181d73f-762c-444f-9858-21942bdf30bb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-126299358, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54018, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2543, status=ACTIVE, subnets=['45f69743-30e1-4d77-960c-c5ad56ba5ba1'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:05Z, vlan_transparent=None, network_id=1181d73f-762c-444f-9858-21942bdf30bb, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2615, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:14Z on network 1181d73f-762c-444f-9858-21942bdf30bb
Dec 05 10:13:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:13:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:13:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:13:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156739 "" "Go-http-client/1.1"
Dec 05 10:13:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:13:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19211 "" "Go-http-client/1.1"
Dec 05 10:13:17 np0005546420.localdomain podman[322642]: 2025-12-05 10:13:17.38174021 +0000 UTC m=+0.148307799 container create a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3cd71a96-eeeb-4267-afc4-bc58e221457e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:13:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:13:17 np0005546420.localdomain systemd[1]: Started libpod-conmon-a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d.scope.
Dec 05 10:13:17 np0005546420.localdomain systemd[1]: tmp-crun.JDNWjl.mount: Deactivated successfully.
Dec 05 10:13:17 np0005546420.localdomain podman[322642]: 2025-12-05 10:13:17.339325991 +0000 UTC m=+0.105893610 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:13:17 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:13:17 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/818936f7aebec3d79eafb5dbd34e0dcf78d0b3b11dfeee7b907aba02a358ccd1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:17 np0005546420.localdomain podman[322642]: 2025-12-05 10:13:17.473773722 +0000 UTC m=+0.240341321 container init a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3cd71a96-eeeb-4267-afc4-bc58e221457e, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:13:17 np0005546420.localdomain dnsmasq[322671]: started, version 2.85 cachesize 150
Dec 05 10:13:17 np0005546420.localdomain dnsmasq[322671]: DNS service limited to local subnets
Dec 05 10:13:17 np0005546420.localdomain dnsmasq[322671]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:13:17 np0005546420.localdomain dnsmasq[322671]: warning: no upstream servers configured
Dec 05 10:13:17 np0005546420.localdomain dnsmasq-dhcp[322671]: DHCPv6, static leases only on 2001:db8:2::, lease time 1d
Dec 05 10:13:17 np0005546420.localdomain dnsmasq[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/addn_hosts - 0 addresses
Dec 05 10:13:17 np0005546420.localdomain dnsmasq-dhcp[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/host
Dec 05 10:13:17 np0005546420.localdomain dnsmasq-dhcp[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/opts
Dec 05 10:13:17 np0005546420.localdomain podman[322655]: 2025-12-05 10:13:17.514637824 +0000 UTC m=+0.101013550 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 05 10:13:17 np0005546420.localdomain podman[322642]: 2025-12-05 10:13:17.53976583 +0000 UTC m=+0.306333429 container start a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3cd71a96-eeeb-4267-afc4-bc58e221457e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 10:13:17 np0005546420.localdomain podman[322655]: 2025-12-05 10:13:17.553335278 +0000 UTC m=+0.139711034 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:13:17 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:13:17 np0005546420.localdomain sshd[322686]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:13:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:18.133 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:18.133 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:13:18 np0005546420.localdomain systemd[1]: tmp-crun.JcmdKn.mount: Deactivated successfully.
Dec 05 10:13:18 np0005546420.localdomain ceph-mon[298353]: pgmap v441: 177 pgs: 177 active+clean; 195 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 624 B/s rd, 19 KiB/s wr, 4 op/s
Dec 05 10:13:18 np0005546420.localdomain sshd[322686]: Received disconnect from 24.232.50.5 port 33464:11: Bye Bye [preauth]
Dec 05 10:13:18 np0005546420.localdomain sshd[322686]: Disconnected from authenticating user root 24.232.50.5 port 33464 [preauth]
Dec 05 10:13:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:18.801 262769 INFO neutron.agent.dhcp.agent [None req-fe324820-65ea-42e7-9261-42189c4eb87b - - - - - -] DHCP configuration for ports {'1e6f2fc3-6963-4909-9d5b-6dc044c1db51'} is completed
Dec 05 10:13:18 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:18.813 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:16Z, description=, device_id=e99b4337-1c94-4de2-ad86-a1f72f48d701, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e4c550>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e4c460>], id=46a38c51-a4ad-4bd4-ad93-dce4c723141a, ip_allocation=immediate, mac_address=fa:16:3e:39:39:14, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:13Z, description=, dns_domain=, id=3cd71a96-eeeb-4267-afc4-bc58e221457e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-810781640, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3403, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2608, status=ACTIVE, subnets=['3eb462c5-efb7-4293-9803-f625b75971b6'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:15Z, vlan_transparent=None, network_id=3cd71a96-eeeb-4267-afc4-bc58e221457e, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2634, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:16Z on network 3cd71a96-eeeb-4267-afc4-bc58e221457e
Dec 05 10:13:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:13:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:13:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:13:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:13:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:13:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:13:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:13:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:13:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:13:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:13:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:18.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:18.874 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:13:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:18.874 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:13:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:18.896 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:13:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:18.896 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:18 np0005546420.localdomain dnsmasq[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/addn_hosts - 1 addresses
Dec 05 10:13:18 np0005546420.localdomain dnsmasq-dhcp[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/host
Dec 05 10:13:18 np0005546420.localdomain podman[322716]: 2025-12-05 10:13:18.973798669 +0000 UTC m=+0.060961422 container kill 52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:13:18 np0005546420.localdomain dnsmasq-dhcp[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/opts
Dec 05 10:13:19 np0005546420.localdomain dnsmasq[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/addn_hosts - 1 addresses
Dec 05 10:13:19 np0005546420.localdomain dnsmasq-dhcp[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/host
Dec 05 10:13:19 np0005546420.localdomain dnsmasq-dhcp[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/opts
Dec 05 10:13:19 np0005546420.localdomain podman[322730]: 2025-12-05 10:13:19.017746496 +0000 UTC m=+0.062462459 container kill a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3cd71a96-eeeb-4267-afc4-bc58e221457e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:13:19 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:19.257 262769 INFO neutron.agent.dhcp.agent [None req-f2f002f3-df90-4c92-8132-831360c93ed2 - - - - - -] DHCP configuration for ports {'37497e9e-f12f-4c28-86e1-d1a2ed04c6c1'} is completed
Dec 05 10:13:19 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:19.407 262769 INFO neutron.agent.dhcp.agent [None req-53720499-faa6-4586-874a-05fcc1459c95 - - - - - -] DHCP configuration for ports {'46a38c51-a4ad-4bd4-ad93-dce4c723141a'} is completed
Dec 05 10:13:19 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:13:19.555 2 INFO neutron.agent.securitygroups_rpc [None req-0d4ab53e-befa-4191-ae98-022935d1a9c9 9d972c557bac460788722cbb72a5063b 1a3d7fc340f84c5699757971056327c6 - - default default] Security group member updated ['8a2cc3b9-107e-4895-9989-8e73163dac8e']
Dec 05 10:13:19 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:19.595 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:19.891 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:13:19 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/82379447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:13:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:20.018 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:19Z, description=, device_id=873d8367-ebb3-4110-ad1e-ea201738792f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e42ca0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e42550>], id=059a6cd8-3136-4304-8db1-b80d5fc5f7d6, ip_allocation=immediate, mac_address=fa:16:3e:a5:02:ff, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:01Z, description=, dns_domain=, id=8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-300016059, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44508, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2567, status=ACTIVE, subnets=['fed18447-fb18-4e9c-9355-56502df950ae'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:09Z, vlan_transparent=None, network_id=8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2646, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:19Z on network 8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2
Dec 05 10:13:20 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:13:20.020 2 INFO neutron.agent.securitygroups_rpc [None req-b270bd1b-3b7d-41c8-aca6-410a9dbe297f 9d972c557bac460788722cbb72a5063b 1a3d7fc340f84c5699757971056327c6 - - default default] Security group member updated ['8a2cc3b9-107e-4895-9989-8e73163dac8e']
Dec 05 10:13:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:20.044 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:16Z, description=, device_id=e99b4337-1c94-4de2-ad86-a1f72f48d701, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ef9ca0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ef9550>], id=46a38c51-a4ad-4bd4-ad93-dce4c723141a, ip_allocation=immediate, mac_address=fa:16:3e:39:39:14, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:13Z, description=, dns_domain=, id=3cd71a96-eeeb-4267-afc4-bc58e221457e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-810781640, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3403, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2608, status=ACTIVE, subnets=['3eb462c5-efb7-4293-9803-f625b75971b6'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:15Z, vlan_transparent=None, network_id=3cd71a96-eeeb-4267-afc4-bc58e221457e, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2634, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:16Z on network 3cd71a96-eeeb-4267-afc4-bc58e221457e
Dec 05 10:13:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:20 np0005546420.localdomain systemd[1]: tmp-crun.qT1z7j.mount: Deactivated successfully.
Dec 05 10:13:20 np0005546420.localdomain podman[322795]: 2025-12-05 10:13:20.263556097 +0000 UTC m=+0.075426660 container kill 9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:13:20 np0005546420.localdomain dnsmasq[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/addn_hosts - 1 addresses
Dec 05 10:13:20 np0005546420.localdomain dnsmasq-dhcp[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/host
Dec 05 10:13:20 np0005546420.localdomain dnsmasq-dhcp[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/opts
Dec 05 10:13:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:13:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:20.284 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:20 np0005546420.localdomain systemd[1]: tmp-crun.AYxSTo.mount: Deactivated successfully.
Dec 05 10:13:20 np0005546420.localdomain podman[322808]: 2025-12-05 10:13:20.343543465 +0000 UTC m=+0.091878516 container kill a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3cd71a96-eeeb-4267-afc4-bc58e221457e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:13:20 np0005546420.localdomain dnsmasq[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/addn_hosts - 1 addresses
Dec 05 10:13:20 np0005546420.localdomain dnsmasq-dhcp[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/host
Dec 05 10:13:20 np0005546420.localdomain dnsmasq-dhcp[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/opts
Dec 05 10:13:20 np0005546420.localdomain podman[322824]: 2025-12-05 10:13:20.393625742 +0000 UTC m=+0.099973477 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Dec 05 10:13:20 np0005546420.localdomain podman[322824]: 2025-12-05 10:13:20.405387884 +0000 UTC m=+0.111735629 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 10:13:20 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:13:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "format": "json"}]: dispatch
Dec 05 10:13:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "force": true, "format": "json"}]: dispatch
Dec 05 10:13:20 np0005546420.localdomain ceph-mon[298353]: pgmap v442: 177 pgs: 177 active+clean; 195 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 9.4 KiB/s wr, 3 op/s
Dec 05 10:13:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:13:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/82379447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:13:20 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:20.913 262769 INFO neutron.agent.dhcp.agent [None req-05b610ab-e588-4e92-a5c4-9e7635dec7b1 - - - - - -] DHCP configuration for ports {'059a6cd8-3136-4304-8db1-b80d5fc5f7d6'} is completed
Dec 05 10:13:21 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:21.056 262769 INFO neutron.agent.dhcp.agent [None req-0b311522-cca0-4f41-b2d1-95436b460d51 - - - - - -] DHCP configuration for ports {'46a38c51-a4ad-4bd4-ad93-dce4c723141a'} is completed
Dec 05 10:13:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:13:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "format": "json"}]: dispatch
Dec 05 10:13:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2587116746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:13:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e184 e184: 6 total, 6 up, 6 in
Dec 05 10:13:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:21.929 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:22 np0005546420.localdomain ceph-mon[298353]: pgmap v443: 177 pgs: 177 active+clean; 195 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 19 KiB/s wr, 6 op/s
Dec 05 10:13:22 np0005546420.localdomain ceph-mon[298353]: osdmap e184: 6 total, 6 up, 6 in
Dec 05 10:13:23 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1c9dbba2-5de3-4dd4-833c-092810bc5276", "format": "json"}]: dispatch
Dec 05 10:13:23 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1c9dbba2-5de3-4dd4-833c-092810bc5276", "force": true, "format": "json"}]: dispatch
Dec 05 10:13:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:23.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:13:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/621558643' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:13:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/621558643' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:24 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:24.396 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:19Z, description=, device_id=873d8367-ebb3-4110-ad1e-ea201738792f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e324f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e6dd60>], id=059a6cd8-3136-4304-8db1-b80d5fc5f7d6, ip_allocation=immediate, mac_address=fa:16:3e:a5:02:ff, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:01Z, description=, dns_domain=, id=8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-300016059, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44508, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2567, status=ACTIVE, subnets=['fed18447-fb18-4e9c-9355-56502df950ae'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:09Z, vlan_transparent=None, network_id=8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2646, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:19Z on network 8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2
Dec 05 10:13:24 np0005546420.localdomain dnsmasq[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/addn_hosts - 1 addresses
Dec 05 10:13:24 np0005546420.localdomain dnsmasq-dhcp[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/host
Dec 05 10:13:24 np0005546420.localdomain dnsmasq-dhcp[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/opts
Dec 05 10:13:24 np0005546420.localdomain podman[322879]: 2025-12-05 10:13:24.614146126 +0000 UTC m=+0.055912157 container kill 9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:13:24 np0005546420.localdomain ceph-mon[298353]: pgmap v445: 177 pgs: 177 active+clean; 195 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 22 KiB/s wr, 8 op/s
Dec 05 10:13:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/621558643' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/621558643' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2870778165' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2870778165' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:24.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:25.029 262769 INFO neutron.agent.dhcp.agent [None req-f91484a7-9411-41fc-89b2-27ba81206853 - - - - - -] DHCP configuration for ports {'059a6cd8-3136-4304-8db1-b80d5fc5f7d6'} is completed
Dec 05 10:13:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:25 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "snap_name": "49ab100e-1829-4d48-9ef2-7980a4e6fb8c", "format": "json"}]: dispatch
Dec 05 10:13:25 np0005546420.localdomain ceph-mon[298353]: pgmap v446: 177 pgs: 177 active+clean; 195 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 22 KiB/s wr, 8 op/s
Dec 05 10:13:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/562691739' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/562691739' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:25.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:25.887 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:25.888 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:25 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:25.915 262769 INFO neutron.agent.linux.ip_lib [None req-dc694066-941f-4199-959a-a60438e05994 - - - - - -] Device tapb539bbdd-3c cannot be used as it has no MAC address
Dec 05 10:13:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:25.988 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:25 np0005546420.localdomain kernel: device tapb539bbdd-3c entered promiscuous mode
Dec 05 10:13:25 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929605.9955] manager: (tapb539bbdd-3c): new Generic device (/org/freedesktop/NetworkManager/Devices/55)
Dec 05 10:13:25 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:25Z|00305|binding|INFO|Claiming lport b539bbdd-3c45-4b9f-aa73-3b5046888b33 for this chassis.
Dec 05 10:13:25 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:25Z|00306|binding|INFO|b539bbdd-3c45-4b9f-aa73-3b5046888b33: Claiming unknown
Dec 05 10:13:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:25.996 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:26 np0005546420.localdomain systemd-udevd[322911]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:13:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:26.009 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-2714780e-a5cc-453b-a2bc-14c5e0c23c32', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2714780e-a5cc-453b-a2bc-14c5e0c23c32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fba3f70f-5dc8-43e7-973d-a4f9864222ad, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=b539bbdd-3c45-4b9f-aa73-3b5046888b33) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:26.012 159503 INFO neutron.agent.ovn.metadata.agent [-] Port b539bbdd-3c45-4b9f-aa73-3b5046888b33 in datapath 2714780e-a5cc-453b-a2bc-14c5e0c23c32 bound to our chassis
Dec 05 10:13:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:26.014 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2714780e-a5cc-453b-a2bc-14c5e0c23c32 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:13:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:26.015 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[07af0d3a-4c05-4d41-b461-b11333e9dcb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:13:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:26.017 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:26Z|00307|binding|INFO|Setting lport b539bbdd-3c45-4b9f-aa73-3b5046888b33 ovn-installed in OVS
Dec 05 10:13:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:26Z|00308|binding|INFO|Setting lport b539bbdd-3c45-4b9f-aa73-3b5046888b33 up in Southbound
Dec 05 10:13:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:26.032 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:26.064 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:26.103 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:26 np0005546420.localdomain dnsmasq[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/addn_hosts - 0 addresses
Dec 05 10:13:26 np0005546420.localdomain dnsmasq-dhcp[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/host
Dec 05 10:13:26 np0005546420.localdomain dnsmasq-dhcp[322523]: read /var/lib/neutron/dhcp/8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2/opts
Dec 05 10:13:26 np0005546420.localdomain systemd[1]: tmp-crun.TogUrN.mount: Deactivated successfully.
Dec 05 10:13:26 np0005546420.localdomain podman[322941]: 2025-12-05 10:13:26.244529348 +0000 UTC m=+0.062347445 container kill 9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:13:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:26.477 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:26Z|00309|binding|INFO|Releasing lport d74fa9e0-f805-49ac-823b-75cbc5c5fe00 from this chassis (sb_readonly=0)
Dec 05 10:13:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:26Z|00310|binding|INFO|Setting lport d74fa9e0-f805-49ac-823b-75cbc5c5fe00 down in Southbound
Dec 05 10:13:26 np0005546420.localdomain kernel: device tapd74fa9e0-f8 left promiscuous mode
Dec 05 10:13:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:26.498 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e067bcb3-9bbb-49fd-bbd5-2ac7e7ce6906, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=d74fa9e0-f805-49ac-823b-75cbc5c5fe00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:26.503 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:26.503 159503 INFO neutron.agent.ovn.metadata.agent [-] Port d74fa9e0-f805-49ac-823b-75cbc5c5fe00 in datapath 8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2 unbound from our chassis
Dec 05 10:13:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:26.505 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:13:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:26.506 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[d267b255-4544-4494-8ceb-890cefded8da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:13:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3267671454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3267671454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:26.930 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:26 np0005546420.localdomain podman[323005]: 2025-12-05 10:13:26.934496858 +0000 UTC m=+0.069699282 container create 79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2714780e-a5cc-453b-a2bc-14c5e0c23c32, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:13:26 np0005546420.localdomain systemd[1]: Started libpod-conmon-79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a.scope.
Dec 05 10:13:26 np0005546420.localdomain podman[323005]: 2025-12-05 10:13:26.893666489 +0000 UTC m=+0.028868903 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:13:26 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:13:27 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3bb1bf920290864292b67d6f59413908467099deead744846ae103f1b50c206/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:27 np0005546420.localdomain podman[323005]: 2025-12-05 10:13:27.012911549 +0000 UTC m=+0.148113943 container init 79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2714780e-a5cc-453b-a2bc-14c5e0c23c32, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:13:27 np0005546420.localdomain podman[323005]: 2025-12-05 10:13:27.020268687 +0000 UTC m=+0.155471081 container start 79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2714780e-a5cc-453b-a2bc-14c5e0c23c32, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:13:27 np0005546420.localdomain dnsmasq[323023]: started, version 2.85 cachesize 150
Dec 05 10:13:27 np0005546420.localdomain dnsmasq[323023]: DNS service limited to local subnets
Dec 05 10:13:27 np0005546420.localdomain dnsmasq[323023]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:13:27 np0005546420.localdomain dnsmasq[323023]: warning: no upstream servers configured
Dec 05 10:13:27 np0005546420.localdomain dnsmasq-dhcp[323023]: DHCPv6, static leases only on 2001:db8:3::, lease time 1d
Dec 05 10:13:27 np0005546420.localdomain dnsmasq[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/addn_hosts - 0 addresses
Dec 05 10:13:27 np0005546420.localdomain dnsmasq-dhcp[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/host
Dec 05 10:13:27 np0005546420.localdomain dnsmasq-dhcp[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/opts
Dec 05 10:13:27 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:27.068 262769 INFO neutron.agent.dhcp.agent [None req-dc694066-941f-4199-959a-a60438e05994 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:25Z, description=, device_id=e99b4337-1c94-4de2-ad86-a1f72f48d701, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99daa160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99daa310>], id=98a5e3ff-76d2-481a-8d45-cfeff7fd12a0, ip_allocation=immediate, mac_address=fa:16:3e:a1:52:9a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:20Z, description=, dns_domain=, id=2714780e-a5cc-453b-a2bc-14c5e0c23c32, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1930802838, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44194, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2649, status=ACTIVE, subnets=['360a4921-2775-46f4-9b5f-bb8a741d4d95'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:24Z, vlan_transparent=None, network_id=2714780e-a5cc-453b-a2bc-14c5e0c23c32, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2663, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:25Z on network 2714780e-a5cc-453b-a2bc-14c5e0c23c32
Dec 05 10:13:27 np0005546420.localdomain dnsmasq[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/addn_hosts - 1 addresses
Dec 05 10:13:27 np0005546420.localdomain dnsmasq-dhcp[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/host
Dec 05 10:13:27 np0005546420.localdomain dnsmasq-dhcp[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/opts
Dec 05 10:13:27 np0005546420.localdomain podman[323042]: 2025-12-05 10:13:27.255618772 +0000 UTC m=+0.062137019 container kill 79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2714780e-a5cc-453b-a2bc-14c5e0c23c32, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:13:27 np0005546420.localdomain ceph-mon[298353]: pgmap v447: 177 pgs: 177 active+clean; 195 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 29 KiB/s wr, 69 op/s
Dec 05 10:13:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:28.299 262769 INFO neutron.agent.dhcp.agent [None req-f8ecfe15-83e5-4f5e-b2e7-9d6f9cdb9e27 - - - - - -] DHCP configuration for ports {'d121c058-3121-4512-a031-251b375031ca'} is completed
Dec 05 10:13:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:28.405 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:25Z, description=, device_id=e99b4337-1c94-4de2-ad86-a1f72f48d701, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ed4100>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ed4130>], id=98a5e3ff-76d2-481a-8d45-cfeff7fd12a0, ip_allocation=immediate, mac_address=fa:16:3e:a1:52:9a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:20Z, description=, dns_domain=, id=2714780e-a5cc-453b-a2bc-14c5e0c23c32, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1930802838, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44194, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2649, status=ACTIVE, subnets=['360a4921-2775-46f4-9b5f-bb8a741d4d95'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:24Z, vlan_transparent=None, network_id=2714780e-a5cc-453b-a2bc-14c5e0c23c32, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2663, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:25Z on network 2714780e-a5cc-453b-a2bc-14c5e0c23c32
Dec 05 10:13:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:28.461 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:28.463 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:28.465 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:13:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:28.468 262769 INFO neutron.agent.dhcp.agent [None req-113189fb-96f4-401d-8014-fd443ae2d67d - - - - - -] DHCP configuration for ports {'98a5e3ff-76d2-481a-8d45-cfeff7fd12a0'} is completed
Dec 05 10:13:28 np0005546420.localdomain systemd[1]: tmp-crun.0WCmKU.mount: Deactivated successfully.
Dec 05 10:13:28 np0005546420.localdomain podman[323082]: 2025-12-05 10:13:28.520026536 +0000 UTC m=+0.069862938 container kill 52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:13:28 np0005546420.localdomain dnsmasq[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/addn_hosts - 0 addresses
Dec 05 10:13:28 np0005546420.localdomain dnsmasq-dhcp[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/host
Dec 05 10:13:28 np0005546420.localdomain dnsmasq-dhcp[322348]: read /var/lib/neutron/dhcp/1181d73f-762c-444f-9858-21942bdf30bb/opts
Dec 05 10:13:28 np0005546420.localdomain dnsmasq[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/addn_hosts - 1 addresses
Dec 05 10:13:28 np0005546420.localdomain dnsmasq-dhcp[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/host
Dec 05 10:13:28 np0005546420.localdomain podman[323107]: 2025-12-05 10:13:28.616190875 +0000 UTC m=+0.057741473 container kill 79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2714780e-a5cc-453b-a2bc-14c5e0c23c32, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 05 10:13:28 np0005546420.localdomain dnsmasq-dhcp[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/opts
Dec 05 10:13:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:13:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:13:28 np0005546420.localdomain podman[323126]: 2025-12-05 10:13:28.719190535 +0000 UTC m=+0.072621593 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:13:28 np0005546420.localdomain podman[323126]: 2025-12-05 10:13:28.725617343 +0000 UTC m=+0.079048471 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:13:28 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:13:28 np0005546420.localdomain podman[323127]: 2025-12-05 10:13:28.772582173 +0000 UTC m=+0.122170012 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:13:28 np0005546420.localdomain podman[323127]: 2025-12-05 10:13:28.804430727 +0000 UTC m=+0.154018636 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:13:28 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:13:28 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "snap_name": "49ab100e-1829-4d48-9ef2-7980a4e6fb8c_2141957f-76e9-4664-9cf3-8ecf8547c27d", "force": true, "format": "json"}]: dispatch
Dec 05 10:13:29 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:29.175 262769 INFO neutron.agent.dhcp.agent [None req-b2bd6c34-08e0-478d-a284-58063395b6b4 - - - - - -] DHCP configuration for ports {'98a5e3ff-76d2-481a-8d45-cfeff7fd12a0'} is completed
Dec 05 10:13:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:29.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:13:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "snap_name": "49ab100e-1829-4d48-9ef2-7980a4e6fb8c", "force": true, "format": "json"}]: dispatch
Dec 05 10:13:29 np0005546420.localdomain ceph-mon[298353]: pgmap v448: 177 pgs: 177 active+clean; 195 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 29 KiB/s wr, 69 op/s
Dec 05 10:13:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/252827312' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:13:30 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:30Z|00311|binding|INFO|Releasing lport 91cef627-6f36-4510-8f3e-94dfe8246919 from this chassis (sb_readonly=0)
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.040 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:30 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:30Z|00312|binding|INFO|Setting lport 91cef627-6f36-4510-8f3e-94dfe8246919 down in Southbound
Dec 05 10:13:30 np0005546420.localdomain kernel: device tap91cef627-6f left promiscuous mode
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.050 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:13:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:30.051 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-1181d73f-762c-444f-9858-21942bdf30bb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1181d73f-762c-444f-9858-21942bdf30bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9312be8c-d96b-4517-98bf-48bf949ebff8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=91cef627-6f36-4510-8f3e-94dfe8246919) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.052 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.053 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.053 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:13:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:30.053 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 91cef627-6f36-4510-8f3e-94dfe8246919 in datapath 1181d73f-762c-444f-9858-21942bdf30bb unbound from our chassis
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.054 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:13:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:30.056 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1181d73f-762c-444f-9858-21942bdf30bb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:13:30 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:30.057 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[14a9ce47-fbec-49b8-a46c-51193f9fe1b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.073 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:30 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3191798703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3191798703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/456821070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.510 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.707 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.709 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11572MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.709 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.710 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.792 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.793 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:13:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:30.814 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e185 e185: 6 total, 6 up, 6 in
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:30.905615) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929610905648, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1591, "num_deletes": 259, "total_data_size": 2005179, "memory_usage": 2042384, "flush_reason": "Manual Compaction"}
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929610913222, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1049338, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28071, "largest_seqno": 29656, "table_properties": {"data_size": 1043570, "index_size": 2919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15643, "raw_average_key_size": 22, "raw_value_size": 1030874, "raw_average_value_size": 1466, "num_data_blocks": 126, "num_entries": 703, "num_filter_entries": 703, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929537, "oldest_key_time": 1764929537, "file_creation_time": 1764929610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 7635 microseconds, and 2058 cpu microseconds.
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:30.913251) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1049338 bytes OK
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:30.913269) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:30.914994) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:30.915006) EVENT_LOG_v1 {"time_micros": 1764929610915002, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:30.915019) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1997431, prev total WAL file size 1997431, number of live WAL files 2.
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:30.915826) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303037' seq:72057594037927935, type:22 .. '6D6772737461740034323538' seq:0, type:0; will stop at (end)
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1024KB)], [48(16MB)]
Dec 05 10:13:30 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929610915906, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 18812668, "oldest_snapshot_seqno": -1}
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 12948 keys, 16958545 bytes, temperature: kUnknown
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929611015459, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 16958545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16886269, "index_size": 38831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32389, "raw_key_size": 346707, "raw_average_key_size": 26, "raw_value_size": 16667404, "raw_average_value_size": 1287, "num_data_blocks": 1457, "num_entries": 12948, "num_filter_entries": 12948, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:31.015790) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 16958545 bytes
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:31.017340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.8 rd, 170.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 16.9 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(34.1) write-amplify(16.2) OK, records in: 13452, records dropped: 504 output_compression: NoCompression
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:31.017367) EVENT_LOG_v1 {"time_micros": 1764929611017355, "job": 28, "event": "compaction_finished", "compaction_time_micros": 99641, "compaction_time_cpu_micros": 51315, "output_level": 6, "num_output_files": 1, "total_output_size": 16958545, "num_input_records": 13452, "num_output_records": 12948, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
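The figures in the two lines above cross-check: 18812668 input bytes over the 99641 us of compaction_time_micros is ~188.8 MB/s read and 16958545 output bytes is ~170.2 MB/s write; read-write-amplify(34.1) is (1.0 + 16.9 + 16.2) / 1.0 MB of L0 input, and records dropped is 13452 - 12948 = 504. A recomputation sketch from the EVENT_LOG_v1 fields:

    import json

    # Recompute JOB 28's throughput from the EVENT_LOG_v1 payloads above.
    started = json.loads('{"job": 28, "input_data_size": 18812668}')
    finished = json.loads('{"compaction_time_micros": 99641, "total_output_size": 16958545}')
    secs = finished["compaction_time_micros"] / 1e6
    print(round(started["input_data_size"] / secs / 1e6, 1))     # 188.8 rd
    print(round(finished["total_output_size"] / secs / 1e6, 1))  # 170.2 wr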
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929611017642, "job": 28, "event": "table_file_deletion", "file_number": 50}
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929611020369, "job": 28, "event": "table_file_deletion", "file_number": 48}
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:30.915663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:31.020421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:31.020429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:31.020432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:31.020435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:13:31.020438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3191798703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3191798703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/456821070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3512323628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: osdmap e185: 6 total, 6 up, 6 in
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:13:31 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1572928006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:13:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:31.254 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
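The resource tracker's storage probe is the plain ceph CLI call quoted in this line, and it can be replayed by hand with the same arguments. A sketch (the subprocess call is illustrative; nova itself goes through oslo_concurrency.processutils as logged, and the JSON key layout follows `ceph df --format=json`, which may vary by release):

    import json, subprocess

    # Same command nova_compute logged above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True)
    stats = json.loads(out.stdout)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])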
Dec 05 10:13:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:31.262 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:13:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:31.294 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
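Placement's usual capacity formula, (total - reserved) * allocation_ratio, turns the inventory above into 128 schedulable VCPUs, 15226 MB of RAM and 40 GB of disk:

    # Schedulable capacity implied by the inventory dict logged above.
    inv = {"VCPU": (8, 0, 16.0), "MEMORY_MB": (15738, 512, 1.0), "DISK_GB": (41, 1, 1.0)}
    for rc, (total, reserved, ratio) in inv.items():
        print(rc, (total - reserved) * ratio)  # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0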
Dec 05 10:13:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:31.297 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:13:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:31.298 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:13:31 np0005546420.localdomain dnsmasq[322523]: exiting on receipt of SIGTERM
Dec 05 10:13:31 np0005546420.localdomain podman[323240]: 2025-12-05 10:13:31.658161205 +0000 UTC m=+0.058987722 container kill 9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:13:31 np0005546420.localdomain systemd[1]: libpod-9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e.scope: Deactivated successfully.
Dec 05 10:13:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:13:31 np0005546420.localdomain podman[323253]: 2025-12-05 10:13:31.730188419 +0000 UTC m=+0.056948659 container died 9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:13:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e-userdata-shm.mount: Deactivated successfully.
Dec 05 10:13:31 np0005546420.localdomain podman[323253]: 2025-12-05 10:13:31.764853449 +0000 UTC m=+0.091613619 container cleanup 9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:13:31 np0005546420.localdomain systemd[1]: libpod-conmon-9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e.scope: Deactivated successfully.
Dec 05 10:13:31 np0005546420.localdomain podman[323256]: 2025-12-05 10:13:31.802336616 +0000 UTC m=+0.119536021 container remove 9f5e4b324964378bb0af90f213aaeacacf7781b0ab8ed70b2870aa720b04595e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8bf6791b-ab44-45b4-884c-b4bf3bbfb2c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 05 10:13:31 np0005546420.localdomain podman[323255]: 2025-12-05 10:13:31.861666088 +0000 UTC m=+0.178490522 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 10:13:31 np0005546420.localdomain podman[323255]: 2025-12-05 10:13:31.873321137 +0000 UTC m=+0.190145591 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:13:31 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:13:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:31.958 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:32 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:32.051 262769 INFO neutron.agent.dhcp.agent [None req-3a02a1fd-47a0-45e7-8a7a-e7837691b1a1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1572928006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:13:32 np0005546420.localdomain ceph-mon[298353]: pgmap v450: 177 pgs: 177 active+clean; 195 MiB data, 964 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 31 KiB/s wr, 90 op/s
Dec 05 10:13:32 np0005546420.localdomain dnsmasq[322348]: exiting on receipt of SIGTERM
Dec 05 10:13:32 np0005546420.localdomain podman[323315]: 2025-12-05 10:13:32.085312822 +0000 UTC m=+0.064194113 container kill 52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:13:32 np0005546420.localdomain systemd[1]: libpod-52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5.scope: Deactivated successfully.
Dec 05 10:13:32 np0005546420.localdomain podman[323330]: 2025-12-05 10:13:32.166808518 +0000 UTC m=+0.062251793 container died 52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:13:32 np0005546420.localdomain podman[323330]: 2025-12-05 10:13:32.198449085 +0000 UTC m=+0.093892340 container cleanup 52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:13:32 np0005546420.localdomain systemd[1]: libpod-conmon-52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5.scope: Deactivated successfully.
Dec 05 10:13:32 np0005546420.localdomain podman[323331]: 2025-12-05 10:13:32.241928197 +0000 UTC m=+0.128329323 container remove 52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:13:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-2483c80ab2e398411661cba4d35ba467b6e4e823de6b04dc47c73af4b0bf6da6-merged.mount: Deactivated successfully.
Dec 05 10:13:32 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d8bf6791b\x2dab44\x2d45b4\x2d884c\x2db4bf3bbfb2c2.mount: Deactivated successfully.
Dec 05 10:13:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-04720658ea2cbc860b0138311f75a42acc8e64fb6f23544a14d2d231acc91c6b-merged.mount: Deactivated successfully.
Dec 05 10:13:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-52ffc8f92872945c2efbb5998fe9e2cc3e8d24536858c89f0e7490d527c4e7d5-userdata-shm.mount: Deactivated successfully.
Dec 05 10:13:32 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:32.929 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:32 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:32.964 262769 INFO neutron.agent.dhcp.agent [None req-e94a474c-8845-4658-9cf3-0a1524a3df95 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:32 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d1181d73f\x2d762c\x2d444f\x2d9858\x2d21942bdf30bb.mount: Deactivated successfully.
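The \x2d runs in these mount and netns unit names are systemd's escaping of '-'; `systemd-escape --unescape` undoes it, or in Python:

    import codecs

    # systemd escapes '-' in unit names as \x2d; unicode_escape reverses it.
    unit = r"run-netns-qdhcp\x2d1181d73f\x2d762c\x2d444f\x2d9858\x2d21942bdf30bb.mount"
    print(codecs.decode(unit, "unicode_escape"))
    # run-netns-qdhcp-1181d73f-762c-444f-9858-21942bdf30bb.mount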
Dec 05 10:13:33 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "format": "json"}]: dispatch
Dec 05 10:13:33 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "force": true, "format": "json"}]: dispatch
Dec 05 10:13:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:33.342 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:34 np0005546420.localdomain ceph-mon[298353]: pgmap v451: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 29 KiB/s wr, 85 op/s
Dec 05 10:13:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:36 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:36.011 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:36 np0005546420.localdomain ceph-mon[298353]: pgmap v452: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 29 KiB/s wr, 85 op/s
Dec 05 10:13:36 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:36.467 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:13:36 np0005546420.localdomain sshd[323360]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:13:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e186 e186: 6 total, 6 up, 6 in
Dec 05 10:13:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:36.989 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:37 np0005546420.localdomain dnsmasq[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/addn_hosts - 0 addresses
Dec 05 10:13:37 np0005546420.localdomain dnsmasq-dhcp[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/host
Dec 05 10:13:37 np0005546420.localdomain dnsmasq-dhcp[323023]: read /var/lib/neutron/dhcp/2714780e-a5cc-453b-a2bc-14c5e0c23c32/opts
Dec 05 10:13:37 np0005546420.localdomain podman[323378]: 2025-12-05 10:13:37.861512322 +0000 UTC m=+0.068111674 container kill 79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2714780e-a5cc-453b-a2bc-14c5e0c23c32, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:13:37 np0005546420.localdomain ceph-mon[298353]: osdmap e186: 6 total, 6 up, 6 in
Dec 05 10:13:37 np0005546420.localdomain ceph-mon[298353]: pgmap v454: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 32 KiB/s wr, 29 op/s
Dec 05 10:13:38 np0005546420.localdomain kernel: device tapb539bbdd-3c left promiscuous mode
Dec 05 10:13:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:38.104 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:38 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:38Z|00313|binding|INFO|Releasing lport b539bbdd-3c45-4b9f-aa73-3b5046888b33 from this chassis (sb_readonly=0)
Dec 05 10:13:38 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:38Z|00314|binding|INFO|Setting lport b539bbdd-3c45-4b9f-aa73-3b5046888b33 down in Southbound
Dec 05 10:13:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:38.121 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:38 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:38.126 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-2714780e-a5cc-453b-a2bc-14c5e0c23c32', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2714780e-a5cc-453b-a2bc-14c5e0c23c32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fba3f70f-5dc8-43e7-973d-a4f9864222ad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=b539bbdd-3c45-4b9f-aa73-3b5046888b33) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:38 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:38.128 159503 INFO neutron.agent.ovn.metadata.agent [-] Port b539bbdd-3c45-4b9f-aa73-3b5046888b33 in datapath 2714780e-a5cc-453b-a2bc-14c5e0c23c32 unbound from our chassis
Dec 05 10:13:38 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:38.129 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2714780e-a5cc-453b-a2bc-14c5e0c23c32 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:13:38 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:38.131 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[2d045219-db23-48f9-9d69-559b5b5cfd42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
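The unbind sequence above is ovsdbapp's row-event machinery: the agent registered PortBindingUpdatedEvent for 'update' on Port_Binding, and the old=Port_Binding(up=[True], chassis=[...]) diff is what it matched against. A minimal sketch of the pattern (class name and exact signatures here are illustrative; ovsdbapp details vary by release):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):
        """Illustrative: fire when a Port_Binding row loses its chassis."""
        def __init__(self):
            # (events, table, conditions), as in the event repr logged above.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            if getattr(old, "chassis", None) and not row.chassis:
                print("port %s unbound from this chassis" % row.logical_port)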
Dec 05 10:13:38 np0005546420.localdomain dnsmasq[323023]: exiting on receipt of SIGTERM
Dec 05 10:13:38 np0005546420.localdomain podman[323417]: 2025-12-05 10:13:38.514303265 +0000 UTC m=+0.046006362 container kill 79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2714780e-a5cc-453b-a2bc-14c5e0c23c32, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:13:38 np0005546420.localdomain systemd[1]: libpod-79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a.scope: Deactivated successfully.
Dec 05 10:13:38 np0005546420.localdomain podman[323431]: 2025-12-05 10:13:38.573620456 +0000 UTC m=+0.048761657 container died 79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2714780e-a5cc-453b-a2bc-14c5e0c23c32, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:13:38 np0005546420.localdomain podman[323431]: 2025-12-05 10:13:38.604079346 +0000 UTC m=+0.079220487 container cleanup 79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2714780e-a5cc-453b-a2bc-14c5e0c23c32, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:13:38 np0005546420.localdomain systemd[1]: libpod-conmon-79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a.scope: Deactivated successfully.
Dec 05 10:13:38 np0005546420.localdomain podman[323433]: 2025-12-05 10:13:38.647798905 +0000 UTC m=+0.115338251 container remove 79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2714780e-a5cc-453b-a2bc-14c5e0c23c32, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:13:38 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:38.721 262769 INFO neutron.agent.dhcp.agent [None req-c78c3dde-207d-4aee-9c0e-563bb3a67d95 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:38 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:38.819 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d3bb1bf920290864292b67d6f59413908467099deead744846ae103f1b50c206-merged.mount: Deactivated successfully.
Dec 05 10:13:38 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79eab1686b5ec3f3a7e3f6837590b94bcd9d294bbb68d99eee1b9caf04881d9a-userdata-shm.mount: Deactivated successfully.
Dec 05 10:13:38 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d2714780e\x2da5cc\x2d453b\x2da2bc\x2d14c5e0c23c32.mount: Deactivated successfully.
Dec 05 10:13:39 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:39.069 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:39.419 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:40 np0005546420.localdomain ceph-mon[298353]: pgmap v455: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 30 KiB/s wr, 27 op/s
Dec 05 10:13:41 np0005546420.localdomain podman[323479]: 2025-12-05 10:13:41.220364395 +0000 UTC m=+0.070147997 container kill a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3cd71a96-eeeb-4267-afc4-bc58e221457e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:13:41 np0005546420.localdomain dnsmasq[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/addn_hosts - 0 addresses
Dec 05 10:13:41 np0005546420.localdomain dnsmasq-dhcp[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/host
Dec 05 10:13:41 np0005546420.localdomain dnsmasq-dhcp[322671]: read /var/lib/neutron/dhcp/3cd71a96-eeeb-4267-afc4-bc58e221457e/opts
Dec 05 10:13:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:41.455 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:41 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:41Z|00315|binding|INFO|Releasing lport 52df937b-4bf6-46bc-8fc6-59028035a754 from this chassis (sb_readonly=0)
Dec 05 10:13:41 np0005546420.localdomain kernel: device tap52df937b-4b left promiscuous mode
Dec 05 10:13:41 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:41Z|00316|binding|INFO|Setting lport 52df937b-4bf6-46bc-8fc6-59028035a754 down in Southbound
Dec 05 10:13:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:41.480 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:41 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:41.554 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-3cd71a96-eeeb-4267-afc4-bc58e221457e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cd71a96-eeeb-4267-afc4-bc58e221457e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66727bc4-90ea-4e37-b240-9e0ba26ad0bd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=52df937b-4bf6-46bc-8fc6-59028035a754) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:41 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:41.556 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 52df937b-4bf6-46bc-8fc6-59028035a754 in datapath 3cd71a96-eeeb-4267-afc4-bc58e221457e unbound from our chassis
Dec 05 10:13:41 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:41.558 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3cd71a96-eeeb-4267-afc4-bc58e221457e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:13:41 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:41.559 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[4855a940-eba1-4f26-a987-a1a6ccdc7a81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:13:41 np0005546420.localdomain ceph-mon[298353]: pgmap v456: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 16 KiB/s wr, 2 op/s
Dec 05 10:13:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:42.080 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:42 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:13:42.249 2 INFO neutron.agent.securitygroups_rpc [None req-87213855-0084-4785-a016-e26749e5f546 44355f1bf7d041b79ae4db9ce4fe218d ecb85ff3c88d49d6b771a6e34a36ee4c - - default default] Security group member updated ['74a3c9d8-4ec2-42ae-8d2d-0e9d9384fe30']
Dec 05 10:13:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:13:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:13:42 np0005546420.localdomain podman[323501]: 2025-12-05 10:13:42.510227914 +0000 UTC m=+0.085824031 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6)
Dec 05 10:13:42 np0005546420.localdomain podman[323501]: 2025-12-05 10:13:42.521284256 +0000 UTC m=+0.096880433 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:13:42 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:13:42 np0005546420.localdomain systemd[1]: tmp-crun.aF7cas.mount: Deactivated successfully.
Dec 05 10:13:42 np0005546420.localdomain podman[323502]: 2025-12-05 10:13:42.579685349 +0000 UTC m=+0.152193159 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:13:42 np0005546420.localdomain podman[323502]: 2025-12-05 10:13:42.586701095 +0000 UTC m=+0.159208905 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:13:42 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:13:42 np0005546420.localdomain systemd[1]: tmp-crun.RETTMm.mount: Deactivated successfully.
Dec 05 10:13:42 np0005546420.localdomain podman[323560]: 2025-12-05 10:13:42.911244275 +0000 UTC m=+0.061583713 container kill a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3cd71a96-eeeb-4267-afc4-bc58e221457e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 10:13:42 np0005546420.localdomain dnsmasq[322671]: exiting on receipt of SIGTERM
Dec 05 10:13:42 np0005546420.localdomain systemd[1]: libpod-a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d.scope: Deactivated successfully.
Dec 05 10:13:42 np0005546420.localdomain podman[323572]: 2025-12-05 10:13:42.985092155 +0000 UTC m=+0.059351744 container died a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3cd71a96-eeeb-4267-afc4-bc58e221457e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:13:43 np0005546420.localdomain podman[323572]: 2025-12-05 10:13:43.017140864 +0000 UTC m=+0.091400423 container cleanup a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3cd71a96-eeeb-4267-afc4-bc58e221457e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:13:43 np0005546420.localdomain systemd[1]: libpod-conmon-a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d.scope: Deactivated successfully.
Dec 05 10:13:43 np0005546420.localdomain podman[323574]: 2025-12-05 10:13:43.065233128 +0000 UTC m=+0.129920701 container remove a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3cd71a96-eeeb-4267-afc4-bc58e221457e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:13:43 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:43.425 262769 INFO neutron.agent.dhcp.agent [None req-80173a66-8918-43bb-a118-89afd32168ec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:43 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:43.426 262769 INFO neutron.agent.dhcp.agent [None req-80173a66-8918-43bb-a118-89afd32168ec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-818936f7aebec3d79eafb5dbd34e0dcf78d0b3b11dfeee7b907aba02a358ccd1-merged.mount: Deactivated successfully.
Dec 05 10:13:43 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5df8606f68c6ba12d789085418701ba361ed2884949a120d9632949f742538d-userdata-shm.mount: Deactivated successfully.
Dec 05 10:13:43 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d3cd71a96\x2deeeb\x2d4267\x2dafc4\x2dbc58e221457e.mount: Deactivated successfully.
Dec 05 10:13:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:44.044 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:13:44 np0005546420.localdomain ceph-mon[298353]: pgmap v457: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 16 KiB/s wr, 1 op/s
Dec 05 10:13:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:44.789 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e187 e187: 6 total, 6 up, 6 in
Dec 05 10:13:46 np0005546420.localdomain ceph-mon[298353]: pgmap v458: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 16 KiB/s wr, 1 op/s
Dec 05 10:13:46 np0005546420.localdomain ceph-mon[298353]: osdmap e187: 6 total, 6 up, 6 in
Dec 05 10:13:47 np0005546420.localdomain sshd[323360]: error: kex_exchange_identification: read: Connection timed out
Dec 05 10:13:47 np0005546420.localdomain sshd[323360]: banner exchange: Connection from 115.216.53.216 port 48480: Connection timed out
Dec 05 10:13:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:47.082 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:47.084 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:47 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:13:47 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1140726160' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:47 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:13:47 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1140726160' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:13:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:13:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:13:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:13:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:13:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18267 "" "Go-http-client/1.1"
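These GETs are the libpod REST API that `podman system service` exposes over its unix socket (the same /run/podman/podman.sock the podman_exporter config above points at). The container listing can be reproduced with stdlib HTTP over that socket, a sketch:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket; the host argument is a dummy."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())))  # container count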
Dec 05 10:13:47 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1140726160' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:47 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1140726160' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:13:48 np0005546420.localdomain ceph-mon[298353]: pgmap v460: 177 pgs: 177 active+clean; 195 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 8.4 KiB/s rd, 9.1 KiB/s wr, 12 op/s
Dec 05 10:13:48 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:13:48 np0005546420.localdomain podman[323602]: 2025-12-05 10:13:48.504001152 +0000 UTC m=+0.081339433 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:13:48 np0005546420.localdomain podman[323602]: 2025-12-05 10:13:48.573562279 +0000 UTC m=+0.150900520 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:13:48 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:13:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:13:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:13:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:13:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:13:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:13:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:13:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:13:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:13:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:13:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
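Both exporter failures reduce to missing appctl control sockets: ovn-northd does not run on this compute node (only ovn_controller does), and the dpif-netdev/pmd-* commands only answer for a userspace (netdev) datapath. A diagnostic sketch over the host paths the exporter container maps in (per the volumes shown above; the glob patterns are assumptions):

    import glob

    # appctl talks to *.ctl sockets; their absence explains the errors above.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, glob.glob(pattern) or "no control sockets found")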
Dec 05 10:13:49 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:13:49.284 2 INFO neutron.agent.securitygroups_rpc [None req-71635bc4-7947-4b62-9354-b7ffa3a8d0a7 44355f1bf7d041b79ae4db9ce4fe218d ecb85ff3c88d49d6b771a6e34a36ee4c - - default default] Security group member updated ['74a3c9d8-4ec2-42ae-8d2d-0e9d9384fe30']
Dec 05 10:13:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:13:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "format": "json"}]: dispatch
Dec 05 10:13:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:50 np0005546420.localdomain ceph-mon[298353]: pgmap v461: 177 pgs: 177 active+clean; 195 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 8.4 KiB/s rd, 9.1 KiB/s wr, 12 op/s
Dec 05 10:13:51 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:13:51 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1153115314' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:51 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:13:51 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1153115314' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:13:51 np0005546420.localdomain podman[323628]: 2025-12-05 10:13:51.498797316 +0000 UTC m=+0.078096033 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:13:51 np0005546420.localdomain podman[323628]: 2025-12-05 10:13:51.569529559 +0000 UTC m=+0.148828286 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:13:51 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:13:51 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1153115314' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:13:51 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1153115314' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:13:51 np0005546420.localdomain ceph-mon[298353]: pgmap v462: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 12 KiB/s wr, 35 op/s
Dec 05 10:13:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:52.085 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:13:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:52.086 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:52.087 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:13:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:52.087 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:13:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:52.087 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:13:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:52.089 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:54 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "1adf68f6-3438-4b98-a816-a549a3420ad9", "format": "json"}]: dispatch
Dec 05 10:13:54 np0005546420.localdomain ceph-mon[298353]: pgmap v463: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 12 KiB/s wr, 35 op/s
Dec 05 10:13:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:13:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e188 e188: 6 total, 6 up, 6 in
Dec 05 10:13:56 np0005546420.localdomain ceph-mon[298353]: pgmap v464: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 12 KiB/s wr, 35 op/s
Dec 05 10:13:56 np0005546420.localdomain ceph-mon[298353]: osdmap e188: 6 total, 6 up, 6 in
Dec 05 10:13:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:57.089 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:57 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "6db860ce-42d9-4edb-b0ff-07bd3a36139e", "format": "json"}]: dispatch
Dec 05 10:13:57 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e189 e189: 6 total, 6 up, 6 in
Dec 05 10:13:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:57.806 262769 INFO neutron.agent.linux.ip_lib [None req-8798d2a7-f458-4945-a054-e12bd78fdfac - - - - - -] Device tapd0fd5a68-40 cannot be used as it has no MAC address
Dec 05 10:13:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:57.863 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:57 np0005546420.localdomain kernel: device tapd0fd5a68-40 entered promiscuous mode
Dec 05 10:13:57 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929637.8727] manager: (tapd0fd5a68-40): new Generic device (/org/freedesktop/NetworkManager/Devices/56)
Dec 05 10:13:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:57.872 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:57 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:57Z|00317|binding|INFO|Claiming lport d0fd5a68-40d8-44e3-8e59-0c57711ee314 for this chassis.
Dec 05 10:13:57 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:57Z|00318|binding|INFO|d0fd5a68-40d8-44e3-8e59-0c57711ee314: Claiming unknown
Dec 05 10:13:57 np0005546420.localdomain systemd-udevd[323656]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:13:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:57.879 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:57 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd0fd5a68-40: No such device
Dec 05 10:13:57 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:57Z|00319|binding|INFO|Setting lport d0fd5a68-40d8-44e3-8e59-0c57711ee314 ovn-installed in OVS
Dec 05 10:13:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:57.908 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:57 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd0fd5a68-40: No such device
Dec 05 10:13:57 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd0fd5a68-40: No such device
Dec 05 10:13:57 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd0fd5a68-40: No such device
Dec 05 10:13:57 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd0fd5a68-40: No such device
Dec 05 10:13:57 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd0fd5a68-40: No such device
Dec 05 10:13:57 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd0fd5a68-40: No such device
Dec 05 10:13:57 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd0fd5a68-40: No such device
Dec 05 10:13:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:57.948 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:57 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:13:57Z|00320|binding|INFO|Setting lport d0fd5a68-40d8-44e3-8e59-0c57711ee314 up in Southbound
Dec 05 10:13:57 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:57.970 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb34ce1d-c25e-4079-8648-553af8fc46b0, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=d0fd5a68-40d8-44e3-8e59-0c57711ee314) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:13:57 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:57.971 159503 INFO neutron.agent.ovn.metadata.agent [-] Port d0fd5a68-40d8-44e3-8e59-0c57711ee314 in datapath a76a2750-b5d5-4005-a3cc-d1f8d1afd19a bound to our chassis
Dec 05 10:13:57 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:57.972 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a76a2750-b5d5-4005-a3cc-d1f8d1afd19a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:13:57 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:13:57.973 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[ba057093-eec6-4ccd-8a60-9cfc125dc9e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:13:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:13:57.979 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:13:58 np0005546420.localdomain ceph-mon[298353]: pgmap v466: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 12 KiB/s wr, 33 op/s
Dec 05 10:13:58 np0005546420.localdomain ceph-mon[298353]: osdmap e189: 6 total, 6 up, 6 in
Dec 05 10:13:58 np0005546420.localdomain podman[323727]: 2025-12-05 10:13:58.902470978 +0000 UTC m=+0.094976543 container create 26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:13:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:13:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:13:58 np0005546420.localdomain systemd[1]: Started libpod-conmon-26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f.scope.
Dec 05 10:13:58 np0005546420.localdomain podman[323727]: 2025-12-05 10:13:58.858416248 +0000 UTC m=+0.050921783 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:13:58 np0005546420.localdomain systemd[1]: tmp-crun.DEk5aa.mount: Deactivated successfully.
Dec 05 10:13:58 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:13:58 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c308be3a6afcdbd77535fe6857b02f529c90678ededf4a8eca68ca23d7a34b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:13:58 np0005546420.localdomain podman[323727]: 2025-12-05 10:13:58.998433411 +0000 UTC m=+0.190938906 container init 26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 10:13:59 np0005546420.localdomain dnsmasq[323769]: started, version 2.85 cachesize 150
Dec 05 10:13:59 np0005546420.localdomain dnsmasq[323769]: DNS service limited to local subnets
Dec 05 10:13:59 np0005546420.localdomain dnsmasq[323769]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:13:59 np0005546420.localdomain dnsmasq[323769]: warning: no upstream servers configured
Dec 05 10:13:59 np0005546420.localdomain dnsmasq-dhcp[323769]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:13:59 np0005546420.localdomain dnsmasq[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/addn_hosts - 0 addresses
Dec 05 10:13:59 np0005546420.localdomain dnsmasq-dhcp[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/host
Dec 05 10:13:59 np0005546420.localdomain dnsmasq-dhcp[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/opts
Dec 05 10:13:59 np0005546420.localdomain podman[323741]: 2025-12-05 10:13:59.038780676 +0000 UTC m=+0.084633704 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:13:59 np0005546420.localdomain podman[323741]: 2025-12-05 10:13:59.047512995 +0000 UTC m=+0.093365983 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:13:59 np0005546420.localdomain podman[323727]: 2025-12-05 10:13:59.057929927 +0000 UTC m=+0.250435422 container start 26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 05 10:13:59 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:13:59 np0005546420.localdomain podman[323742]: 2025-12-05 10:13:59.145889552 +0000 UTC m=+0.181720230 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 10:13:59 np0005546420.localdomain podman[323742]: 2025-12-05 10:13:59.152238359 +0000 UTC m=+0.188069017 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 10:13:59 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:13:59 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:13:59.199 262769 INFO neutron.agent.dhcp.agent [None req-b6b28059-7608-4986-ad44-58709ef74af0 - - - - - -] DHCP configuration for ports {'ae78f1db-53f6-4218-afad-6247327f0610'} is completed
Dec 05 10:13:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e190 e190: 6 total, 6 up, 6 in
Dec 05 10:14:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:00 np0005546420.localdomain ceph-mon[298353]: pgmap v468: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 8.6 KiB/s rd, 7.1 KiB/s wr, 13 op/s
Dec 05 10:14:00 np0005546420.localdomain ceph-mon[298353]: osdmap e190: 6 total, 6 up, 6 in
Dec 05 10:14:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e191 e191: 6 total, 6 up, 6 in
Dec 05 10:14:01 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:01.353 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:59Z, description=, device_id=24eef7c7-3722-4c2f-85ba-0da55dccb435, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f1b9a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99ea3a00>], id=982950dc-d14b-4138-bab4-b317fbceac90, ip_allocation=immediate, mac_address=fa:16:3e:ab:30:fe, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:54Z, description=, dns_domain=, id=a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1456954336, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17712, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2760, status=ACTIVE, subnets=['44de7d9e-a581-4717-9e2b-7ea6a593e1f6'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:56Z, vlan_transparent=None, network_id=a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2804, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:00Z on network a76a2750-b5d5-4005-a3cc-d1f8d1afd19a
Dec 05 10:14:01 np0005546420.localdomain ceph-mon[298353]: osdmap e191: 6 total, 6 up, 6 in
Dec 05 10:14:01 np0005546420.localdomain podman[323804]: 2025-12-05 10:14:01.564603732 +0000 UTC m=+0.048519949 container kill 26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:14:01 np0005546420.localdomain dnsmasq[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/addn_hosts - 1 addresses
Dec 05 10:14:01 np0005546420.localdomain dnsmasq-dhcp[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/host
Dec 05 10:14:01 np0005546420.localdomain dnsmasq-dhcp[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/opts
Dec 05 10:14:01 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:01.942 262769 INFO neutron.agent.dhcp.agent [None req-cd2603c8-cba0-4db8-9c44-ec99daa94eb9 - - - - - -] DHCP configuration for ports {'982950dc-d14b-4138-bab4-b317fbceac90'} is completed
Dec 05 10:14:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:02.092 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:14:02 np0005546420.localdomain systemd[1]: tmp-crun.R21iFw.mount: Deactivated successfully.
Dec 05 10:14:02 np0005546420.localdomain podman[323826]: 2025-12-05 10:14:02.525272599 +0000 UTC m=+0.094967663 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 05 10:14:02 np0005546420.localdomain podman[323826]: 2025-12-05 10:14:02.542536292 +0000 UTC m=+0.112231426 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Dec 05 10:14:02 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "6db860ce-42d9-4edb-b0ff-07bd3a36139e_3724db05-5f8c-43ab-b696-20bf24668fa6", "force": true, "format": "json"}]: dispatch
Dec 05 10:14:02 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "6db860ce-42d9-4edb-b0ff-07bd3a36139e", "force": true, "format": "json"}]: dispatch
Dec 05 10:14:02 np0005546420.localdomain ceph-mon[298353]: pgmap v471: 177 pgs: 177 active+clean; 196 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 38 KiB/s wr, 78 op/s
Dec 05 10:14:02 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/542252184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:02 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/542252184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:02 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:14:03 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:03.527 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:59Z, description=, device_id=24eef7c7-3722-4c2f-85ba-0da55dccb435, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99daa8b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a8a3d90>], id=982950dc-d14b-4138-bab4-b317fbceac90, ip_allocation=immediate, mac_address=fa:16:3e:ab:30:fe, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:54Z, description=, dns_domain=, id=a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1456954336, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17712, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2760, status=ACTIVE, subnets=['44de7d9e-a581-4717-9e2b-7ea6a593e1f6'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:56Z, vlan_transparent=None, network_id=a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2804, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:00Z on network a76a2750-b5d5-4005-a3cc-d1f8d1afd19a
Dec 05 10:14:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3525768209' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3525768209' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:03 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:03.782 262769 INFO neutron.agent.linux.ip_lib [None req-29a58245-a683-4761-a866-f11f45d5166a - - - - - -] Device tapa37fd622-10 cannot be used as it has no MAC address
Dec 05 10:14:03 np0005546420.localdomain podman[323865]: 2025-12-05 10:14:03.802671075 +0000 UTC m=+0.063294696 container kill 26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:14:03 np0005546420.localdomain dnsmasq[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/addn_hosts - 1 addresses
Dec 05 10:14:03 np0005546420.localdomain dnsmasq-dhcp[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/host
Dec 05 10:14:03 np0005546420.localdomain dnsmasq-dhcp[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/opts
Dec 05 10:14:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:03.805 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:03 np0005546420.localdomain kernel: device tapa37fd622-10 entered promiscuous mode
Dec 05 10:14:03 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929643.8126] manager: (tapa37fd622-10): new Generic device (/org/freedesktop/NetworkManager/Devices/57)
Dec 05 10:14:03 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:03Z|00321|binding|INFO|Claiming lport a37fd622-10aa-4a64-8d6d-a2d9fe92452b for this chassis.
Dec 05 10:14:03 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:03Z|00322|binding|INFO|a37fd622-10aa-4a64-8d6d-a2d9fe92452b: Claiming unknown
Dec 05 10:14:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:03.816 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:03 np0005546420.localdomain systemd-udevd[323883]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:14:03 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:03.831 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-de7b370c-b329-459e-a644-ff68f6112395', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de7b370c-b329-459e-a644-ff68f6112395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8730b222fadf4a249823e59d8b326dde', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2cc7c18-2a28-45cd-9e3a-7a28f4302155, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=a37fd622-10aa-4a64-8d6d-a2d9fe92452b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:14:03 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:03.833 159503 INFO neutron.agent.ovn.metadata.agent [-] Port a37fd622-10aa-4a64-8d6d-a2d9fe92452b in datapath de7b370c-b329-459e-a644-ff68f6112395 bound to our chassis
Dec 05 10:14:03 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:03.836 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network de7b370c-b329-459e-a644-ff68f6112395 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:14:03 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:03.837 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[34716cb8-09a2-4244-9162-9deb8e44d447]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:14:03 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:03Z|00323|binding|INFO|Setting lport a37fd622-10aa-4a64-8d6d-a2d9fe92452b ovn-installed in OVS
Dec 05 10:14:03 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:03Z|00324|binding|INFO|Setting lport a37fd622-10aa-4a64-8d6d-a2d9fe92452b up in Southbound
Dec 05 10:14:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:03.868 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:03.870 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:03.945 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:03.982 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:04.132 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:14:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:04.132 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:14:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:04.133 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:14:04 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:04.240 262769 INFO neutron.agent.dhcp.agent [None req-e9a891e3-05b5-49ac-a61e-cda8f71764ff - - - - - -] DHCP configuration for ports {'982950dc-d14b-4138-bab4-b317fbceac90'} is completed
Dec 05 10:14:04 np0005546420.localdomain ceph-mon[298353]: pgmap v472: 177 pgs: 177 active+clean; 196 MiB data, 996 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 28 KiB/s wr, 62 op/s
Dec 05 10:14:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e192 e192: 6 total, 6 up, 6 in
Dec 05 10:14:04 np0005546420.localdomain podman[323946]: 2025-12-05 10:14:04.992270589 +0000 UTC m=+0.073106658 container create 3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7b370c-b329-459e-a644-ff68f6112395, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:14:05 np0005546420.localdomain systemd[1]: Started libpod-conmon-3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177.scope.
Dec 05 10:14:05 np0005546420.localdomain systemd[1]: tmp-crun.mCgo1Z.mount: Deactivated successfully.
Dec 05 10:14:05 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:14:05 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f54e29a409463f5531ff3abed8d1c5fc75fca6017ccce223ab913d590b7461/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:05 np0005546420.localdomain podman[323946]: 2025-12-05 10:14:05.055698967 +0000 UTC m=+0.136535036 container init 3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7b370c-b329-459e-a644-ff68f6112395, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:14:05 np0005546420.localdomain podman[323946]: 2025-12-05 10:14:04.960659473 +0000 UTC m=+0.041495512 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:14:05 np0005546420.localdomain podman[323946]: 2025-12-05 10:14:05.069927526 +0000 UTC m=+0.150763585 container start 3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7b370c-b329-459e-a644-ff68f6112395, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 10:14:05 np0005546420.localdomain dnsmasq[323965]: started, version 2.85 cachesize 150
Dec 05 10:14:05 np0005546420.localdomain dnsmasq[323965]: DNS service limited to local subnets
Dec 05 10:14:05 np0005546420.localdomain dnsmasq[323965]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:14:05 np0005546420.localdomain dnsmasq[323965]: warning: no upstream servers configured
Dec 05 10:14:05 np0005546420.localdomain dnsmasq-dhcp[323965]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:14:05 np0005546420.localdomain dnsmasq[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/addn_hosts - 0 addresses
Dec 05 10:14:05 np0005546420.localdomain dnsmasq-dhcp[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/host
Dec 05 10:14:05 np0005546420.localdomain dnsmasq-dhcp[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/opts
Dec 05 10:14:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e193 e193: 6 total, 6 up, 6 in
Dec 05 10:14:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "1adf68f6-3438-4b98-a816-a549a3420ad9_7a19f9f1-6dc0-4639-89b3-f05e8a347b46", "force": true, "format": "json"}]: dispatch
Dec 05 10:14:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "1adf68f6-3438-4b98-a816-a549a3420ad9", "force": true, "format": "json"}]: dispatch
Dec 05 10:14:05 np0005546420.localdomain ceph-mon[298353]: osdmap e192: 6 total, 6 up, 6 in
Dec 05 10:14:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e194 e194: 6 total, 6 up, 6 in
Dec 05 10:14:05 np0005546420.localdomain systemd[1]: tmp-crun.kUTMfZ.mount: Deactivated successfully.
Dec 05 10:14:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:06.005 262769 INFO neutron.agent.dhcp.agent [None req-8dc69b86-5b50-4e33-8e43-96280c486752 - - - - - -] DHCP configuration for ports {'9cc0a497-9177-4b9c-95cf-2dbfe4cf4408'} is completed
Dec 05 10:14:06 np0005546420.localdomain dnsmasq[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/addn_hosts - 0 addresses
Dec 05 10:14:06 np0005546420.localdomain dnsmasq-dhcp[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/host
Dec 05 10:14:06 np0005546420.localdomain dnsmasq-dhcp[323769]: read /var/lib/neutron/dhcp/a76a2750-b5d5-4005-a3cc-d1f8d1afd19a/opts
Dec 05 10:14:06 np0005546420.localdomain podman[323983]: 2025-12-05 10:14:06.215928475 +0000 UTC m=+0.083886160 container kill 26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:14:06 np0005546420.localdomain systemd[1]: tmp-crun.sAklkD.mount: Deactivated successfully.
Dec 05 10:14:06 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:06Z|00325|binding|INFO|Releasing lport d0fd5a68-40d8-44e3-8e59-0c57711ee314 from this chassis (sb_readonly=0)
Dec 05 10:14:06 np0005546420.localdomain kernel: device tapd0fd5a68-40 left promiscuous mode
Dec 05 10:14:06 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:06Z|00326|binding|INFO|Setting lport d0fd5a68-40d8-44e3-8e59-0c57711ee314 down in Southbound
Dec 05 10:14:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:06.473 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:06 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:06.485 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb34ce1d-c25e-4079-8648-553af8fc46b0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=d0fd5a68-40d8-44e3-8e59-0c57711ee314) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:14:06 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:06.488 159503 INFO neutron.agent.ovn.metadata.agent [-] Port d0fd5a68-40d8-44e3-8e59-0c57711ee314 in datapath a76a2750-b5d5-4005-a3cc-d1f8d1afd19a unbound from our chassis
Dec 05 10:14:06 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:06.490 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:14:06 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:06.491 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[23910460-af92-4ca2-8eae-c9151da61448]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:14:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:06.502 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:06 np0005546420.localdomain ceph-mon[298353]: pgmap v474: 177 pgs: 177 active+clean; 196 MiB data, 996 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 28 KiB/s wr, 62 op/s
Dec 05 10:14:06 np0005546420.localdomain ceph-mon[298353]: osdmap e193: 6 total, 6 up, 6 in
Dec 05 10:14:06 np0005546420.localdomain ceph-mon[298353]: osdmap e194: 6 total, 6 up, 6 in
Dec 05 10:14:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:07.094 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:08 np0005546420.localdomain ceph-mon[298353]: pgmap v477: 177 pgs: 177 active+clean; 196 MiB data, 996 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 25 KiB/s wr, 43 op/s
Dec 05 10:14:08 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "format": "json"}]: dispatch
Dec 05 10:14:08 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "force": true, "format": "json"}]: dispatch
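The two audit entries above are mon commands dispatched on behalf of client.openstack (the manila CephFS path); the same operations can be reproduced from the ceph CLI. A sketch using a small subprocess wrapper (the wrapper is illustrative; volume and clone names are copied from the log):

    import json
    import subprocess

    def ceph(*args):
        # Run a ceph CLI command and decode its JSON output (rm prints nothing).
        out = subprocess.check_output(('ceph',) + args + ('--format', 'json'))
        return json.loads(out) if out.strip() else None

    status = ceph('fs', 'clone', 'status', 'cephfs',
                  '3c225fb1-348e-4898-b9d5-58a36c40826b')
    ceph('fs', 'subvolume', 'rm', 'cephfs',
         '3c225fb1-348e-4898-b9d5-58a36c40826b', '--force')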
Dec 05 10:14:09 np0005546420.localdomain ceph-mon[298353]: pgmap v478: 177 pgs: 177 active+clean; 196 MiB data, 996 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 25 KiB/s wr, 41 op/s
Dec 05 10:14:09 np0005546420.localdomain systemd[1]: tmp-crun.hU9b8g.mount: Deactivated successfully.
Dec 05 10:14:09 np0005546420.localdomain dnsmasq[323769]: exiting on receipt of SIGTERM
Dec 05 10:14:09 np0005546420.localdomain podman[324023]: 2025-12-05 10:14:09.832228076 +0000 UTC m=+0.091047191 container kill 26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:14:09 np0005546420.localdomain systemd[1]: libpod-26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f.scope: Deactivated successfully.
Dec 05 10:14:09 np0005546420.localdomain sudo[324033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:14:09 np0005546420.localdomain sudo[324033]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:14:09 np0005546420.localdomain sudo[324033]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:09 np0005546420.localdomain podman[324055]: 2025-12-05 10:14:09.904490287 +0000 UTC m=+0.049192750 container died 26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:14:09 np0005546420.localdomain sudo[324074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:14:09 np0005546420.localdomain sudo[324074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:14:09 np0005546420.localdomain systemd[1]: tmp-crun.TC0M1t.mount: Deactivated successfully.
Dec 05 10:14:09 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f-userdata-shm.mount: Deactivated successfully.
Dec 05 10:14:10 np0005546420.localdomain podman[324055]: 2025-12-05 10:14:10.013803862 +0000 UTC m=+0.158506295 container remove 26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a76a2750-b5d5-4005-a3cc-d1f8d1afd19a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 10:14:10 np0005546420.localdomain systemd[1]: libpod-conmon-26ea0cd1b94ffcd007b23a9239d2f154e5cb628b76ace616241b7cc1ebdebd9f.scope: Deactivated successfully.
Dec 05 10:14:10 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:10.133 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:14:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:10 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:10.394 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:14:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:10.657 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:10 np0005546420.localdomain sudo[324074]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:10 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:10.776 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:10Z, description=, device_id=a4b4da2c-1ffd-4e75-b1d7-e9db0c653845, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e1b160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e1b130>], id=6c2f074d-a451-4630-9b39-c17910323318, ip_allocation=immediate, mac_address=fa:16:3e:1b:b2:3e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:59Z, description=, dns_domain=, id=de7b370c-b329-459e-a644-ff68f6112395, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1714996894-network, port_security_enabled=True, project_id=8730b222fadf4a249823e59d8b326dde, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26214, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2799, status=ACTIVE, subnets=['7287635e-aa2a-48af-90f7-64dfbe34eae5'], tags=[], tenant_id=8730b222fadf4a249823e59d8b326dde, updated_at=2025-12-05T10:14:02Z, vlan_transparent=None, network_id=de7b370c-b329-459e-a644-ff68f6112395, port_security_enabled=False, project_id=8730b222fadf4a249823e59d8b326dde, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2840, status=DOWN, tags=[], tenant_id=8730b222fadf4a249823e59d8b326dde, updated_at=2025-12-05T10:14:10Z on network de7b370c-b329-459e-a644-ff68f6112395
Dec 05 10:14:10 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-1c308be3a6afcdbd77535fe6857b02f529c90678ededf4a8eca68ca23d7a34b8-merged.mount: Deactivated successfully.
Dec 05 10:14:10 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2da76a2750\x2db5d5\x2d4005\x2da3cc\x2dd1f8d1afd19a.mount: Deactivated successfully.
Dec 05 10:14:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:14:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:14:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e195 e195: 6 total, 6 up, 6 in
Dec 05 10:14:11 np0005546420.localdomain dnsmasq[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/addn_hosts - 1 addresses
Dec 05 10:14:11 np0005546420.localdomain dnsmasq-dhcp[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/host
Dec 05 10:14:11 np0005546420.localdomain dnsmasq-dhcp[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/opts
Dec 05 10:14:11 np0005546420.localdomain podman[324149]: 2025-12-05 10:14:11.007013573 +0000 UTC m=+0.061059495 container kill 3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7b370c-b329-459e-a644-ff68f6112395, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
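The "container kill" events bracketing the dnsmasq "read .../addn_hosts" lines suggest the DHCP agent rewrites the per-network host files and then signals dnsmasq inside its qdhcp container so it re-reads them. A sketch of that handshake (the use of SIGHUP and the container name pattern are assumptions drawn from the log, not confirmed agent code):

    import subprocess

    network_id = 'de7b370c-b329-459e-a644-ff68f6112395'
    # Ask dnsmasq (PID 1 of the qdhcp container) to re-read
    # /var/lib/neutron/dhcp/<network_id>/{addn_hosts,host,opts}.
    subprocess.check_call([
        'podman', 'kill', '--signal', 'HUP',
        'neutron-dnsmasq-qdhcp-%s' % network_id,
    ])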
Dec 05 10:14:11 np0005546420.localdomain sudo[324155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:14:11 np0005546420.localdomain sudo[324155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:14:11 np0005546420.localdomain sudo[324155]: pam_unix(sudo:session): session closed for user root
Dec 05 10:14:11 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:11.711 262769 INFO neutron.agent.dhcp.agent [None req-245b1e91-2985-4a4d-b1ce-f84ead608cfc - - - - - -] DHCP configuration for ports {'6c2f074d-a451-4630-9b39-c17910323318'} is completed
Dec 05 10:14:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:12.096 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:12 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:14:12 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:14:12 np0005546420.localdomain ceph-mon[298353]: osdmap e195: 6 total, 6 up, 6 in
Dec 05 10:14:12 np0005546420.localdomain ceph-mon[298353]: pgmap v480: 177 pgs: 177 active+clean; 196 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 51 KiB/s wr, 104 op/s
Dec 05 10:14:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e196 e196: 6 total, 6 up, 6 in
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:14:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:14:12.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
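Every polling cycle the ceilometer manager walks its pollsters and skips any whose resource discovery came back empty, which is what produces the burst of DEBUG lines above. A schematic of that loop (stand-in names; not the actual ceilometer code):

    def poll_and_notify(pollsters, discover):
        for name, pollster in pollsters.items():
            resources = discover(pollster)
            if not resources:
                # Corresponds to the "Skip pollster <name>, no resources
                # found this cycle" DEBUG lines in the log.
                continue
            pollster.get_samples(resources)  # stand-in for sample collection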
Dec 05 10:14:13 np0005546420.localdomain ceph-mon[298353]: osdmap e196: 6 total, 6 up, 6 in
Dec 05 10:14:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:14:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:14:13 np0005546420.localdomain podman[324191]: 2025-12-05 10:14:13.515755732 +0000 UTC m=+0.086972516 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:14:13 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:13.529 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:10Z, description=, device_id=a4b4da2c-1ffd-4e75-b1d7-e9db0c653845, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e981f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e98fa0>], id=6c2f074d-a451-4630-9b39-c17910323318, ip_allocation=immediate, mac_address=fa:16:3e:1b:b2:3e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:59Z, description=, dns_domain=, id=de7b370c-b329-459e-a644-ff68f6112395, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1714996894-network, port_security_enabled=True, project_id=8730b222fadf4a249823e59d8b326dde, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26214, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2799, status=ACTIVE, subnets=['7287635e-aa2a-48af-90f7-64dfbe34eae5'], tags=[], tenant_id=8730b222fadf4a249823e59d8b326dde, updated_at=2025-12-05T10:14:02Z, vlan_transparent=None, network_id=de7b370c-b329-459e-a644-ff68f6112395, port_security_enabled=False, project_id=8730b222fadf4a249823e59d8b326dde, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2840, status=DOWN, tags=[], tenant_id=8730b222fadf4a249823e59d8b326dde, updated_at=2025-12-05T10:14:10Z on network de7b370c-b329-459e-a644-ff68f6112395
Dec 05 10:14:13 np0005546420.localdomain podman[324191]: 2025-12-05 10:14:13.554427516 +0000 UTC m=+0.125644290 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:14:13 np0005546420.localdomain systemd[1]: tmp-crun.4EBgZZ.mount: Deactivated successfully.
Dec 05 10:14:13 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:14:13 np0005546420.localdomain podman[324190]: 2025-12-05 10:14:13.557736608 +0000 UTC m=+0.132308855 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64)
Dec 05 10:14:13 np0005546420.localdomain podman[324190]: 2025-12-05 10:14:13.644542158 +0000 UTC m=+0.219114425 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:14:13 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
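The health_status/exec_died pairs above come from transient systemd units invoking "podman healthcheck run"; the same check can be run by hand (container ID copied from the log):

    import subprocess

    cid = 'db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9'
    # Exit status 0 means the container's configured healthcheck passed.
    rc = subprocess.call(['podman', 'healthcheck', 'run', cid])
    print('healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)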
Dec 05 10:14:13 np0005546420.localdomain podman[324249]: 2025-12-05 10:14:13.785005434 +0000 UTC m=+0.060487247 container kill 3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7b370c-b329-459e-a644-ff68f6112395, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:14:13 np0005546420.localdomain dnsmasq[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/addn_hosts - 1 addresses
Dec 05 10:14:13 np0005546420.localdomain dnsmasq-dhcp[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/host
Dec 05 10:14:13 np0005546420.localdomain dnsmasq-dhcp[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/opts
Dec 05 10:14:14 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:14.075 262769 INFO neutron.agent.dhcp.agent [None req-cb44901c-d450-4bc0-839c-d811aa20ec17 - - - - - -] DHCP configuration for ports {'6c2f074d-a451-4630-9b39-c17910323318'} is completed
Dec 05 10:14:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e197 e197: 6 total, 6 up, 6 in
Dec 05 10:14:14 np0005546420.localdomain ceph-mon[298353]: pgmap v482: 177 pgs: 177 active+clean; 196 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 42 KiB/s wr, 85 op/s
Dec 05 10:14:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:14:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:14.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:14:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "format": "json"}]: dispatch
Dec 05 10:14:15 np0005546420.localdomain ceph-mon[298353]: osdmap e197: 6 total, 6 up, 6 in
Dec 05 10:14:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:14:15 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2271945552' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:14:15 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2271945552' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:16 np0005546420.localdomain ceph-mon[298353]: pgmap v484: 177 pgs: 177 active+clean; 196 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 27 KiB/s wr, 64 op/s
Dec 05 10:14:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:14:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2271945552' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2271945552' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:16Z|00327|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0
Dec 05 10:14:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:16Z|00328|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0
Dec 05 10:14:16 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:16Z|00329|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0
Dec 05 10:14:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:16.702 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:16.719 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:16.725 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:16.748 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:16.789 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:17.098 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:14:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:14:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:14:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:14:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:14:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18738 "" "Go-http-client/1.1"
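The two GETs above are libpod REST calls over the Podman socket (the prometheus-podman-exporter polls them; its CONTAINER_HOST setting earlier in the log points at unix:///run/podman/podman.sock). They can be reproduced with the standard library alone, as a sketch:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection speaking over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')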
Dec 05 10:14:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:17.678 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:14:17 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/667829338' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:14:17 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/667829338' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:17.883 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:17.883 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:14:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:18.389 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:18 np0005546420.localdomain ceph-mon[298353]: pgmap v485: 177 pgs: 177 active+clean; 196 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 36 KiB/s wr, 103 op/s
Dec 05 10:14:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/667829338' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/667829338' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 05 10:14:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:14:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"}]': finished
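The mgr-issued get-or-create above is the per-share cap grant made when a CephFS share is authorized: rw on the share's MDS path, rw on the manila_data pool restricted to the subvolume's RADOS namespace, and read-only mon access. Its CLI equivalent (entity and caps copied verbatim from the audit entry):

    import subprocess

    subprocess.check_call([
        'ceph', 'auth', 'get-or-create', 'client.eve49',
        'mds', 'allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6',
        'osd', 'allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4',
        'mon', 'allow r',
    ])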
Dec 05 10:14:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:14:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:14:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:14:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:14:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:14:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:14:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:14:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:14:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:14:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
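The exporter errors above mean no appctl control sockets were found for ovn-northd or ovsdb-server; on a compute node running only ovn-controller and ovs-vswitchd, missing ovn-northd sockets are expected rather than fatal. A quick existence check (the glob patterns are assumptions about the default runtime directories, and the exporter looks inside its own mount namespace, so a host-side check may differ):

    import glob

    for pattern in ('/var/run/ovn/ovn-northd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'not found')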
Dec 05 10:14:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:18.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:18.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:14:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:18.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
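_heal_instance_info_cache and its siblings are oslo.service periodic tasks; nova's manager declares them with a decorator and the framework emits the "Running periodic task ..." line on each dispatch. A minimal sketch (the spacing value is an assumption; real intervals come from nova configuration):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Rebuild the list of instances whose network info cache needs
            # refreshing, as in the DEBUG lines above.
            pass

    manager = Manager(cfg.CONF)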
Dec 05 10:14:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:14:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve49", "tenant_id": "fb88a523f48e4990b7617051dc3491c9", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:14:19 np0005546420.localdomain podman[324271]: 2025-12-05 10:14:19.511722747 +0000 UTC m=+0.085376187 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 05 10:14:19 np0005546420.localdomain podman[324271]: 2025-12-05 10:14:19.578470607 +0000 UTC m=+0.152124027 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:14:19 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:14:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:20 np0005546420.localdomain ceph-mon[298353]: pgmap v486: 177 pgs: 177 active+clean; 196 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 9.0 KiB/s wr, 36 op/s
Dec 05 10:14:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e198 e198: 6 total, 6 up, 6 in
Dec 05 10:14:21 np0005546420.localdomain ceph-mon[298353]: osdmap e198: 6 total, 6 up, 6 in
Dec 05 10:14:21 np0005546420.localdomain ceph-mon[298353]: pgmap v488: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 34 KiB/s wr, 58 op/s
Dec 05 10:14:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:22.101 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:14:22 np0005546420.localdomain podman[324297]: 2025-12-05 10:14:22.502174417 +0000 UTC m=+0.081322192 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 10:14:22 np0005546420.localdomain podman[324297]: 2025-12-05 10:14:22.512786875 +0000 UTC m=+0.091934690 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:14:22 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:14:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:22.635 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:14:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:22.637 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:23.633 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:24 np0005546420.localdomain ceph-mon[298353]: pgmap v489: 177 pgs: 177 active+clean; 196 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 30 KiB/s wr, 52 op/s
Dec 05 10:14:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/163726682' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:14:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:24.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:24.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:14:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:25.908 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 10:14:26 np0005546420.localdomain ceph-mon[298353]: pgmap v490: 177 pgs: 177 active+clean; 196 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 27 KiB/s wr, 47 op/s
Dec 05 10:14:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:26.910 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:26.911 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:26.912 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:26.912 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:27.103 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve48", "tenant_id": "fb88a523f48e4990b7617051dc3491c9", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:14:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 05 10:14:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:14:27 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:14:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/166280889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:14:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:28.098 262769 INFO neutron.agent.linux.ip_lib [None req-a4ca54a3-a50e-43db-99ce-d13fc24a16f5 - - - - - -] Device tap729201e5-27 cannot be used as it has no MAC address
Dec 05 10:14:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:28.155 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:28 np0005546420.localdomain kernel: device tap729201e5-27 entered promiscuous mode
Dec 05 10:14:28 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:28Z|00330|binding|INFO|Claiming lport 729201e5-27a4-41de-b705-7bf2c1631601 for this chassis.
Dec 05 10:14:28 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:28Z|00331|binding|INFO|729201e5-27a4-41de-b705-7bf2c1631601: Claiming unknown
Dec 05 10:14:28 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929668.1661] manager: (tap729201e5-27): new Generic device (/org/freedesktop/NetworkManager/Devices/58)
Dec 05 10:14:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:28.165 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:28 np0005546420.localdomain systemd-udevd[324326]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:14:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:28.176 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-4cdfb463-f1fe-4042-bcfb-e1732ca2300a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4cdfb463-f1fe-4042-bcfb-e1732ca2300a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bee55dc-5006-49e8-8ca1-9c68a4318919, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=729201e5-27a4-41de-b705-7bf2c1631601) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:14:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:28.178 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 729201e5-27a4-41de-b705-7bf2c1631601 in datapath 4cdfb463-f1fe-4042-bcfb-e1732ca2300a bound to our chassis
Dec 05 10:14:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:28.180 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4cdfb463-f1fe-4042-bcfb-e1732ca2300a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:14:28 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:28.182 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[9f35e18c-a809-4089-a66e-d1b89d1c64ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:14:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap729201e5-27: No such device
Dec 05 10:14:28 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:28Z|00332|binding|INFO|Setting lport 729201e5-27a4-41de-b705-7bf2c1631601 ovn-installed in OVS
Dec 05 10:14:28 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:28Z|00333|binding|INFO|Setting lport 729201e5-27a4-41de-b705-7bf2c1631601 up in Southbound
Dec 05 10:14:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap729201e5-27: No such device
Dec 05 10:14:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:28.207 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap729201e5-27: No such device
Dec 05 10:14:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap729201e5-27: No such device
Dec 05 10:14:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap729201e5-27: No such device
Dec 05 10:14:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap729201e5-27: No such device
Dec 05 10:14:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap729201e5-27: No such device
Dec 05 10:14:28 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap729201e5-27: No such device
Dec 05 10:14:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:28.250 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:28.285 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:28 np0005546420.localdomain ceph-mon[298353]: pgmap v491: 177 pgs: 177 active+clean; 196 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 23 KiB/s wr, 19 op/s
Dec 05 10:14:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:28.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:28.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:14:29 np0005546420.localdomain podman[324397]: 2025-12-05 10:14:29.317024911 +0000 UTC m=+0.113479074 container create a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cdfb463-f1fe-4042-bcfb-e1732ca2300a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 10:14:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:14:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:14:29 np0005546420.localdomain podman[324397]: 2025-12-05 10:14:29.246137063 +0000 UTC m=+0.042591286 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:14:29 np0005546420.localdomain systemd[1]: Started libpod-conmon-a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff.scope.
Dec 05 10:14:29 np0005546420.localdomain systemd[1]: tmp-crun.AiQ9TL.mount: Deactivated successfully.
Dec 05 10:14:29 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:14:29 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/addd07d256bd7129a4e49ad47e6a463023f4c83973768039786a4e3664e3aaf1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:29 np0005546420.localdomain podman[324397]: 2025-12-05 10:14:29.402057336 +0000 UTC m=+0.198511469 container init a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cdfb463-f1fe-4042-bcfb-e1732ca2300a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:14:29 np0005546420.localdomain podman[324397]: 2025-12-05 10:14:29.411423195 +0000 UTC m=+0.207877328 container start a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cdfb463-f1fe-4042-bcfb-e1732ca2300a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:14:29 np0005546420.localdomain dnsmasq[324437]: started, version 2.85 cachesize 150
Dec 05 10:14:29 np0005546420.localdomain dnsmasq[324437]: DNS service limited to local subnets
Dec 05 10:14:29 np0005546420.localdomain dnsmasq[324437]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:14:29 np0005546420.localdomain dnsmasq[324437]: warning: no upstream servers configured
Dec 05 10:14:29 np0005546420.localdomain dnsmasq-dhcp[324437]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:14:29 np0005546420.localdomain dnsmasq[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/addn_hosts - 0 addresses
Dec 05 10:14:29 np0005546420.localdomain dnsmasq-dhcp[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/host
Dec 05 10:14:29 np0005546420.localdomain dnsmasq-dhcp[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/opts
Dec 05 10:14:29 np0005546420.localdomain podman[324412]: 2025-12-05 10:14:29.490822177 +0000 UTC m=+0.139218149 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:14:29 np0005546420.localdomain podman[324412]: 2025-12-05 10:14:29.526106666 +0000 UTC m=+0.174502628 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 10:14:29 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:14:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:29.544 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:29.546 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:14:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:29.547 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:14:29 np0005546420.localdomain podman[324410]: 2025-12-05 10:14:29.548293781 +0000 UTC m=+0.194189786 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:14:29 np0005546420.localdomain podman[324410]: 2025-12-05 10:14:29.576751309 +0000 UTC m=+0.222647324 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:14:29 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:14:29 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:29.637 262769 INFO neutron.agent.dhcp.agent [None req-444ecb9f-c8a0-4c9b-b8c6-2b02cdf0fa20 - - - - - -] DHCP configuration for ports {'183e8a55-7448-4f57-af44-b2a74236600b'} is completed
Dec 05 10:14:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:30.013 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:29Z, description=, device_id=b0d3afa1-cb69-4a85-881c-dd3365624b87, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99db3d30>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99db39d0>], id=8af80160-0765-4e11-a969-4d1a37d4ff39, ip_allocation=immediate, mac_address=fa:16:3e:37:41:06, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:26Z, description=, dns_domain=, id=4cdfb463-f1fe-4042-bcfb-e1732ca2300a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1378411370, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19735, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2895, status=ACTIVE, subnets=['33095363-f104-4a84-860d-c8137f15cb55'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:27Z, vlan_transparent=None, network_id=4cdfb463-f1fe-4042-bcfb-e1732ca2300a, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2907, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:29Z on network 4cdfb463-f1fe-4042-bcfb-e1732ca2300a
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:30 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6],prefix=session evict} (starting...)
Dec 05 10:14:30 np0005546420.localdomain dnsmasq[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/addn_hosts - 1 addresses
Dec 05 10:14:30 np0005546420.localdomain dnsmasq-dhcp[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/host
Dec 05 10:14:30 np0005546420.localdomain dnsmasq-dhcp[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/opts
Dec 05 10:14:30 np0005546420.localdomain podman[324473]: 2025-12-05 10:14:30.22784109 +0000 UTC m=+0.046648201 container kill a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cdfb463-f1fe-4042-bcfb-e1732ca2300a, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:14:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:30.479 262769 INFO neutron.agent.dhcp.agent [None req-ca571960-f629-419e-bfb3-07f1617e999f - - - - - -] DHCP configuration for ports {'8af80160-0765-4e11-a969-4d1a37d4ff39'} is completed
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/916249070' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/916249070' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: pgmap v492: 177 pgs: 177 active+clean; 196 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 23 KiB/s wr, 19 op/s
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/916249070' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/916249070' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:30 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:30.880 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:29Z, description=, device_id=b0d3afa1-cb69-4a85-881c-dd3365624b87, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e42eb0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e42ac0>], id=8af80160-0765-4e11-a969-4d1a37d4ff39, ip_allocation=immediate, mac_address=fa:16:3e:37:41:06, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:26Z, description=, dns_domain=, id=4cdfb463-f1fe-4042-bcfb-e1732ca2300a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1378411370, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19735, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2895, status=ACTIVE, subnets=['33095363-f104-4a84-860d-c8137f15cb55'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:27Z, vlan_transparent=None, network_id=4cdfb463-f1fe-4042-bcfb-e1732ca2300a, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2907, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:29Z on network 4cdfb463-f1fe-4042-bcfb-e1732ca2300a
Dec 05 10:14:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:30.894 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:14:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:30.920 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:14:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:30.921 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:14:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:30.921 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:14:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:30.922 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:14:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:30.922 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:14:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:14:31 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/842639612' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:14:31 np0005546420.localdomain podman[324521]: 2025-12-05 10:14:31.12738229 +0000 UTC m=+0.054864145 container kill a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cdfb463-f1fe-4042-bcfb-e1732ca2300a, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:14:31 np0005546420.localdomain dnsmasq[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/addn_hosts - 1 addresses
Dec 05 10:14:31 np0005546420.localdomain dnsmasq-dhcp[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/host
Dec 05 10:14:31 np0005546420.localdomain dnsmasq-dhcp[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/opts
Dec 05 10:14:31 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:31.321 262769 INFO neutron.agent.dhcp.agent [None req-7c5ad05d-274d-4277-bd6c-447db1c8fff3 - - - - - -] DHCP configuration for ports {'8af80160-0765-4e11-a969-4d1a37d4ff39'} is completed
Dec 05 10:14:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:14:31 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1393570881' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:14:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:31.429 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:14:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:31.608 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:14:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:31.609 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11531MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:14:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:31.609 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:14:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:31.609 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:14:31 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 05 10:14:31 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 05 10:14:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/842639612' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:14:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3331145327' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:14:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1393570881' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:14:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:31.920 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:14:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:31.921 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:14:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e199 e199: 6 total, 6 up, 6 in
Dec 05 10:14:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:32.004 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:14:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:32.152 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:14:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1804302479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:14:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:32.456 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:14:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:32.461 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:14:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:32.487 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:14:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:32.489 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:14:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:32.489 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:14:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:32.549 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:14:32 np0005546420.localdomain podman[324593]: 2025-12-05 10:14:32.605949596 +0000 UTC m=+0.043543256 container kill a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cdfb463-f1fe-4042-bcfb-e1732ca2300a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:14:32 np0005546420.localdomain dnsmasq[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/addn_hosts - 0 addresses
Dec 05 10:14:32 np0005546420.localdomain dnsmasq-dhcp[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/host
Dec 05 10:14:32 np0005546420.localdomain dnsmasq-dhcp[324437]: read /var/lib/neutron/dhcp/4cdfb463-f1fe-4042-bcfb-e1732ca2300a/opts
Dec 05 10:14:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:14:32 np0005546420.localdomain ceph-mon[298353]: pgmap v493: 177 pgs: 177 active+clean; 252 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 5.4 MiB/s wr, 64 op/s
Dec 05 10:14:32 np0005546420.localdomain ceph-mon[298353]: osdmap e199: 6 total, 6 up, 6 in
Dec 05 10:14:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1804302479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:14:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2594813420' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:14:32 np0005546420.localdomain systemd[1]: tmp-crun.1csWI9.mount: Deactivated successfully.
Dec 05 10:14:32 np0005546420.localdomain podman[324608]: 2025-12-05 10:14:32.726724664 +0000 UTC m=+0.085821451 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd)
Dec 05 10:14:32 np0005546420.localdomain podman[324608]: 2025-12-05 10:14:32.743594385 +0000 UTC m=+0.102691202 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:14:32 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:14:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:14:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2066558129' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:14:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2066558129' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:32 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:32Z|00334|binding|INFO|Releasing lport 729201e5-27a4-41de-b705-7bf2c1631601 from this chassis (sb_readonly=0)
Dec 05 10:14:32 np0005546420.localdomain kernel: device tap729201e5-27 left promiscuous mode
Dec 05 10:14:32 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:32Z|00335|binding|INFO|Setting lport 729201e5-27a4-41de-b705-7bf2c1631601 down in Southbound
Dec 05 10:14:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:32.805 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:32.822 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-4cdfb463-f1fe-4042-bcfb-e1732ca2300a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4cdfb463-f1fe-4042-bcfb-e1732ca2300a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bee55dc-5006-49e8-8ca1-9c68a4318919, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=729201e5-27a4-41de-b705-7bf2c1631601) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:14:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:32.824 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 729201e5-27a4-41de-b705-7bf2c1631601 in datapath 4cdfb463-f1fe-4042-bcfb-e1732ca2300a unbound from our chassis
Dec 05 10:14:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:32.826 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4cdfb463-f1fe-4042-bcfb-e1732ca2300a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:14:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:32.828 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[73af8ed5-f3e2-4a40-abc0-046567993212]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:14:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:32.836 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:33 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e200 e200: 6 total, 6 up, 6 in
Dec 05 10:14:33 np0005546420.localdomain dnsmasq[324437]: exiting on receipt of SIGTERM
Dec 05 10:14:33 np0005546420.localdomain podman[324652]: 2025-12-05 10:14:33.683068758 +0000 UTC m=+0.071263001 container kill a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cdfb463-f1fe-4042-bcfb-e1732ca2300a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:14:33 np0005546420.localdomain systemd[1]: libpod-a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff.scope: Deactivated successfully.
Dec 05 10:14:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2066558129' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2066558129' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:33 np0005546420.localdomain ceph-mon[298353]: osdmap e200: 6 total, 6 up, 6 in
Dec 05 10:14:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 05 10:14:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:14:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:14:33 np0005546420.localdomain podman[324666]: 2025-12-05 10:14:33.788605606 +0000 UTC m=+0.082581171 container died a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cdfb463-f1fe-4042-bcfb-e1732ca2300a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 10:14:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff-userdata-shm.mount: Deactivated successfully.
Dec 05 10:14:33 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-addd07d256bd7129a4e49ad47e6a463023f4c83973768039786a4e3664e3aaf1-merged.mount: Deactivated successfully.
Dec 05 10:14:33 np0005546420.localdomain podman[324666]: 2025-12-05 10:14:33.891211943 +0000 UTC m=+0.185187478 container remove a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cdfb463-f1fe-4042-bcfb-e1732ca2300a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:14:33 np0005546420.localdomain systemd[1]: libpod-conmon-a20d896000e51ff97aee334e884e54a864f9ad6f9ac8f3580cde357db6d0edff.scope: Deactivated successfully.
Dec 05 10:14:34 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:34.090 262769 INFO neutron.agent.dhcp.agent [None req-6902b3d1-9582-469e-91cf-a5fe80784f20 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:14:34 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:34.251 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:14:34 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d4cdfb463\x2df1fe\x2d4042\x2dbcfb\x2de1732ca2300a.mount: Deactivated successfully.
Dec 05 10:14:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:34.841 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:35 np0005546420.localdomain ceph-mon[298353]: pgmap v496: 177 pgs: 177 active+clean; 252 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 7.0 MiB/s wr, 63 op/s
Dec 05 10:14:35 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve47", "tenant_id": "fb88a523f48e4990b7617051dc3491c9", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:14:36 np0005546420.localdomain ceph-mon[298353]: pgmap v497: 177 pgs: 177 active+clean; 252 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 7.0 MiB/s wr, 62 op/s
Dec 05 10:14:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e201 e201: 6 total, 6 up, 6 in
Dec 05 10:14:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:37.182 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:37 np0005546420.localdomain ceph-mon[298353]: osdmap e201: 6 total, 6 up, 6 in
Dec 05 10:14:37 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6],prefix=session evict} (starting...)
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: pgmap v499: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 349 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 16 MiB/s wr, 105 op/s
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.323375) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678323440, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1375, "num_deletes": 257, "total_data_size": 1669313, "memory_usage": 1695832, "flush_reason": "Manual Compaction"}
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678333256, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 1090279, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29661, "largest_seqno": 31031, "table_properties": {"data_size": 1084612, "index_size": 2945, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13886, "raw_average_key_size": 21, "raw_value_size": 1072614, "raw_average_value_size": 1652, "num_data_blocks": 128, "num_entries": 649, "num_filter_entries": 649, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929610, "oldest_key_time": 1764929610, "file_creation_time": 1764929678, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 9930 microseconds, and 4321 cpu microseconds.
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.333309) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 1090279 bytes OK
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.333334) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.335005) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.335031) EVENT_LOG_v1 {"time_micros": 1764929678335023, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.335062) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1662539, prev total WAL file size 1662539, number of live WAL files 2.
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.336004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end)
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(1064KB)], [51(16MB)]
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678336076, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 18048824, "oldest_snapshot_seqno": -1}
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 13063 keys, 16758671 bytes, temperature: kUnknown
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678460195, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 16758671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16685612, "index_size": 39294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32709, "raw_key_size": 350023, "raw_average_key_size": 26, "raw_value_size": 16464808, "raw_average_value_size": 1260, "num_data_blocks": 1471, "num_entries": 13063, "num_filter_entries": 13063, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929678, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.460519) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 16758671 bytes
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.462440) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.3 rd, 134.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 16.2 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(31.9) write-amplify(15.4) OK, records in: 13597, records dropped: 534 output_compression: NoCompression
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.462467) EVENT_LOG_v1 {"time_micros": 1764929678462455, "job": 30, "event": "compaction_finished", "compaction_time_micros": 124224, "compaction_time_cpu_micros": 47826, "output_level": 6, "num_output_files": 1, "total_output_size": 16758671, "num_input_records": 13597, "num_output_records": 13063, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678462743, "job": 30, "event": "table_file_deletion", "file_number": 53}
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678465494, "job": 30, "event": "table_file_deletion", "file_number": 51}
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.335846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.465524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.465529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.465532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.465535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:38 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:14:38.465538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:14:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 05 10:14:39 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e202 e202: 6 total, 6 up, 6 in
Dec 05 10:14:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:40 np0005546420.localdomain ceph-mon[298353]: osdmap e202: 6 total, 6 up, 6 in
Dec 05 10:14:40 np0005546420.localdomain ceph-mon[298353]: pgmap v501: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 349 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 15 MiB/s wr, 94 op/s
Dec 05 10:14:41 np0005546420.localdomain ceph-mon[298353]: pgmap v502: 177 pgs: 177 active+clean; 477 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 111 KiB/s rd, 28 MiB/s wr, 163 op/s
Dec 05 10:14:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:42.184 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:14:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:42.186 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:14:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:42.187 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:14:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:42.187 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:14:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:42.217 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:42.217 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:14:43 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/605687792' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:43 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/605687792' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:44 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6],prefix=session evict} (starting...)
Dec 05 10:14:44 np0005546420.localdomain ceph-mon[298353]: pgmap v503: 177 pgs: 177 active+clean; 485 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 114 KiB/s rd, 29 MiB/s wr, 168 op/s
Dec 05 10:14:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 05 10:14:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec 05 10:14:44 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Dec 05 10:14:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:14:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:14:44 np0005546420.localdomain podman[324692]: 2025-12-05 10:14:44.514340685 +0000 UTC m=+0.087633956 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:14:44 np0005546420.localdomain podman[324692]: 2025-12-05 10:14:44.521352392 +0000 UTC m=+0.094645713 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:14:44 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:14:44 np0005546420.localdomain podman[324691]: 2025-12-05 10:14:44.613488366 +0000 UTC m=+0.189898723 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Dec 05 10:14:44 np0005546420.localdomain podman[324691]: 2025-12-05 10:14:44.629330545 +0000 UTC m=+0.205740942 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, com.redhat.component=ubi9-minimal-container, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 10:14:44 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:14:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 05 10:14:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 05 10:14:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "format": "json"}]: dispatch
Dec 05 10:14:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "force": true, "format": "json"}]: dispatch
Dec 05 10:14:45 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3381983602' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:45 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3381983602' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e203 e203: 6 total, 6 up, 6 in
Dec 05 10:14:46 np0005546420.localdomain ceph-mon[298353]: pgmap v504: 177 pgs: 177 active+clean; 485 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 98 KiB/s rd, 21 MiB/s wr, 141 op/s
Dec 05 10:14:46 np0005546420.localdomain ceph-mon[298353]: osdmap e203: 6 total, 6 up, 6 in
Dec 05 10:14:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:14:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:14:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:47.218 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:14:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:47.220 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:14:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:14:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:14:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:47.221 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:14:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:47.221 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:14:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:47.264 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:47.265 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:14:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:14:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18747 "" "Go-http-client/1.1"
Dec 05 10:14:47 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1223691897' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:47 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1223691897' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:48 np0005546420.localdomain ceph-mon[298353]: pgmap v506: 177 pgs: 177 active+clean; 613 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 107 KiB/s rd, 33 MiB/s wr, 157 op/s
Dec 05 10:14:48 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:14:48 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2887205306' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:48 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:14:48 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2887205306' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:14:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:14:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:14:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:14:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:14:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:14:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:14:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:14:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:14:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:14:48 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:48.992 2 INFO neutron.agent.securitygroups_rpc [None req-b4eb535c-8204-4d25-b0f4-40572269384f 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['3b13fc28-6bef-461a-b25f-c885640f870a']
Dec 05 10:14:49 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:49.254 2 INFO neutron.agent.securitygroups_rpc [None req-8953229a-9500-4eaa-91a9-0656e1e69b95 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['3b13fc28-6bef-461a-b25f-c885640f870a']
Dec 05 10:14:49 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2887205306' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:14:49 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2887205306' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:14:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:14:50 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:50.448 2 INFO neutron.agent.securitygroups_rpc [None req-acff6c3c-20c2-42e9-8367-8bf60c08998e 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 05 10:14:50 np0005546420.localdomain ceph-mon[298353]: pgmap v507: 177 pgs: 177 active+clean; 613 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 86 KiB/s rd, 26 MiB/s wr, 126 op/s
Dec 05 10:14:50 np0005546420.localdomain systemd[1]: tmp-crun.hFU7tv.mount: Deactivated successfully.
Dec 05 10:14:50 np0005546420.localdomain podman[324734]: 2025-12-05 10:14:50.518235535 +0000 UTC m=+0.093879559 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 05 10:14:50 np0005546420.localdomain podman[324734]: 2025-12-05 10:14:50.558260881 +0000 UTC m=+0.133904905 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 10:14:50 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:14:50 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:50.774 2 INFO neutron.agent.securitygroups_rpc [None req-b4d6abfd-89cc-4b34-8e57-eddd9213293a 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 05 10:14:51 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:51.218 2 INFO neutron.agent.securitygroups_rpc [None req-00e7fa0b-d6ce-4a61-9e9d-6e5fdfe76d15 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 05 10:14:51 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:51.427 2 INFO neutron.agent.securitygroups_rpc [None req-55df744d-37b9-4d9b-82c5-600a5711382e 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 05 10:14:51 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:51.628 2 INFO neutron.agent.securitygroups_rpc [None req-3d510268-4f8b-42df-9063-a848385b1789 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 05 10:14:51 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:51.766 2 INFO neutron.agent.securitygroups_rpc [None req-08fa5ba3-71c4-467c-9744-a34ca4d77e1c 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 05 10:14:51 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:51.962 2 INFO neutron.agent.securitygroups_rpc [None req-7daa08e9-f9ca-432e-a5fb-eae3684adb76 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 05 10:14:52 np0005546420.localdomain ceph-mon[298353]: pgmap v508: 177 pgs: 177 active+clean; 709 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 3.3 MiB/s rd, 23 MiB/s wr, 99 op/s
Dec 05 10:14:52 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:52.243 2 INFO neutron.agent.securitygroups_rpc [None req-0cf65179-4b03-4637-aa4f-57aa5de6772b 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 05 10:14:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:52.266 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:14:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:52.268 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:14:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:52.268 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:14:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:52.269 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:14:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:52.302 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:52.303 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:14:52 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:52.467 2 INFO neutron.agent.securitygroups_rpc [None req-b74fea50-bc3a-432f-a098-d6c59815cd89 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 05 10:14:52 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:52.769 2 INFO neutron.agent.securitygroups_rpc [None req-96912d19-e3e5-4dfa-9b21-f05bcc4a1e8c 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 05 10:14:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:14:53 np0005546420.localdomain systemd[1]: tmp-crun.2ErQh3.mount: Deactivated successfully.
Dec 05 10:14:53 np0005546420.localdomain podman[324759]: 2025-12-05 10:14:53.517684502 +0000 UTC m=+0.096289473 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:14:53 np0005546420.localdomain podman[324759]: 2025-12-05 10:14:53.531745166 +0000 UTC m=+0.110350127 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 10:14:53 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:14:53 np0005546420.localdomain sshd[324778]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:14:54 np0005546420.localdomain ceph-mon[298353]: pgmap v509: 177 pgs: 177 active+clean; 741 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 4.1 MiB/s rd, 26 MiB/s wr, 96 op/s
Dec 05 10:14:54 np0005546420.localdomain sshd[324778]: Received disconnect from 193.46.255.217 port 23918:11:  [preauth]
Dec 05 10:14:54 np0005546420.localdomain sshd[324778]: Disconnected from authenticating user root 193.46.255.217 port 23918 [preauth]
Dec 05 10:14:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:14:56 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:56.502 262769 INFO neutron.agent.linux.ip_lib [None req-0c3e2fb6-d6cd-4c71-bff7-892bfcd9883b - - - - - -] Device tap50145118-81 cannot be used as it has no MAC address
Dec 05 10:14:56 np0005546420.localdomain ceph-mon[298353]: pgmap v510: 177 pgs: 177 active+clean; 741 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 4.1 MiB/s rd, 26 MiB/s wr, 96 op/s
Dec 05 10:14:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:56.557 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:56 np0005546420.localdomain kernel: device tap50145118-81 entered promiscuous mode
Dec 05 10:14:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:56.568 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:56 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:56Z|00336|binding|INFO|Claiming lport 50145118-8113-4e6b-8c05-9f1ed9cd9f4d for this chassis.
Dec 05 10:14:56 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:56Z|00337|binding|INFO|50145118-8113-4e6b-8c05-9f1ed9cd9f4d: Claiming unknown
Dec 05 10:14:56 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929696.5758] manager: (tap50145118-81): new Generic device (/org/freedesktop/NetworkManager/Devices/59)
Dec 05 10:14:56 np0005546420.localdomain systemd-udevd[324790]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:14:56 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap50145118-81: No such device
Dec 05 10:14:56 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:56Z|00338|binding|INFO|Setting lport 50145118-8113-4e6b-8c05-9f1ed9cd9f4d ovn-installed in OVS
Dec 05 10:14:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:56.610 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:56 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap50145118-81: No such device
Dec 05 10:14:56 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap50145118-81: No such device
Dec 05 10:14:56 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap50145118-81: No such device
Dec 05 10:14:56 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap50145118-81: No such device
Dec 05 10:14:56 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap50145118-81: No such device
Dec 05 10:14:56 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap50145118-81: No such device
Dec 05 10:14:56 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap50145118-81: No such device
Dec 05 10:14:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:56.650 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:56.674 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:56 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:14:56Z|00339|binding|INFO|Setting lport 50145118-8113-4e6b-8c05-9f1ed9cd9f4d up in Southbound
Dec 05 10:14:56 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:56.835 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-caa6a74a-c61f-4553-a3e4-7b8097b1c04f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-caa6a74a-c61f-4553-a3e4-7b8097b1c04f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55023c0-8714-49cd-8d75-1d71ac8e18a9, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=50145118-8113-4e6b-8c05-9f1ed9cd9f4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:14:56 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:56.837 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 50145118-8113-4e6b-8c05-9f1ed9cd9f4d in datapath caa6a74a-c61f-4553-a3e4-7b8097b1c04f bound to our chassis
Dec 05 10:14:56 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:56.839 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 91e8318d-7b82-498a-ae9d-c8f6b8485e23 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:14:56 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:56.840 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network caa6a74a-c61f-4553-a3e4-7b8097b1c04f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:14:56 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:14:56.841 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[eebc061a-2ab1-4f10-aa50-8c8ebb619298]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:14:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:57.305 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:14:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:57.308 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:14:57 np0005546420.localdomain podman[324861]: 2025-12-05 10:14:57.64113026 +0000 UTC m=+0.104122806 container create a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa6a74a-c61f-4553-a3e4-7b8097b1c04f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:14:57 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:57.689 2 INFO neutron.agent.securitygroups_rpc [None req-5b721f9b-4734-47df-b750-a96ce7d95809 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['2d77f4a6-d9e8-4008-9e9c-4856540450f6']
Dec 05 10:14:57 np0005546420.localdomain podman[324861]: 2025-12-05 10:14:57.591015682 +0000 UTC m=+0.054008208 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:14:57 np0005546420.localdomain systemd[1]: Started libpod-conmon-a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4.scope.
Dec 05 10:14:57 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:14:57 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d60f6e41f8c0ab377b6dd1e9e9764ce2746bb499637868ceb4c576616d8703c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:14:57 np0005546420.localdomain podman[324861]: 2025-12-05 10:14:57.764800217 +0000 UTC m=+0.227792753 container init a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa6a74a-c61f-4553-a3e4-7b8097b1c04f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 05 10:14:57 np0005546420.localdomain podman[324861]: 2025-12-05 10:14:57.773890129 +0000 UTC m=+0.236882675 container start a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa6a74a-c61f-4553-a3e4-7b8097b1c04f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:14:57 np0005546420.localdomain dnsmasq[324879]: started, version 2.85 cachesize 150
Dec 05 10:14:57 np0005546420.localdomain dnsmasq[324879]: DNS service limited to local subnets
Dec 05 10:14:57 np0005546420.localdomain dnsmasq[324879]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:14:57 np0005546420.localdomain dnsmasq[324879]: warning: no upstream servers configured
Dec 05 10:14:57 np0005546420.localdomain dnsmasq-dhcp[324879]: DHCP, static leases only on 10.102.0.0, lease time 1d
Dec 05 10:14:57 np0005546420.localdomain dnsmasq[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/addn_hosts - 0 addresses
Dec 05 10:14:57 np0005546420.localdomain dnsmasq-dhcp[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/host
Dec 05 10:14:57 np0005546420.localdomain dnsmasq-dhcp[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/opts
Dec 05 10:14:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:57.833 262769 INFO neutron.agent.dhcp.agent [None req-1370a13b-dac8-4113-8bae-88f7ce6c9105 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:53Z, description=, device_id=5d4b4894-3319-4abc-b826-a288e4e446f6, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e42d60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e42dc0>], id=96af4e53-45d4-4e8d-9d23-09885a3b4c5a, ip_allocation=immediate, mac_address=fa:16:3e:e7:55:11, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:50Z, description=, dns_domain=, id=caa6a74a-c61f-4553-a3e4-7b8097b1c04f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1609032282, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13223, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2994, status=ACTIVE, subnets=['2e19f6ab-7e2e-4a6e-9fe1-1289764d6442'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:51Z, vlan_transparent=None, network_id=caa6a74a-c61f-4553-a3e4-7b8097b1c04f, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3013, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:53Z on network caa6a74a-c61f-4553-a3e4-7b8097b1c04f
Dec 05 10:14:57 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:57.959 262769 INFO neutron.agent.dhcp.agent [None req-ab6339c7-18d9-4d5d-b864-d8b09e955824 - - - - - -] DHCP configuration for ports {'afe40fe3-7160-40e5-99dd-205c1843d3a0'} is completed
Dec 05 10:14:58 np0005546420.localdomain dnsmasq[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/addn_hosts - 1 addresses
Dec 05 10:14:58 np0005546420.localdomain podman[324897]: 2025-12-05 10:14:58.061848278 +0000 UTC m=+0.065237645 container kill a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa6a74a-c61f-4553-a3e4-7b8097b1c04f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 10:14:58 np0005546420.localdomain dnsmasq-dhcp[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/host
Dec 05 10:14:58 np0005546420.localdomain dnsmasq-dhcp[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/opts
Dec 05 10:14:58 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:58.360 262769 INFO neutron.agent.dhcp.agent [None req-85d873d4-c15f-4daf-a8d1-953a2f963425 - - - - - -] DHCP configuration for ports {'96af4e53-45d4-4e8d-9d23-09885a3b4c5a'} is completed
Dec 05 10:14:58 np0005546420.localdomain ceph-mon[298353]: pgmap v511: 177 pgs: 177 active+clean; 859 MiB data, 2.8 GiB used, 39 GiB / 42 GiB avail; 3.7 MiB/s rd, 31 MiB/s wr, 138 op/s
Dec 05 10:14:58 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:14:58.657 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:53Z, description=, device_id=5d4b4894-3319-4abc-b826-a288e4e446f6, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e075b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99db3790>], id=96af4e53-45d4-4e8d-9d23-09885a3b4c5a, ip_allocation=immediate, mac_address=fa:16:3e:e7:55:11, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:50Z, description=, dns_domain=, id=caa6a74a-c61f-4553-a3e4-7b8097b1c04f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1609032282, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13223, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2994, status=ACTIVE, subnets=['2e19f6ab-7e2e-4a6e-9fe1-1289764d6442'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:51Z, vlan_transparent=None, network_id=caa6a74a-c61f-4553-a3e4-7b8097b1c04f, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3013, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:53Z on network caa6a74a-c61f-4553-a3e4-7b8097b1c04f
Dec 05 10:14:58 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:58.670 2 INFO neutron.agent.securitygroups_rpc [None req-ae0c85e9-ac65-4efd-bde1-c55d644e4787 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['6a825810-26b1-4733-b049-419f3c6eaa8b']
Dec 05 10:14:58 np0005546420.localdomain dnsmasq[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/addn_hosts - 1 addresses
Dec 05 10:14:58 np0005546420.localdomain dnsmasq-dhcp[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/host
Dec 05 10:14:58 np0005546420.localdomain dnsmasq-dhcp[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/opts
Dec 05 10:14:58 np0005546420.localdomain podman[324935]: 2025-12-05 10:14:58.878137858 +0000 UTC m=+0.053207573 container kill a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa6a74a-c61f-4553-a3e4-7b8097b1c04f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:14:58 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:14:58.922 2 INFO neutron.agent.securitygroups_rpc [None req-eb36a368-79c3-4dae-a106-964d225968bf 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['6a825810-26b1-4733-b049-419f3c6eaa8b']
Dec 05 10:14:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:14:58.928 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:00 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:00.227 262769 INFO neutron.agent.dhcp.agent [None req-ba02189d-7082-4054-9f83-917319698f9f - - - - - -] DHCP configuration for ports {'96af4e53-45d4-4e8d-9d23-09885a3b4c5a'} is completed
Dec 05 10:15:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:15:00 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:15:00 np0005546420.localdomain podman[324956]: 2025-12-05 10:15:00.522530973 +0000 UTC m=+0.090845086 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:15:00 np0005546420.localdomain podman[324956]: 2025-12-05 10:15:00.528792436 +0000 UTC m=+0.097106499 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:15:00 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:15:00 np0005546420.localdomain podman[324957]: 2025-12-05 10:15:00.567247124 +0000 UTC m=+0.133254446 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 05 10:15:00 np0005546420.localdomain ceph-mon[298353]: pgmap v512: 177 pgs: 177 active+clean; 859 MiB data, 2.8 GiB used, 39 GiB / 42 GiB avail; 3.5 MiB/s rd, 18 MiB/s wr, 89 op/s
Dec 05 10:15:00 np0005546420.localdomain podman[324957]: 2025-12-05 10:15:00.599445657 +0000 UTC m=+0.165452949 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:15:00 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:15:01 np0005546420.localdomain ceph-mon[298353]: pgmap v513: 177 pgs: 177 active+clean; 1008 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 3.5 MiB/s rd, 29 MiB/s wr, 134 op/s
Dec 05 10:15:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:02.311 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/460869024' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/460869024' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:15:03 np0005546420.localdomain systemd[1]: tmp-crun.OuW0r2.mount: Deactivated successfully.
Dec 05 10:15:03 np0005546420.localdomain podman[324997]: 2025-12-05 10:15:03.50608798 +0000 UTC m=+0.084876402 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 10:15:03 np0005546420.localdomain podman[324997]: 2025-12-05 10:15:03.522566879 +0000 UTC m=+0.101355311 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Dec 05 10:15:03 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:15:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1684029995' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1684029995' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:04 np0005546420.localdomain ceph-mon[298353]: pgmap v514: 177 pgs: 177 active+clean; 1010 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 819 KiB/s rd, 21 MiB/s wr, 98 op/s
Dec 05 10:15:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:04.132 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:15:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:04.132 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:15:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:04.133 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:15:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:05 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1288745077' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:05 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1288745077' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:05 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:15:05.517 2 INFO neutron.agent.securitygroups_rpc [None req-0e5293fa-6190-4c30-acc0-1d137f88da19 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['e53a6f68-d93a-4c1f-b949-d3e9b2de74b1']
Dec 05 10:15:05 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:15:05.725 2 INFO neutron.agent.securitygroups_rpc [None req-e790b3b3-b7d1-4362-aafa-fb0bf024c01a 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['e53a6f68-d93a-4c1f-b949-d3e9b2de74b1']
Dec 05 10:15:06 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:06.087 262769 INFO neutron.agent.linux.ip_lib [None req-d8bd6e87-f2fd-40cd-b57a-4e65b04feead - - - - - -] Device tapd8b7fc52-b4 cannot be used as it has no MAC address
Dec 05 10:15:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:06.164 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:06 np0005546420.localdomain kernel: device tapd8b7fc52-b4 entered promiscuous mode
Dec 05 10:15:06 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929706.1731] manager: (tapd8b7fc52-b4): new Generic device (/org/freedesktop/NetworkManager/Devices/60)
Dec 05 10:15:06 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:06Z|00340|binding|INFO|Claiming lport d8b7fc52-b4bc-4ae6-997a-8744dbdd4e21 for this chassis.
Dec 05 10:15:06 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:06Z|00341|binding|INFO|d8b7fc52-b4bc-4ae6-997a-8744dbdd4e21: Claiming unknown
Dec 05 10:15:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:06.173 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:06 np0005546420.localdomain systemd-udevd[325027]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:15:06 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:06.184 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.103.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-b9b7a6e8-1766-48ed-8517-54544d7a5b75', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b7a6e8-1766-48ed-8517-54544d7a5b75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=382848de-ed8e-44b3-9741-2099acc0a033, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=d8b7fc52-b4bc-4ae6-997a-8744dbdd4e21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:15:06 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:06.186 159503 INFO neutron.agent.ovn.metadata.agent [-] Port d8b7fc52-b4bc-4ae6-997a-8744dbdd4e21 in datapath b9b7a6e8-1766-48ed-8517-54544d7a5b75 bound to our chassis
Dec 05 10:15:06 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:06.188 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b9b7a6e8-1766-48ed-8517-54544d7a5b75 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:15:06 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:06.190 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a562d76b-f4d3-4545-a79d-bf32ef68b3d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:15:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e204 e204: 6 total, 6 up, 6 in
Dec 05 10:15:06 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd8b7fc52-b4: No such device
Dec 05 10:15:06 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:06Z|00342|binding|INFO|Setting lport d8b7fc52-b4bc-4ae6-997a-8744dbdd4e21 ovn-installed in OVS
Dec 05 10:15:06 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:06Z|00343|binding|INFO|Setting lport d8b7fc52-b4bc-4ae6-997a-8744dbdd4e21 up in Southbound
Dec 05 10:15:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:06.211 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:06 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd8b7fc52-b4: No such device
Dec 05 10:15:06 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd8b7fc52-b4: No such device
Dec 05 10:15:06 np0005546420.localdomain ceph-mon[298353]: pgmap v515: 177 pgs: 177 active+clean; 1010 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 62 KiB/s rd, 18 MiB/s wr, 97 op/s
Dec 05 10:15:06 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd8b7fc52-b4: No such device
Dec 05 10:15:06 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd8b7fc52-b4: No such device
Dec 05 10:15:06 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd8b7fc52-b4: No such device
Dec 05 10:15:06 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd8b7fc52-b4: No such device
Dec 05 10:15:06 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapd8b7fc52-b4: No such device
Dec 05 10:15:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:06.248 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:06.275 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:06 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:15:06.397 2 INFO neutron.agent.securitygroups_rpc [None req-562688ca-98a5-44ea-a36a-018c1f537f62 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 05 10:15:06 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:15:06.629 2 INFO neutron.agent.securitygroups_rpc [None req-f454cbbc-1bd4-4c89-8f88-4d0686edf94e 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 05 10:15:07 np0005546420.localdomain ceph-mon[298353]: osdmap e204: 6 total, 6 up, 6 in
Dec 05 10:15:07 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1568474033' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:07 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1568474033' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:07.343 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:07 np0005546420.localdomain podman[325098]: 2025-12-05 10:15:07.538473266 +0000 UTC m=+0.052710619 container create 1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b9b7a6e8-1766-48ed-8517-54544d7a5b75, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:15:07 np0005546420.localdomain systemd[1]: Started libpod-conmon-1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093.scope.
Dec 05 10:15:07 np0005546420.localdomain systemd[1]: tmp-crun.QGx78s.mount: Deactivated successfully.
Dec 05 10:15:07 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:15:07 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcde3b4ee2d4b9661b4f907e491388959a3501aaa34eb134fd16a248dbf52d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:15:07 np0005546420.localdomain podman[325098]: 2025-12-05 10:15:07.605881027 +0000 UTC m=+0.120118400 container init 1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b9b7a6e8-1766-48ed-8517-54544d7a5b75, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:15:07 np0005546420.localdomain podman[325098]: 2025-12-05 10:15:07.512114162 +0000 UTC m=+0.026351535 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:15:07 np0005546420.localdomain podman[325098]: 2025-12-05 10:15:07.616776203 +0000 UTC m=+0.131013586 container start 1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b9b7a6e8-1766-48ed-8517-54544d7a5b75, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:15:07 np0005546420.localdomain dnsmasq[325117]: started, version 2.85 cachesize 150
Dec 05 10:15:07 np0005546420.localdomain dnsmasq[325117]: DNS service limited to local subnets
Dec 05 10:15:07 np0005546420.localdomain dnsmasq[325117]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:15:07 np0005546420.localdomain dnsmasq[325117]: warning: no upstream servers configured
Dec 05 10:15:07 np0005546420.localdomain dnsmasq-dhcp[325117]: DHCP, static leases only on 10.103.0.0, lease time 1d
Dec 05 10:15:07 np0005546420.localdomain dnsmasq[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/addn_hosts - 0 addresses
Dec 05 10:15:07 np0005546420.localdomain dnsmasq-dhcp[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/host
Dec 05 10:15:07 np0005546420.localdomain dnsmasq-dhcp[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/opts
Dec 05 10:15:08 np0005546420.localdomain ceph-mon[298353]: pgmap v517: 177 pgs: 177 active+clean; 1.0 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 78 KiB/s rd, 25 MiB/s wr, 120 op/s
Dec 05 10:15:08 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:15:08.255 2 INFO neutron.agent.securitygroups_rpc [None req-7a2675cb-7b38-4211-acfb-ffbaf5662b1d 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 05 10:15:08 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:08.276 262769 INFO neutron.agent.dhcp.agent [None req-a8b1408a-f404-4c48-b4ac-0afe56c05207 - - - - - -] DHCP configuration for ports {'48894b33-ec07-41ff-b117-acee60f4dc8c'} is completed
Dec 05 10:15:08 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:15:08.699 2 INFO neutron.agent.securitygroups_rpc [None req-82d19382-8b47-4f74-a75a-d5e52337093c 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 05 10:15:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e205 e205: 6 total, 6 up, 6 in
Dec 05 10:15:09 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:15:09.317 2 INFO neutron.agent.securitygroups_rpc [None req-121fcc22-4c23-49fd-9195-231305396d34 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 05 10:15:09 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:09.792 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:15:09Z, description=, device_id=5d4b4894-3319-4abc-b826-a288e4e446f6, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f63cd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99f63610>], id=0c87bf9f-ad78-4048-98c0-7aa83aab1a91, ip_allocation=immediate, mac_address=fa:16:3e:21:91:a0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:58Z, description=, dns_domain=, id=b9b7a6e8-1766-48ed-8517-54544d7a5b75, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1667906822, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8115, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3038, status=ACTIVE, subnets=['c1802284-182c-4299-8a88-48418eb87d18'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:15:04Z, vlan_transparent=None, network_id=b9b7a6e8-1766-48ed-8517-54544d7a5b75, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3069, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:15:09Z on network b9b7a6e8-1766-48ed-8517-54544d7a5b75
Dec 05 10:15:09 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:15:09.886 2 INFO neutron.agent.securitygroups_rpc [None req-f71bc6c2-560f-406b-a2c3-c442701bff84 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 05 10:15:10 np0005546420.localdomain podman[325134]: 2025-12-05 10:15:10.01824662 +0000 UTC m=+0.057741823 container kill 1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b9b7a6e8-1766-48ed-8517-54544d7a5b75, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:15:10 np0005546420.localdomain systemd[1]: tmp-crun.Y8To9l.mount: Deactivated successfully.
Dec 05 10:15:10 np0005546420.localdomain dnsmasq[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/addn_hosts - 1 addresses
Dec 05 10:15:10 np0005546420.localdomain dnsmasq-dhcp[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/host
Dec 05 10:15:10 np0005546420.localdomain dnsmasq-dhcp[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/opts
Dec 05 10:15:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:10 np0005546420.localdomain ceph-mon[298353]: osdmap e205: 6 total, 6 up, 6 in
Dec 05 10:15:10 np0005546420.localdomain ceph-mon[298353]: pgmap v519: 177 pgs: 177 active+clean; 1.0 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 54 KiB/s rd, 15 MiB/s wr, 82 op/s
Dec 05 10:15:10 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:10.450 262769 INFO neutron.agent.dhcp.agent [None req-dfdd6eac-4400-4bf8-bb73-8d92c67922be - - - - - -] DHCP configuration for ports {'0c87bf9f-ad78-4048-98c0-7aa83aab1a91'} is completed
Dec 05 10:15:11 np0005546420.localdomain sudo[325155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:15:11 np0005546420.localdomain sudo[325155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:15:11 np0005546420.localdomain sudo[325155]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:11 np0005546420.localdomain sudo[325173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:15:11 np0005546420.localdomain sudo[325173]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:15:11 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:15:11.630 2 INFO neutron.agent.securitygroups_rpc [None req-0c4cfa16-1548-4253-961f-4264df7b6723 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['9c106a56-f234-42ba-b5ee-f23da94e12de']
Dec 05 10:15:11 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:11.828 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:15:09Z, description=, device_id=5d4b4894-3319-4abc-b826-a288e4e446f6, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0d5a00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a0d5af0>], id=0c87bf9f-ad78-4048-98c0-7aa83aab1a91, ip_allocation=immediate, mac_address=fa:16:3e:21:91:a0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:58Z, description=, dns_domain=, id=b9b7a6e8-1766-48ed-8517-54544d7a5b75, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1667906822, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8115, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3038, status=ACTIVE, subnets=['c1802284-182c-4299-8a88-48418eb87d18'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:15:04Z, vlan_transparent=None, network_id=b9b7a6e8-1766-48ed-8517-54544d7a5b75, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3069, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:15:09Z on network b9b7a6e8-1766-48ed-8517-54544d7a5b75
Dec 05 10:15:12 np0005546420.localdomain podman[325225]: 2025-12-05 10:15:12.053144861 +0000 UTC m=+0.053478612 container kill 1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b9b7a6e8-1766-48ed-8517-54544d7a5b75, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:15:12 np0005546420.localdomain dnsmasq[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/addn_hosts - 1 addresses
Dec 05 10:15:12 np0005546420.localdomain dnsmasq-dhcp[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/host
Dec 05 10:15:12 np0005546420.localdomain dnsmasq-dhcp[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/opts
Dec 05 10:15:12 np0005546420.localdomain ceph-mon[298353]: pgmap v520: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 84 KiB/s rd, 26 MiB/s wr, 124 op/s
Dec 05 10:15:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d52e7204-a858-4168-9b05-d7f96da9d351", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:15:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d52e7204-a858-4168-9b05-d7f96da9d351", "format": "json"}]: dispatch
Dec 05 10:15:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:15:12 np0005546420.localdomain sudo[325173]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:12 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:12.314 262769 INFO neutron.agent.dhcp.agent [None req-0598280c-e5b5-45b9-a048-84689718968e - - - - - -] DHCP configuration for ports {'0c87bf9f-ad78-4048-98c0-7aa83aab1a91'} is completed
Dec 05 10:15:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:12.345 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:15:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:12.347 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:15:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:12.347 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:15:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:12.347 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:15:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:12.384 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:12.385 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:15:12 np0005546420.localdomain sudo[325259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:15:12 np0005546420.localdomain sudo[325259]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:15:12 np0005546420.localdomain sudo[325259]: pam_unix(sudo:session): session closed for user root
Dec 05 10:15:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:15:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:15:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:15:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:15:13 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3266848616' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:13 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3266848616' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:14 np0005546420.localdomain ceph-mon[298353]: pgmap v521: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 87 KiB/s rd, 30 MiB/s wr, 130 op/s
Dec 05 10:15:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e206 e206: 6 total, 6 up, 6 in
Dec 05 10:15:15 np0005546420.localdomain ceph-mon[298353]: osdmap e206: 6 total, 6 up, 6 in
Dec 05 10:15:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:15:15 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:15:15 np0005546420.localdomain systemd[1]: tmp-crun.0Nbm2j.mount: Deactivated successfully.
Dec 05 10:15:15 np0005546420.localdomain podman[325278]: 2025-12-05 10:15:15.509841294 +0000 UTC m=+0.080306720 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:15:15 np0005546420.localdomain podman[325277]: 2025-12-05 10:15:15.570743604 +0000 UTC m=+0.139850828 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 10:15:15 np0005546420.localdomain podman[325278]: 2025-12-05 10:15:15.599660197 +0000 UTC m=+0.170125623 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:15:15 np0005546420.localdomain podman[325277]: 2025-12-05 10:15:15.608293114 +0000 UTC m=+0.177400298 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:15:15 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:15:15 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:15:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e207 e207: 6 total, 6 up, 6 in
Dec 05 10:15:16 np0005546420.localdomain ceph-mon[298353]: pgmap v523: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 34 KiB/s rd, 15 MiB/s wr, 49 op/s
Dec 05 10:15:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:15:16 np0005546420.localdomain ceph-mon[298353]: osdmap e207: 6 total, 6 up, 6 in
Dec 05 10:15:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:15:16 np0005546420.localdomain dnsmasq[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/addn_hosts - 0 addresses
Dec 05 10:15:16 np0005546420.localdomain dnsmasq-dhcp[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/host
Dec 05 10:15:16 np0005546420.localdomain dnsmasq-dhcp[325117]: read /var/lib/neutron/dhcp/b9b7a6e8-1766-48ed-8517-54544d7a5b75/opts
Dec 05 10:15:16 np0005546420.localdomain podman[325335]: 2025-12-05 10:15:16.915400176 +0000 UTC m=+0.060391625 container kill 1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b9b7a6e8-1766-48ed-8517-54544d7a5b75, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:15:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e208 e208: 6 total, 6 up, 6 in
Dec 05 10:15:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:17.133 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:17 np0005546420.localdomain kernel: device tapd8b7fc52-b4 left promiscuous mode
Dec 05 10:15:17 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:17Z|00344|binding|INFO|Releasing lport d8b7fc52-b4bc-4ae6-997a-8744dbdd4e21 from this chassis (sb_readonly=0)
Dec 05 10:15:17 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:17Z|00345|binding|INFO|Setting lport d8b7fc52-b4bc-4ae6-997a-8744dbdd4e21 down in Southbound
Dec 05 10:15:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:17.145 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.103.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-b9b7a6e8-1766-48ed-8517-54544d7a5b75', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b7a6e8-1766-48ed-8517-54544d7a5b75', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=382848de-ed8e-44b3-9741-2099acc0a033, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=d8b7fc52-b4bc-4ae6-997a-8744dbdd4e21) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:15:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:17.147 159503 INFO neutron.agent.ovn.metadata.agent [-] Port d8b7fc52-b4bc-4ae6-997a-8744dbdd4e21 in datapath b9b7a6e8-1766-48ed-8517-54544d7a5b75 unbound from our chassis
Dec 05 10:15:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:17.150 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b7a6e8-1766-48ed-8517-54544d7a5b75, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:15:17 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:17.151 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[9f227a59-dcf7-4369-9e7d-ff09ba25c75a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:15:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:17.157 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:15:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:15:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:15:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158563 "" "Go-http-client/1.1"
Dec 05 10:15:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:15:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19684 "" "Go-http-client/1.1"
Dec 05 10:15:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:17.421 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:17 np0005546420.localdomain dnsmasq[325117]: exiting on receipt of SIGTERM
Dec 05 10:15:17 np0005546420.localdomain podman[325374]: 2025-12-05 10:15:17.471430632 +0000 UTC m=+0.103309861 container kill 1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b9b7a6e8-1766-48ed-8517-54544d7a5b75, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:15:17 np0005546420.localdomain systemd[1]: libpod-1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093.scope: Deactivated successfully.
Dec 05 10:15:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "39eff584-c9b5-4dbb-b4e1-af4206cc68e9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:15:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "39eff584-c9b5-4dbb-b4e1-af4206cc68e9", "format": "json"}]: dispatch
Dec 05 10:15:17 np0005546420.localdomain ceph-mon[298353]: osdmap e208: 6 total, 6 up, 6 in
Dec 05 10:15:17 np0005546420.localdomain podman[325387]: 2025-12-05 10:15:17.539987558 +0000 UTC m=+0.054446112 container died 1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b9b7a6e8-1766-48ed-8517-54544d7a5b75, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 05 10:15:17 np0005546420.localdomain systemd[1]: tmp-crun.BdBWUJ.mount: Deactivated successfully.
Dec 05 10:15:17 np0005546420.localdomain podman[325387]: 2025-12-05 10:15:17.631928056 +0000 UTC m=+0.146386580 container cleanup 1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b9b7a6e8-1766-48ed-8517-54544d7a5b75, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 10:15:17 np0005546420.localdomain systemd[1]: libpod-conmon-1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093.scope: Deactivated successfully.
Dec 05 10:15:17 np0005546420.localdomain podman[325389]: 2025-12-05 10:15:17.65343579 +0000 UTC m=+0.151783466 container remove 1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b9b7a6e8-1766-48ed-8517-54544d7a5b75, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:15:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:17.678 262769 INFO neutron.agent.dhcp.agent [None req-786d94c6-d307-4661-b7b9-c855beae4242 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:15:17 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:17.684 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:15:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:17.913 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-ddcde3b4ee2d4b9661b4f907e491388959a3501aaa34eb134fd16a248dbf52d7-merged.mount: Deactivated successfully.
Dec 05 10:15:17 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d38a2cb2be31b7eb83830c84c092c5566bd50bd1b04d91afe69e9c89f5bc093-userdata-shm.mount: Deactivated successfully.
Dec 05 10:15:17 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2db9b7a6e8\x2d1766\x2d48ed\x2d8517\x2d54544d7a5b75.mount: Deactivated successfully.
Dec 05 10:15:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:15:18 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4070565923' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:15:18 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4070565923' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:18 np0005546420.localdomain ceph-mon[298353]: pgmap v526: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 46 KiB/s rd, 16 MiB/s wr, 71 op/s
Dec 05 10:15:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4070565923' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:18 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4070565923' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:18 np0005546420.localdomain dnsmasq[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/addn_hosts - 0 addresses
Dec 05 10:15:18 np0005546420.localdomain dnsmasq-dhcp[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/host
Dec 05 10:15:18 np0005546420.localdomain dnsmasq-dhcp[324879]: read /var/lib/neutron/dhcp/caa6a74a-c61f-4553-a3e4-7b8097b1c04f/opts
Dec 05 10:15:18 np0005546420.localdomain podman[325432]: 2025-12-05 10:15:18.540862297 +0000 UTC m=+0.068720583 container kill a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa6a74a-c61f-4553-a3e4-7b8097b1c04f, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 05 10:15:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:18.732 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:18 np0005546420.localdomain kernel: device tap50145118-81 left promiscuous mode
Dec 05 10:15:18 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:18Z|00346|binding|INFO|Releasing lport 50145118-8113-4e6b-8c05-9f1ed9cd9f4d from this chassis (sb_readonly=0)
Dec 05 10:15:18 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:18Z|00347|binding|INFO|Setting lport 50145118-8113-4e6b-8c05-9f1ed9cd9f4d down in Southbound
Dec 05 10:15:18 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:18.742 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-caa6a74a-c61f-4553-a3e4-7b8097b1c04f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-caa6a74a-c61f-4553-a3e4-7b8097b1c04f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55023c0-8714-49cd-8d75-1d71ac8e18a9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=50145118-8113-4e6b-8c05-9f1ed9cd9f4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:15:18 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:18.745 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 50145118-8113-4e6b-8c05-9f1ed9cd9f4d in datapath caa6a74a-c61f-4553-a3e4-7b8097b1c04f unbound from our chassis
Dec 05 10:15:18 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:18.749 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network caa6a74a-c61f-4553-a3e4-7b8097b1c04f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:15:18 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:18.750 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[0f808b48-de82-47b6-8592-659ee39785d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:15:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:18.755 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:15:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:15:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:15:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:15:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:15:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:15:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:15:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:15:18 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 10:15:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:15:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:15:18 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 10:15:19 np0005546420.localdomain dnsmasq[324879]: exiting on receipt of SIGTERM
Dec 05 10:15:19 np0005546420.localdomain podman[325471]: 2025-12-05 10:15:19.355407402 +0000 UTC m=+0.063154840 container kill a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa6a74a-c61f-4553-a3e4-7b8097b1c04f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:15:19 np0005546420.localdomain systemd[1]: libpod-a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4.scope: Deactivated successfully.
Dec 05 10:15:19 np0005546420.localdomain podman[325483]: 2025-12-05 10:15:19.433327068 +0000 UTC m=+0.059034513 container died a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa6a74a-c61f-4553-a3e4-7b8097b1c04f, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:15:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:19.466 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:19.467 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:15:19 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:19Z|00348|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0
Dec 05 10:15:19 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:19Z|00349|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0
Dec 05 10:15:19 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:19Z|00350|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0
Dec 05 10:15:19 np0005546420.localdomain systemd[1]: tmp-crun.HyH6na.mount: Deactivated successfully.
Dec 05 10:15:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:19.501 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:19.503 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:19 np0005546420.localdomain systemd[1]: tmp-crun.SnrzMd.mount: Deactivated successfully.
Dec 05 10:15:19 np0005546420.localdomain podman[325483]: 2025-12-05 10:15:19.537724252 +0000 UTC m=+0.163431657 container cleanup a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa6a74a-c61f-4553-a3e4-7b8097b1c04f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:15:19 np0005546420.localdomain systemd[1]: libpod-conmon-a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4.scope: Deactivated successfully.
Dec 05 10:15:19 np0005546420.localdomain podman[325485]: 2025-12-05 10:15:19.55615517 +0000 UTC m=+0.175078225 container remove a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-caa6a74a-c61f-4553-a3e4-7b8097b1c04f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:15:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:19.570 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:19 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:19.588 262769 INFO neutron.agent.dhcp.agent [None req-3d4a6b4f-7d53-4824-b2f6-c23401acba0f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:15:19 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:19.588 262769 INFO neutron.agent.dhcp.agent [None req-3d4a6b4f-7d53-4824-b2f6-c23401acba0f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:15:19 np0005546420.localdomain dnsmasq[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/addn_hosts - 0 addresses
Dec 05 10:15:19 np0005546420.localdomain dnsmasq-dhcp[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/host
Dec 05 10:15:19 np0005546420.localdomain dnsmasq-dhcp[323965]: read /var/lib/neutron/dhcp/de7b370c-b329-459e-a644-ff68f6112395/opts
Dec 05 10:15:19 np0005546420.localdomain podman[325522]: 2025-12-05 10:15:19.651838794 +0000 UTC m=+0.057844566 container kill 3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7b370c-b329-459e-a644-ff68f6112395, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:15:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:19.889 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:19 np0005546420.localdomain kernel: device tapa37fd622-10 left promiscuous mode
Dec 05 10:15:19 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:19Z|00351|binding|INFO|Releasing lport a37fd622-10aa-4a64-8d6d-a2d9fe92452b from this chassis (sb_readonly=0)
Dec 05 10:15:19 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:19Z|00352|binding|INFO|Setting lport a37fd622-10aa-4a64-8d6d-a2d9fe92452b down in Southbound
Dec 05 10:15:19 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:19.903 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-de7b370c-b329-459e-a644-ff68f6112395', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de7b370c-b329-459e-a644-ff68f6112395', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8730b222fadf4a249823e59d8b326dde', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b2cc7c18-2a28-45cd-9e3a-7a28f4302155, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=a37fd622-10aa-4a64-8d6d-a2d9fe92452b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:15:19 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:19.905 159503 INFO neutron.agent.ovn.metadata.agent [-] Port a37fd622-10aa-4a64-8d6d-a2d9fe92452b in datapath de7b370c-b329-459e-a644-ff68f6112395 unbound from our chassis
Dec 05 10:15:19 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:19.906 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de7b370c-b329-459e-a644-ff68f6112395, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:15:19 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:19.907 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[64d846e1-9c17-40ee-9188-6da6be278fd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:15:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:19.914 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-9d60f6e41f8c0ab377b6dd1e9e9764ce2746bb499637868ceb4c576616d8703c-merged.mount: Deactivated successfully.
Dec 05 10:15:20 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0bac92c125dc23b70664f93cd0a98b8084442e813091e693029a7cbff8d55a4-userdata-shm.mount: Deactivated successfully.
Dec 05 10:15:20 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2dcaa6a74a\x2dc61f\x2d4553\x2da3e4\x2d7b8097b1c04f.mount: Deactivated successfully.
Dec 05 10:15:20 np0005546420.localdomain ceph-mon[298353]: pgmap v527: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 41 KiB/s rd, 11 MiB/s wr, 62 op/s
Dec 05 10:15:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3134142781' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3134142781' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:15:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:20.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:20.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:15:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:20.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:15:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:20.891 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:15:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:15:21 np0005546420.localdomain podman[325543]: 2025-12-05 10:15:21.504329493 +0000 UTC m=+0.078301068 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 10:15:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "39eff584-c9b5-4dbb-b4e1-af4206cc68e9", "format": "json"}]: dispatch
Dec 05 10:15:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "39eff584-c9b5-4dbb-b4e1-af4206cc68e9", "force": true, "format": "json"}]: dispatch
Dec 05 10:15:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:15:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3", "format": "json"}]: dispatch
Dec 05 10:15:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2751525241' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2751525241' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:21 np0005546420.localdomain podman[325543]: 2025-12-05 10:15:21.589129881 +0000 UTC m=+0.163101476 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:15:21 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:15:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:21.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:21.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:22.425 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:22.429 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:22 np0005546420.localdomain ceph-mon[298353]: pgmap v528: 177 pgs: 177 active+clean; 373 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 8.9 MiB/s wr, 157 op/s
Dec 05 10:15:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3057450815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:15:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3296104708' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3296104708' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2692595376' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2692595376' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2145583600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:15:23 np0005546420.localdomain dnsmasq[323965]: exiting on receipt of SIGTERM
Dec 05 10:15:23 np0005546420.localdomain podman[325584]: 2025-12-05 10:15:23.69814727 +0000 UTC m=+0.060246922 container kill 3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7b370c-b329-459e-a644-ff68f6112395, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:15:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:15:23 np0005546420.localdomain systemd[1]: libpod-3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177.scope: Deactivated successfully.
Dec 05 10:15:23 np0005546420.localdomain podman[325599]: 2025-12-05 10:15:23.778686955 +0000 UTC m=+0.059618001 container died 3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7b370c-b329-459e-a644-ff68f6112395, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:15:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177-userdata-shm.mount: Deactivated successfully.
Dec 05 10:15:23 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-47f54e29a409463f5531ff3abed8d1c5fc75fca6017ccce223ab913d590b7461-merged.mount: Deactivated successfully.
Dec 05 10:15:23 np0005546420.localdomain podman[325599]: 2025-12-05 10:15:23.816382239 +0000 UTC m=+0.097313265 container remove 3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-de7b370c-b329-459e-a644-ff68f6112395, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:15:23 np0005546420.localdomain podman[325604]: 2025-12-05 10:15:23.864105083 +0000 UTC m=+0.135790824 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:15:23 np0005546420.localdomain podman[325604]: 2025-12-05 10:15:23.875599737 +0000 UTC m=+0.147285438 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:15:23 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:15:23 np0005546420.localdomain systemd[1]: libpod-conmon-3f08ecd466375ba54afbde02b4a03f2cf31722cf625475aa9a0528e9bab0d177.scope: Deactivated successfully.
Dec 05 10:15:24 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:24.065 262769 INFO neutron.agent.dhcp.agent [None req-fe1f2213-8ae0-4bf8-879c-a335d8734df6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:15:24 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:24.270 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:15:24 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:15:24.600 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:15:24 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d52e7204-a858-4168-9b05-d7f96da9d351", "format": "json"}]: dispatch
Dec 05 10:15:24 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d52e7204-a858-4168-9b05-d7f96da9d351", "force": true, "format": "json"}]: dispatch
Dec 05 10:15:24 np0005546420.localdomain ceph-mon[298353]: pgmap v529: 177 pgs: 177 active+clean; 197 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 92 KiB/s rd, 8.0 MiB/s wr, 147 op/s
Dec 05 10:15:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2342090555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2342090555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:24 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2dde7b370c\x2db329\x2d459e\x2da644\x2dff68f6112395.mount: Deactivated successfully.
Dec 05 10:15:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:24.854 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:25 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3", "format": "json"}]: dispatch
Dec 05 10:15:25 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3", "force": true, "format": "json"}]: dispatch
Dec 05 10:15:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e209 e209: 6 total, 6 up, 6 in
Dec 05 10:15:26 np0005546420.localdomain ceph-mon[298353]: pgmap v530: 177 pgs: 177 active+clean; 197 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 6.8 MiB/s wr, 125 op/s
Dec 05 10:15:26 np0005546420.localdomain ceph-mon[298353]: osdmap e209: 6 total, 6 up, 6 in
Dec 05 10:15:26 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:15:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:26.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:26.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:27.427 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "636df4c8-dabc-4041-93c0-674c71210a5e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:15:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "636df4c8-dabc-4041-93c0-674c71210a5e", "format": "json"}]: dispatch
Dec 05 10:15:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:27.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:27.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:28 np0005546420.localdomain ceph-mon[298353]: pgmap v532: 177 pgs: 177 active+clean; 197 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 47 KiB/s wr, 116 op/s
Dec 05 10:15:29 np0005546420.localdomain sshd[325641]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:15:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:29.616 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:29.616 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:15:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:29.619 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:15:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:29.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:30 np0005546420.localdomain ceph-mon[298353]: pgmap v533: 177 pgs: 177 active+clean; 197 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 47 KiB/s wr, 116 op/s
Dec 05 10:15:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "636df4c8-dabc-4041-93c0-674c71210a5e", "format": "json"}]: dispatch
Dec 05 10:15:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "636df4c8-dabc-4041-93c0-674c71210a5e", "force": true, "format": "json"}]: dispatch
Dec 05 10:15:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:15:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:15:31 np0005546420.localdomain podman[325642]: 2025-12-05 10:15:31.517051621 +0000 UTC m=+0.089821774 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:15:31 np0005546420.localdomain podman[325642]: 2025-12-05 10:15:31.554330092 +0000 UTC m=+0.127100235 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:15:31 np0005546420.localdomain podman[325643]: 2025-12-05 10:15:31.569899863 +0000 UTC m=+0.139454937 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 10:15:31 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:15:31 np0005546420.localdomain podman[325643]: 2025-12-05 10:15:31.575716543 +0000 UTC m=+0.145271617 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 05 10:15:31 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:15:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:31.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:15:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:31.904 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:15:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:31.905 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:15:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:31.905 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:15:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:31.905 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:15:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:31.906 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
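The audit sequence above — take the "compute_resources" lock, then shell out to ceph df — uses oslo.concurrency. A minimal sketch of the two primitives as logged (not Nova's actual code):

    from oslo_concurrency import lockutils, processutils

    @lockutils.synchronized('compute_resources')
    def audit_resources():
        # Serialized against other resource-tracker operations holding the
        # same lock name, matching the Acquiring/acquired lines above.
        out, _err = processutils.execute(
            'ceph', 'df', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        return out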
Dec 05 10:15:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1174594917' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:15:32 np0005546420.localdomain ceph-mon[298353]: pgmap v534: 177 pgs: 177 active+clean; 198 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 43 KiB/s wr, 42 op/s
Dec 05 10:15:32 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:15:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:15:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3531857231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:15:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:32.399 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
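ceph df --format=json (the command timed at 0.494s above) returns cluster totals plus per-pool usage. A sketch of parsing the output; the key names match recent Ceph releases but can vary by version:

    import json, subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(raw)
    total = df['stats']['total_bytes'] / 2**30
    avail = df['stats']['total_avail_bytes'] / 2**30
    print(f'{avail:.0f} GiB / {total:.0f} GiB avail')   # cf. the pgmap lines above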
Dec 05 10:15:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:32.473 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:32.657 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:15:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:32.659 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11533MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
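In the hypervisor resource view above every PCI device reports numa_node: null, so no per-device NUMA affinity is available (consistent with the socket-affinity warning two lines earlier); vendor 1af4 is Red Hat virtio, 8086 Intel chipset functions. A small filter over the same JSON, abridged to three entries:

    pci_devices = [   # abridged from the resource view above
        {'address': '0000:00:02.0', 'vendor_id': '1af4', 'product_id': '1050'},
        {'address': '0000:00:01.0', 'vendor_id': '8086', 'product_id': '7000'},
        {'address': '0000:00:07.0', 'vendor_id': '1af4', 'product_id': '1000'},
    ]
    virtio = [d['address'] for d in pci_devices if d['vendor_id'] == '1af4']
    print(virtio)   # ['0000:00:02.0', '0000:00:07.0']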
Dec 05 10:15:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:32.660 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:15:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:32.660 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:15:33 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:15:33 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "format": "json"}]: dispatch
Dec 05 10:15:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3531857231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:15:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1926870223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:15:33 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:15:33.621 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
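The DbSetCommand above updates the Chassis_Private record's external_ids to acknowledge southbound config 18. With ovsdbapp the same write looks roughly like this, assuming an already-connected OVN Southbound API object sb_api (connection setup omitted; if_exists mirrors the logged command):

    sb_api.db_set(
        'Chassis_Private', 'c2157608-8f70-44ef-883c-3db22f367c76',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),
        if_exists=True,
    ).execute(check_error=True)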
Dec 05 10:15:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:33.985 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:15:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:33.986 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.007 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.030 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.031 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
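Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class; with the figures above:

    inventory = {   # from the ProviderTree update above
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 1,   'allocation_ratio': 1.0},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0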
Dec 05 10:15:34 np0005546420.localdomain ceph-mon[298353]: pgmap v535: 177 pgs: 177 active+clean; 198 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 43 KiB/s wr, 39 op/s
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.049 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.079 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.097 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:15:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:15:34 np0005546420.localdomain podman[325724]: 2025-12-05 10:15:34.499629288 +0000 UTC m=+0.076548344 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:15:34 np0005546420.localdomain podman[325724]: 2025-12-05 10:15:34.534588077 +0000 UTC m=+0.111507063 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 10:15:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:15:34 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2523016085' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:15:34 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.554 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.560 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.578 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.580 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:15:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:34.581 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:15:35 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:15:35.063 2 INFO neutron.agent.securitygroups_rpc [None req-1518dd7f-186b-4c5f-9ddd-a06eab8a04f6 8c95b42a11ae4b2ca59be36067b7e35c c2c66bea319748f696485854e7041763 - - default default] Security group rule updated ['3d5a4cde-c54d-4be1-a363-4165dbdf4da7']
Dec 05 10:15:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2523016085' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:15:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4151239959' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:15:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4151239959' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:15:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:36 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "snap_name": "74f67cad-1d8c-4e22-8727-85e4c5d4ccb8", "format": "json"}]: dispatch
Dec 05 10:15:36 np0005546420.localdomain ceph-mon[298353]: pgmap v536: 177 pgs: 177 active+clean; 198 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 43 KiB/s wr, 39 op/s
Dec 05 10:15:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:37.503 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:15:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:37.505 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:37.506 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5031 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:15:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:37.506 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:15:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:37.506 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
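The IDLE/ACTIVE churn above is the python-ovs reconnect FSM's inactivity probe: after roughly 5 s without traffic on tcp:127.0.0.1:6640 it sends an echo and drops the session if nothing comes back within another interval. A simplified model of that timer (not the library's code):

    import time

    PROBE_INTERVAL_MS = 5000          # default probe interval

    last_rx = time.monotonic()

    def poll(received: bool) -> str:
        """Toy version of the reconnect state machine's probe decision."""
        global last_rx
        if received:
            last_rx = time.monotonic()    # any traffic resets the timer
            return 'ACTIVE'
        idle_ms = (time.monotonic() - last_rx) * 1000
        if idle_ms >= PROBE_INTERVAL_MS:
            return 'IDLE'                 # probe sent; a reply is now required
        return 'ACTIVE'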
Dec 05 10:15:38 np0005546420.localdomain ceph-mon[298353]: pgmap v537: 177 pgs: 177 active+clean; 198 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 55 KiB/s wr, 64 op/s
Dec 05 10:15:39 np0005546420.localdomain sshd[325641]: error: kex_exchange_identification: read: Connection timed out
Dec 05 10:15:39 np0005546420.localdomain sshd[325641]: banner exchange: Connection from 106.12.173.59 port 35896: Connection timed out
Dec 05 10:15:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:40 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "snap_name": "74f67cad-1d8c-4e22-8727-85e4c5d4ccb8_f3c8f45f-f48a-4e3d-9a84-9ca89467d072", "force": true, "format": "json"}]: dispatch
Dec 05 10:15:40 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "snap_name": "74f67cad-1d8c-4e22-8727-85e4c5d4ccb8", "force": true, "format": "json"}]: dispatch
Dec 05 10:15:40 np0005546420.localdomain ceph-mon[298353]: pgmap v538: 177 pgs: 177 active+clean; 198 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 31 KiB/s wr, 32 op/s
Dec 05 10:15:42 np0005546420.localdomain ceph-mon[298353]: pgmap v539: 177 pgs: 177 active+clean; 198 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 41 KiB/s wr, 34 op/s
Dec 05 10:15:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:42.509 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:15:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:42.511 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:15:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:42.511 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:15:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:42.511 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:15:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:42.556 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:42.557 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:15:44 np0005546420.localdomain ceph-mon[298353]: pgmap v540: 177 pgs: 177 active+clean; 198 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 27 KiB/s wr, 31 op/s
Dec 05 10:15:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "format": "json"}]: dispatch
Dec 05 10:15:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "force": true, "format": "json"}]: dispatch
Dec 05 10:15:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:15:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:15:46 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:15:46 np0005546420.localdomain podman[325745]: 2025-12-05 10:15:46.491539727 +0000 UTC m=+0.070091475 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:15:46 np0005546420.localdomain podman[325745]: 2025-12-05 10:15:46.505425616 +0000 UTC m=+0.083977384 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec 05 10:15:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "84c28651-3e0c-41ba-8232-c0a4261265e5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:15:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "84c28651-3e0c-41ba-8232-c0a4261265e5", "format": "json"}]: dispatch
Dec 05 10:15:46 np0005546420.localdomain ceph-mon[298353]: pgmap v541: 177 pgs: 177 active+clean; 198 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 26 KiB/s wr, 30 op/s
Dec 05 10:15:46 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:15:46 np0005546420.localdomain podman[325746]: 2025-12-05 10:15:46.559471514 +0000 UTC m=+0.133599255 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:15:46 np0005546420.localdomain podman[325746]: 2025-12-05 10:15:46.57069966 +0000 UTC m=+0.144827401 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:15:46 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:15:47 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e210 e210: 6 total, 6 up, 6 in
Dec 05 10:15:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:15:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:15:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:15:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:15:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:15:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18265 "" "Go-http-client/1.1"
Dec 05 10:15:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:47.558 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:15:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:47.561 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:15:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:47.561 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:15:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:47.561 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:15:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:47.589 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:47.590 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:15:48 np0005546420.localdomain ceph-mon[298353]: osdmap e210: 6 total, 6 up, 6 in
Dec 05 10:15:48 np0005546420.localdomain ceph-mon[298353]: pgmap v543: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 37 KiB/s wr, 4 op/s
Dec 05 10:15:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:15:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:15:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:15:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:15:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:15:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:15:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:15:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:15:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:15:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
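The exporter errors above mean it found no ovs-appctl control sockets for ovsdb-server or ovn-northd at the paths it searches (ovn-northd normally runs on controller nodes, not a compute), and the PMD queries fail because no userspace (dpif-netdev) datapath is configured. A quick existence check, assuming the conventional rundir locations:

    import glob

    # Conventional control-socket locations (assumed paths).
    for pat in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
        print(pat, glob.glob(pat))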
Dec 05 10:15:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:50 np0005546420.localdomain ceph-mon[298353]: pgmap v544: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 37 KiB/s wr, 4 op/s
Dec 05 10:15:51 np0005546420.localdomain sshd[325786]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:15:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "84c28651-3e0c-41ba-8232-c0a4261265e5", "format": "json"}]: dispatch
Dec 05 10:15:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "84c28651-3e0c-41ba-8232-c0a4261265e5", "force": true, "format": "json"}]: dispatch
Dec 05 10:15:52 np0005546420.localdomain ceph-mon[298353]: pgmap v545: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 36 KiB/s wr, 3 op/s
Dec 05 10:15:52 np0005546420.localdomain sshd[325786]: error: kex_exchange_identification: Connection closed by remote host
Dec 05 10:15:52 np0005546420.localdomain sshd[325786]: Connection closed by 43.225.159.82 port 38724
Dec 05 10:15:52 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:15:52 np0005546420.localdomain podman[325787]: 2025-12-05 10:15:52.508694945 +0000 UTC m=+0.085025225 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:15:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:52.592 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:15:52 np0005546420.localdomain podman[325787]: 2025-12-05 10:15:52.627829604 +0000 UTC m=+0.204159834 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 10:15:52 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:15:54 np0005546420.localdomain sshd[325812]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:15:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:15:54 np0005546420.localdomain ceph-mon[298353]: pgmap v546: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 36 KiB/s wr, 3 op/s
Dec 05 10:15:54 np0005546420.localdomain podman[325813]: 2025-12-05 10:15:54.527890162 +0000 UTC m=+0.099707170 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:15:54 np0005546420.localdomain podman[325813]: 2025-12-05 10:15:54.564148651 +0000 UTC m=+0.135965569 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:15:54 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:15:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:15:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e211 e211: 6 total, 6 up, 6 in
Dec 05 10:15:56 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:15:56Z|00353|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 05 10:15:56 np0005546420.localdomain ceph-mon[298353]: pgmap v547: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 36 KiB/s wr, 3 op/s
Dec 05 10:15:56 np0005546420.localdomain ceph-mon[298353]: osdmap e211: 6 total, 6 up, 6 in
Dec 05 10:15:57 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e212 e212: 6 total, 6 up, 6 in
Dec 05 10:15:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:15:57.596 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:15:58 np0005546420.localdomain ceph-mon[298353]: pgmap v549: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 6.4 KiB/s rd, 22 KiB/s wr, 11 op/s
Dec 05 10:15:58 np0005546420.localdomain ceph-mon[298353]: osdmap e212: 6 total, 6 up, 6 in
Dec 05 10:15:59 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e213 e213: 6 total, 6 up, 6 in
Dec 05 10:16:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:00 np0005546420.localdomain ceph-mon[298353]: pgmap v551: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 8.0 KiB/s rd, 14 KiB/s wr, 13 op/s
Dec 05 10:16:00 np0005546420.localdomain ceph-mon[298353]: osdmap e213: 6 total, 6 up, 6 in
Dec 05 10:16:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:16:00 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1620176454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:16:00 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1620176454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:01 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e214 e214: 6 total, 6 up, 6 in
Dec 05 10:16:01 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1620176454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:01 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1620176454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:16:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:16:02 np0005546420.localdomain podman[325833]: 2025-12-05 10:16:02.499001461 +0000 UTC m=+0.076280346 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:16:02 np0005546420.localdomain podman[325833]: 2025-12-05 10:16:02.5041252 +0000 UTC m=+0.081404105 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 05 10:16:02 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
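The three lines above are one complete podman healthcheck cycle for ovn_metadata_agent: a systemd timer fires a transient unit named after the container ID, the unit runs `podman healthcheck run`, podman execs the configured test (`/openstack/healthcheck`) inside the container and records the result (health_status=healthy), the exec session dies, and the unit deactivates. A minimal re-run of that cycle, as a sketch (container ID copied from the log; assumes podman on PATH and sufficient privileges):

```python
import subprocess

# Container ID taken from the ovn_metadata_agent lines above.
CONTAINER = "e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0"

# `podman healthcheck run` execs the container's configured test command
# and exits 0 for healthy, non-zero for unhealthy.
result = subprocess.run(["podman", "healthcheck", "run", CONTAINER],
                        capture_output=True, text=True)
if result.returncode == 0:
    print("healthy")
else:
    print("unhealthy:", result.stdout or result.stderr)
```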
Dec 05 10:16:02 np0005546420.localdomain podman[325832]: 2025-12-05 10:16:02.557499477 +0000 UTC m=+0.133651797 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:16:02 np0005546420.localdomain podman[325832]: 2025-12-05 10:16:02.57052268 +0000 UTC m=+0.146675070 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:16:02 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
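node_exporter is started here with most collectors disabled and its systemd collector restricted by `--collector.systemd.unit-include`; node_exporter anchors that regex, so a unit is exported only when its full name matches. A quick check of the pattern from the command line above (the unit names below are illustrative):

```python
import re

# Pattern from the log's command line; the JSON-escaped '\\.service'
# is a single backslash in the real flag, i.e. r'\.service'.
pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ["edpm_nova_compute.service",  # matches the edpm_.* branch
             "ovsdb-server.service",       # matches the ovs.* branch
             "virtqemud.service",          # matches the virt.* branch
             "sshd.service"]:              # matches no branch
    print(f"{unit:28s} ->", "export" if pattern.fullmatch(unit) else "skip")
```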
Dec 05 10:16:02 np0005546420.localdomain ceph-mon[298353]: pgmap v553: 177 pgs: 177 active+clean; 198 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 24 KiB/s wr, 116 op/s
Dec 05 10:16:02 np0005546420.localdomain ceph-mon[298353]: osdmap e214: 6 total, 6 up, 6 in
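The recurring ceph-mon pgmap/osdmap lines are the cluster heartbeat of this section: 177 placement groups all active+clean, 6 OSDs up/in, plus client I/O rates. The pgmap line has a fixed shape, so it splits with one regex if you need to graph it; a sketch against the line above:

```python
import re

line = ("pgmap v553: 177 pgs: 177 active+clean; 198 MiB data, "
        "1014 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 24 KiB/s wr, 116 op/s")

m = re.search(r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
              r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
              r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail", line)
print(m.groupdict())
# {'ver': '553', 'pgs': '177', 'states': '177 active+clean', 'data': '198 MiB', ...}
```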
Dec 05 10:16:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:02.598 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:02 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e215 e215: 6 total, 6 up, 6 in
Dec 05 10:16:03 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:16:03 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "format": "json"}]: dispatch
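The two audited mon commands above are a CephFS share being provisioned through client.openstack: create a 1 GiB, namespace-isolated subvolume (mode 0755), then resolve the path a client would mount. The CLI equivalent of the JSON arguments, sketched with subprocess (flag names as in current Ceph releases; keyring and ceph.conf assumed in place):

```python
import subprocess

SUB = "63723774-424e-4107-9f82-8c494eaae0eb"   # sub_name from the log

# Mirrors {"prefix": "fs subvolume create", "size": 1073741824,
#          "namespace_isolated": true, "mode": "0755"}.
subprocess.run(["ceph", "fs", "subvolume", "create", "cephfs", SUB,
                "--size", "1073741824", "--namespace-isolated", "--mode", "0755"],
               check=True)

# Mirrors {"prefix": "fs subvolume getpath"}; prints the data path,
# typically /volumes/_nogroup/<name>/<uuid>.
path = subprocess.run(["ceph", "fs", "subvolume", "getpath", "cephfs", SUB],
                      capture_output=True, text=True, check=True).stdout.strip()
print(path)
```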
Dec 05 10:16:03 np0005546420.localdomain ceph-mon[298353]: osdmap e215: 6 total, 6 up, 6 in
Dec 05 10:16:03 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:16:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4145883308' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4145883308' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e216 e216: 6 total, 6 up, 6 in
Dec 05 10:16:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:04.133 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:16:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:04.134 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:16:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:04.134 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
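The acquire/wait/release triplet above is neutron's ProcessMonitor taking a named oslo.concurrency lock around its child-process check; the timings ("waited 0.001s", "held 0.000s") are emitted by the `inner` wrapper in lockutils. The same pattern as a minimal sketch (assumes oslo.concurrency is installed):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named lock held; concurrent callers serialize here,
    # which is what the waited/held timings in the log measure.
    pass

check_child_processes()
```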
Dec 05 10:16:04 np0005546420.localdomain ceph-mon[298353]: pgmap v556: 177 pgs: 177 active+clean; 198 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 6.3 KiB/s wr, 108 op/s
Dec 05 10:16:04 np0005546420.localdomain ceph-mon[298353]: osdmap e216: 6 total, 6 up, 6 in
Dec 05 10:16:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1835666432' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1835666432' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:04 np0005546420.localdomain sshd[325812]: error: kex_exchange_identification: read: Connection timed out
Dec 05 10:16:04 np0005546420.localdomain sshd[325812]: banner exchange: Connection from 115.190.6.9 port 38254: Connection timed out
Dec 05 10:16:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
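`_set_new_cache_sizes` is the monitor's periodic memory autotuning, splitting its memory target between the incremental-osdmap, full-osdmap, and RocksDB caches. The logged allocations, converted to MiB (plain arithmetic on the values above):

```python
# Byte values copied from the _set_new_cache_sizes line above.
for name, nbytes in [("cache_size", 1020054731), ("inc_alloc", 343932928),
                     ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
    print(f"{name:10s} {nbytes / 2**20:7.1f} MiB")
# inc + full + kv = 964.0 MiB, fitting inside the ~972.8 MiB cache_size.
```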
Dec 05 10:16:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:16:05 np0005546420.localdomain systemd[1]: tmp-crun.Vn5BVk.mount: Deactivated successfully.
Dec 05 10:16:05 np0005546420.localdomain podman[325873]: 2025-12-05 10:16:05.497638004 +0000 UTC m=+0.075886923 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:16:05 np0005546420.localdomain podman[325873]: 2025-12-05 10:16:05.538514396 +0000 UTC m=+0.116763375 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:16:05 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:16:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e217 e217: 6 total, 6 up, 6 in
Dec 05 10:16:06 np0005546420.localdomain ceph-mon[298353]: pgmap v558: 177 pgs: 177 active+clean; 198 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 6.3 KiB/s wr, 109 op/s
Dec 05 10:16:06 np0005546420.localdomain ceph-mon[298353]: osdmap e217: 6 total, 6 up, 6 in
Dec 05 10:16:07 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e218 e218: 6 total, 6 up, 6 in
Dec 05 10:16:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:07.601 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:07.603 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:07.603 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:16:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:07.603 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:07.632 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:07.632 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
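These four DEBUG lines are one round of the OVS reconnect state machine that ovsdbapp drives for nova-compute's session to tcp:127.0.0.1:6640: after roughly 5 s of silence it sends an inactivity probe and drops to IDLE, and the reply (the POLLIN on fd 26) returns it to ACTIVE; only an unanswered probe would force a reconnect. A simplified model of that logic (not the real ovs.reconnect API):

```python
import time

PROBE_INTERVAL = 5.0  # seconds; matches the ~5000 ms timeouts in the log

class Session:
    def __init__(self):
        self.state = "ACTIVE"
        self.last_activity = time.monotonic()

    def tick(self, got_reply: bool):
        idle = time.monotonic() - self.last_activity
        if self.state == "ACTIVE" and idle >= PROBE_INTERVAL:
            self.state = "IDLE"            # "sending inactivity probe"
        elif self.state == "IDLE":
            if got_reply:                  # POLLIN: the peer answered
                self.state = "ACTIVE"
                self.last_activity = time.monotonic()
            elif idle >= 2 * PROBE_INTERVAL:
                self.state = "RECONNECT"   # probe unanswered: redial
```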
Dec 05 10:16:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "snap_name": "aa63b329-df82-4a19-aa39-a051e214eb1e", "format": "json"}]: dispatch
Dec 05 10:16:07 np0005546420.localdomain ceph-mon[298353]: osdmap e218: 6 total, 6 up, 6 in
Dec 05 10:16:08 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e219 e219: 6 total, 6 up, 6 in
Dec 05 10:16:08 np0005546420.localdomain ceph-mon[298353]: pgmap v561: 177 pgs: 177 active+clean; 198 MiB data, 1018 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 32 KiB/s wr, 107 op/s
Dec 05 10:16:09 np0005546420.localdomain ceph-mon[298353]: osdmap e219: 6 total, 6 up, 6 in
Dec 05 10:16:09 np0005546420.localdomain ceph-mon[298353]: pgmap v563: 177 pgs: 177 active+clean; 198 MiB data, 1018 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 27 KiB/s wr, 89 op/s
Dec 05 10:16:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e220 e220: 6 total, 6 up, 6 in
Dec 05 10:16:10 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "snap_name": "aa63b329-df82-4a19-aa39-a051e214eb1e", "target_sub_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "format": "json"}]: dispatch
Dec 05 10:16:10 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "format": "json"}]: dispatch
Dec 05 10:16:10 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 05 10:16:11 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e221 e221: 6 total, 6 up, 6 in
Dec 05 10:16:11 np0005546420.localdomain ceph-mon[298353]: osdmap e220: 6 total, 6 up, 6 in
Dec 05 10:16:11 np0005546420.localdomain ceph-mon[298353]: pgmap v565: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 103 KiB/s rd, 53 KiB/s wr, 143 op/s
Dec 05 10:16:11 np0005546420.localdomain ceph-mon[298353]: osdmap e221: 6 total, 6 up, 6 in
Dec 05 10:16:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea920704-3133-4d40-a979-346396e08bfd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:16:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea920704-3133-4d40-a979-346396e08bfd", "format": "json"}]: dispatch
Dec 05 10:16:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:16:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:12.633 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:12.635 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:12.636 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:16:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:12.636 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:12 np0005546420.localdomain sudo[325892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:16:12 np0005546420.localdomain sudo[325892]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:16:12 np0005546420.localdomain sudo[325892]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:12.673 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:12.674 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:12 np0005546420.localdomain sudo[325910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:16:12 np0005546420.localdomain sudo[325910]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:16:12 np0005546420.localdomain ceph-mon[298353]: mgrmap e51: np0005546419.zhsnqq(active, since 14m), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 05 10:16:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e222 e222: 6 total, 6 up, 6 in
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.958 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:16:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:16:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
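The run of "Skip pollster ..." lines is ceilometer's compute agent finishing a polling cycle on a host with no instances: for each configured meter it asks discovery for resources and, finding none, logs the skip instead of building samples. The control flow reduced to a sketch:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("polling.manager")

def poll_and_notify(pollsters, discover):
    for name in pollsters:
        resources = discover(name)   # libvirt-backed discovery on a real agent
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            continue
        # ... otherwise collect one sample per resource and publish ...

poll_and_notify(["cpu", "memory.usage", "disk.device.usage"], lambda name: [])
```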
Dec 05 10:16:13 np0005546420.localdomain sudo[325910]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:13 np0005546420.localdomain sudo[325961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:16:13 np0005546420.localdomain sudo[325961]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:16:13 np0005546420.localdomain sudo[325961]: pam_unix(sudo:session): session closed for user root
Dec 05 10:16:13 np0005546420.localdomain ceph-mon[298353]: osdmap e222: 6 total, 6 up, 6 in
Dec 05 10:16:13 np0005546420.localdomain ceph-mon[298353]: pgmap v568: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 32 KiB/s wr, 73 op/s
Dec 05 10:16:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:16:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:16:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:16:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:16:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e223 e223: 6 total, 6 up, 6 in
Dec 05 10:16:14 np0005546420.localdomain ceph-mon[298353]: osdmap e223: 6 total, 6 up, 6 in
Dec 05 10:16:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:16:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e224 e224: 6 total, 6 up, 6 in
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2ab3d121-5033-4ae4-bc88-7df1d583d342", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2ab3d121-5033-4ae4-bc88-7df1d583d342", "format": "json"}]: dispatch
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: osdmap e224: 6 total, 6 up, 6 in
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "format": "json"}]: dispatch
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/295759450' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/295759450' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: pgmap v571: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 6.2 KiB/s rd, 511 B/s wr, 9 op/s
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:16:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e225 e225: 6 total, 6 up, 6 in
Dec 05 10:16:16 np0005546420.localdomain ceph-mon[298353]: osdmap e225: 6 total, 6 up, 6 in
Dec 05 10:16:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e226 e226: 6 total, 6 up, 6 in
Dec 05 10:16:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:16:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:16:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:16:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:16:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:16:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18262 "" "Go-http-client/1.1"
Dec 05 10:16:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:16:17 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:16:17 np0005546420.localdomain podman[325980]: 2025-12-05 10:16:17.501076798 +0000 UTC m=+0.077101511 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:16:17 np0005546420.localdomain podman[325980]: 2025-12-05 10:16:17.515312117 +0000 UTC m=+0.091336780 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:16:17 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:16:17 np0005546420.localdomain podman[325979]: 2025-12-05 10:16:17.609418502 +0000 UTC m=+0.184615910 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 05 10:16:17 np0005546420.localdomain podman[325979]: 2025-12-05 10:16:17.621728202 +0000 UTC m=+0.196925620 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 10:16:17 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:16:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:17.675 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:17.676 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:18 np0005546420.localdomain ceph-mon[298353]: osdmap e226: 6 total, 6 up, 6 in
Dec 05 10:16:18 np0005546420.localdomain ceph-mon[298353]: pgmap v574: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 100 KiB/s wr, 152 op/s
Dec 05 10:16:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e227 e227: 6 total, 6 up, 6 in
Dec 05 10:16:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:16:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:16:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:16:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:16:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:16:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:16:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:16:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:16:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:16:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
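The exporter errors above mean openstack_network_exporter could not find the appctl control sockets it queries: ovn-northd does not run on a compute node at all, and the ovsdb-server socket was not visible under the paths it mounts (/run/openvswitch and /run/ovn in its volume list earlier). A purely diagnostic check for those sockets, with the paths taken from the container's volumes:

```python
import glob

# *.ctl sockets are what appctl-style tools dial; expect ovs-vswitchd and
# ovsdb-server entries here, and no ovn-northd on a compute node.
for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
    found = glob.glob(pattern)
    print(pattern, "->", ", ".join(found) if found else "none found")
```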
Dec 05 10:16:19 np0005546420.localdomain ceph-mon[298353]: osdmap e227: 6 total, 6 up, 6 in
Dec 05 10:16:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "snap_name": "ac528f21-d490-4e97-bdf5-2a4687cb644d", "format": "json"}]: dispatch
Dec 05 10:16:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2ab3d121-5033-4ae4-bc88-7df1d583d342", "format": "json"}]: dispatch
Dec 05 10:16:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2ab3d121-5033-4ae4-bc88-7df1d583d342", "force": true, "format": "json"}]: dispatch
Dec 05 10:16:20 np0005546420.localdomain ceph-mon[298353]: pgmap v576: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 92 KiB/s rd, 89 KiB/s wr, 135 op/s
Dec 05 10:16:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3970866277' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3970866277' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e228 e228: 6 total, 6 up, 6 in
Dec 05 10:16:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:21.582 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:21.582 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:16:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:21.583 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:16:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:21.599 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:16:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:21.600 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:21.600 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
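The nova-compute DEBUG lines show oslo.service walking ComputeManager's periodic tasks: the info-cache healer finds no instances to refresh, and _reclaim_queued_deletes returns immediately because reclaim_instance_interval is at its disabled default. The early-exit guard, reduced to a sketch of the pattern (not nova's actual code):

```python
reclaim_instance_interval = 0   # default; deferred-delete reclaim disabled

def reclaim_queued_deletes():
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # ... otherwise purge soft-deleted instances older than the interval ...

reclaim_queued_deletes()
```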
Dec 05 10:16:22 np0005546420.localdomain ceph-mon[298353]: osdmap e228: 6 total, 6 up, 6 in
Dec 05 10:16:22 np0005546420.localdomain ceph-mon[298353]: pgmap v578: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 117 KiB/s rd, 107 KiB/s wr, 172 op/s
Dec 05 10:16:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4205473615' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4205473615' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:22 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:16:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:22.678 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:22.680 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:22.680 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:16:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:22.680 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:22.714 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:22.715 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e229 e229: 6 total, 6 up, 6 in
Dec 05 10:16:23 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "89b8d3a9-86b3-485b-aa83-b34d33e5ba52", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:16:23 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "89b8d3a9-86b3-485b-aa83-b34d33e5ba52", "format": "json"}]: dispatch
Dec 05 10:16:23 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "snap_name": "ac528f21-d490-4e97-bdf5-2a4687cb644d_366ab011-0a32-4ef6-b163-23098f6262cd", "force": true, "format": "json"}]: dispatch
Dec 05 10:16:23 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "snap_name": "ac528f21-d490-4e97-bdf5-2a4687cb644d", "force": true, "format": "json"}]: dispatch
Dec 05 10:16:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2946632018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:16:23 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:16:23 np0005546420.localdomain podman[326022]: 2025-12-05 10:16:23.49278763 +0000 UTC m=+0.067339170 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 05 10:16:23 np0005546420.localdomain podman[326022]: 2025-12-05 10:16:23.529852094 +0000 UTC m=+0.104403634 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:16:23 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:16:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:23.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:23.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:24 np0005546420.localdomain ceph-mon[298353]: osdmap e229: 6 total, 6 up, 6 in
Dec 05 10:16:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3598026697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3598026697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:24 np0005546420.localdomain ceph-mon[298353]: pgmap v580: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 33 KiB/s wr, 95 op/s
Dec 05 10:16:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2970110141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:16:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e230 e230: 6 total, 6 up, 6 in
Dec 05 10:16:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:16:25 np0005546420.localdomain podman[326047]: 2025-12-05 10:16:25.505491817 +0000 UTC m=+0.080782715 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2)
Dec 05 10:16:25 np0005546420.localdomain podman[326047]: 2025-12-05 10:16:25.517177827 +0000 UTC m=+0.092468775 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:16:25 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:16:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e231 e231: 6 total, 6 up, 6 in
Dec 05 10:16:26 np0005546420.localdomain ceph-mon[298353]: osdmap e230: 6 total, 6 up, 6 in
Dec 05 10:16:26 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "format": "json"}]: dispatch
Dec 05 10:16:26 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "force": true, "format": "json"}]: dispatch
Dec 05 10:16:26 np0005546420.localdomain ceph-mon[298353]: pgmap v582: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 33 KiB/s wr, 95 op/s
Dec 05 10:16:26 np0005546420.localdomain ceph-mon[298353]: osdmap e231: 6 total, 6 up, 6 in
Dec 05 10:16:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:26.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:27 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e232 e232: 6 total, 6 up, 6 in
Dec 05 10:16:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "89b8d3a9-86b3-485b-aa83-b34d33e5ba52", "format": "json"}]: dispatch
Dec 05 10:16:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "89b8d3a9-86b3-485b-aa83-b34d33e5ba52", "force": true, "format": "json"}]: dispatch
Dec 05 10:16:27 np0005546420.localdomain ceph-mon[298353]: osdmap e232: 6 total, 6 up, 6 in
Dec 05 10:16:27 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:16:27 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2656900176' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:27 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:16:27 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2656900176' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:27.715 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:27.717 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:28 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2656900176' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:28 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2656900176' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:28 np0005546420.localdomain ceph-mon[298353]: pgmap v585: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 96 KiB/s wr, 91 op/s
Dec 05 10:16:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:28.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:16:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:29.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:29.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:29.995 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:16:29 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:29.997 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:16:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:29.999 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "484de3bf-67b8-4f95-9d4e-1049ce65d043", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:16:30 np0005546420.localdomain ceph-mon[298353]: pgmap v586: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 69 KiB/s wr, 66 op/s
Dec 05 10:16:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "484de3bf-67b8-4f95-9d4e-1049ce65d043", "format": "json"}]: dispatch
Dec 05 10:16:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e233 e233: 6 total, 6 up, 6 in
Dec 05 10:16:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:16:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3712351438' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:16:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3712351438' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:32 np0005546420.localdomain ceph-mon[298353]: osdmap e233: 6 total, 6 up, 6 in
Dec 05 10:16:32 np0005546420.localdomain ceph-mon[298353]: pgmap v588: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 92 KiB/s wr, 97 op/s
Dec 05 10:16:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3712351438' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:32.769 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:33 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:32.999 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:16:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3712351438' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2226220254' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:16:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:16:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:16:33 np0005546420.localdomain podman[326067]: 2025-12-05 10:16:33.503541118 +0000 UTC m=+0.074702778 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:16:33 np0005546420.localdomain podman[326067]: 2025-12-05 10:16:33.513071512 +0000 UTC m=+0.084233152 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:16:33 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:16:33 np0005546420.localdomain podman[326068]: 2025-12-05 10:16:33.568116461 +0000 UTC m=+0.134599576 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:16:33 np0005546420.localdomain podman[326068]: 2025-12-05 10:16:33.59851497 +0000 UTC m=+0.164998045 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:16:33 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:16:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:33.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:16:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:33.899 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:16:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:33.900 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:16:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:33.900 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:16:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:33.901 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:16:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:33.901 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:16:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "484de3bf-67b8-4f95-9d4e-1049ce65d043", "format": "json"}]: dispatch
Dec 05 10:16:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "484de3bf-67b8-4f95-9d4e-1049ce65d043", "force": true, "format": "json"}]: dispatch
Dec 05 10:16:34 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1890770352' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:16:34 np0005546420.localdomain ceph-mon[298353]: pgmap v589: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 75 KiB/s wr, 80 op/s
Dec 05 10:16:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:16:34 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1820385550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:16:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:34.369 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:16:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:34.605 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:16:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:34.608 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11527MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:16:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:34.608 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:16:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:34.609 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:16:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:34.818 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:16:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:34.819 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:16:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:34.839 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:16:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1820385550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:16:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:16:35 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3021702408' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:16:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:35.320 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:16:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:35.327 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:16:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:35.345 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:16:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:35.347 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:16:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:35.347 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:16:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e234 e234: 6 total, 6 up, 6 in
Dec 05 10:16:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3021702408' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:16:36 np0005546420.localdomain ceph-mon[298353]: pgmap v590: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 16 KiB/s wr, 23 op/s
Dec 05 10:16:36 np0005546420.localdomain ceph-mon[298353]: osdmap e234: 6 total, 6 up, 6 in
Dec 05 10:16:36 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:16:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:16:36 np0005546420.localdomain podman[326152]: 2025-12-05 10:16:36.512267922 +0000 UTC m=+0.084941854 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:16:36 np0005546420.localdomain podman[326152]: 2025-12-05 10:16:36.528313157 +0000 UTC m=+0.100987069 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:16:36 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:16:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:16:36 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2565126787' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:16:36 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2565126787' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:37 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "33e10816-9bfc-48f9-9aa2-4d50e3101227", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:16:37 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "33e10816-9bfc-48f9-9aa2-4d50e3101227", "format": "json"}]: dispatch
Dec 05 10:16:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2565126787' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2565126787' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:37.773 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:37.774 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:37.775 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:16:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:37.775 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:37.793 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:37.794 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:38 np0005546420.localdomain ceph-mon[298353]: pgmap v592: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 55 KiB/s wr, 53 op/s
Dec 05 10:16:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:40 np0005546420.localdomain ceph-mon[298353]: pgmap v593: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 52 KiB/s wr, 51 op/s
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.060054) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801060141, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2337, "num_deletes": 272, "total_data_size": 3563497, "memory_usage": 3616976, "flush_reason": "Manual Compaction"}
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801076255, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2323914, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31036, "largest_seqno": 33368, "table_properties": {"data_size": 2314637, "index_size": 5654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21601, "raw_average_key_size": 21, "raw_value_size": 2295154, "raw_average_value_size": 2297, "num_data_blocks": 244, "num_entries": 999, "num_filter_entries": 999, "num_deletions": 272, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929679, "oldest_key_time": 1764929679, "file_creation_time": 1764929801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 16248 microseconds, and 7061 cpu microseconds.
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.076312) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2323914 bytes OK
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.076340) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.078205) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.078226) EVENT_LOG_v1 {"time_micros": 1764929801078220, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.078252) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3552542, prev total WAL file size 3552866, number of live WAL files 2.
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.079285) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323635' seq:72057594037927935, type:22 .. '6C6F676D0034353138' seq:0, type:0; will stop at (end)
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2269KB)], [54(15MB)]
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801079328, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 19082585, "oldest_snapshot_seqno": -1}
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 13501 keys, 18752301 bytes, temperature: kUnknown
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801198342, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 18752301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18673580, "index_size": 43868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33797, "raw_key_size": 361166, "raw_average_key_size": 26, "raw_value_size": 18442259, "raw_average_value_size": 1365, "num_data_blocks": 1655, "num_entries": 13501, "num_filter_entries": 13501, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.198736) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 18752301 bytes
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.200281) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.1 rd, 157.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 16.0 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(16.3) write-amplify(8.1) OK, records in: 14062, records dropped: 561 output_compression: NoCompression
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.200308) EVENT_LOG_v1 {"time_micros": 1764929801200296, "job": 32, "event": "compaction_finished", "compaction_time_micros": 119170, "compaction_time_cpu_micros": 49127, "output_level": 6, "num_output_files": 1, "total_output_size": 18752301, "num_input_records": 14062, "num_output_records": 13501, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801200999, "job": 32, "event": "table_file_deletion", "file_number": 56}
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801203324, "job": 32, "event": "table_file_deletion", "file_number": 54}
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.079208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.203416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.203422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.203425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.203428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:16:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:16:41.203437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:16:42 np0005546420.localdomain ceph-mon[298353]: pgmap v594: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 33 KiB/s wr, 37 op/s
Dec 05 10:16:42 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "33e10816-9bfc-48f9-9aa2-4d50e3101227", "format": "json"}]: dispatch
Dec 05 10:16:42 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "33e10816-9bfc-48f9-9aa2-4d50e3101227", "force": true, "format": "json"}]: dispatch
Dec 05 10:16:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:42.794 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:42.796 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:42.797 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:16:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:42.797 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:42.832 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:42.834 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:44 np0005546420.localdomain ceph-mon[298353]: pgmap v595: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 32 KiB/s wr, 36 op/s
Dec 05 10:16:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:46 np0005546420.localdomain ceph-mgr[286940]: client.0 ms_handle_reset on v2:172.18.0.106:6810/1193881100
Dec 05 10:16:46 np0005546420.localdomain ceph-mon[298353]: pgmap v596: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 32 KiB/s wr, 36 op/s
Dec 05 10:16:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "01a37854-0dc7-41dc-a61a-5647fabb505d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:16:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:16:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:16:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:16:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:16:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:16:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:16:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18268 "" "Go-http-client/1.1"
Dec 05 10:16:47 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e235 e235: 6 total, 6 up, 6 in
Dec 05 10:16:47 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "01a37854-0dc7-41dc-a61a-5647fabb505d", "format": "json"}]: dispatch
Dec 05 10:16:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:47.835 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:47.837 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:16:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:47.837 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:16:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:47.837 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:47.876 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:47.877 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:16:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:16:48 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:16:48 np0005546420.localdomain podman[326171]: 2025-12-05 10:16:48.50094932 +0000 UTC m=+0.078038660 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:16:48 np0005546420.localdomain podman[326171]: 2025-12-05 10:16:48.51746186 +0000 UTC m=+0.094551070 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec 05 10:16:48 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:16:48 np0005546420.localdomain ceph-mon[298353]: pgmap v597: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 53 KiB/s wr, 35 op/s
Dec 05 10:16:48 np0005546420.localdomain ceph-mon[298353]: osdmap e235: 6 total, 6 up, 6 in
Dec 05 10:16:48 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:16:48 np0005546420.localdomain podman[326172]: 2025-12-05 10:16:48.61819079 +0000 UTC m=+0.190590665 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:16:48 np0005546420.localdomain podman[326172]: 2025-12-05 10:16:48.632379158 +0000 UTC m=+0.204779043 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:16:48 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:16:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:16:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:16:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:16:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:16:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:16:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:16:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:16:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:16:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:16:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
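The exporter locates each daemon by looking for its control socket ("<daemon>.<pid>.ctl") under the run directories mounted into the container; ovn-northd does not run on this node and ovsdb-server's socket is not where the container expects it, so the appctl calls fail before they start. A minimal sketch of that discovery step, with the paths and naming convention taken as assumptions from stock OVS/OVN packaging:

# Sketch: find a daemon's control socket the way appctl-style tools do.
# The run directories and the "<daemon>.<pid>.ctl" naming are assumptions.
import glob
import os

def find_ctl_socket(rundir, daemon):
    matches = glob.glob(os.path.join(rundir, "%s.*.ctl" % daemon))
    return matches[0] if matches else None

for rundir, daemon in [("/run/openvswitch", "ovsdb-server"),
                       ("/run/ovn", "ovn-northd")]:
    sock = find_ctl_socket(rundir, daemon)
    print(daemon, "->", sock or "no control socket files found")
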
Dec 05 10:16:49 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e236 e236: 6 total, 6 up, 6 in
Dec 05 10:16:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:16:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "format": "json"}]: dispatch
Dec 05 10:16:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:16:50.488 262769 INFO neutron.agent.linux.ip_lib [None req-9d4942b4-05c3-4795-9283-fbfca8ca975e - - - - - -] Device tap1f8ec0e0-cc cannot be used as it has no MAC address
Dec 05 10:16:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:50.516 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:50 np0005546420.localdomain kernel: device tap1f8ec0e0-cc entered promiscuous mode
Dec 05 10:16:50 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929810.5263] manager: (tap1f8ec0e0-cc): new Generic device (/org/freedesktop/NetworkManager/Devices/61)
Dec 05 10:16:50 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:16:50Z|00354|binding|INFO|Claiming lport 1f8ec0e0-ccc8-49be-bea2-9b491416bc1f for this chassis.
Dec 05 10:16:50 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:16:50Z|00355|binding|INFO|1f8ec0e0-ccc8-49be-bea2-9b491416bc1f: Claiming unknown
Dec 05 10:16:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:50.530 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:50 np0005546420.localdomain systemd-udevd[326224]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:16:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:50.541 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3554a89b305c449f9fd292eca5647512', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=135cedc8-bceb-4f2f-8778-26f5bc6f81d3, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=1f8ec0e0-ccc8-49be-bea2-9b491416bc1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:16:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:50.544 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 1f8ec0e0-ccc8-49be-bea2-9b491416bc1f in datapath 05dd4ee6-5f37-4402-88a5-db28b0b4198e bound to our chassis
Dec 05 10:16:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:50.546 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port f6b4fd13-f8dc-480f-9f29-ae687770e358 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:16:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:50.546 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05dd4ee6-5f37-4402-88a5-db28b0b4198e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:16:50 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:16:50.548 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4fb4c6-d11d-4aa2-a2ca-c2b7e3d3e5d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
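The Port_Binding row in the UPDATE event above carries everything the metadata agent acts on in its external_ids (neutron:cidrs, neutron:device_owner, the project and network IDs). Because the row is logged as a Python-literal repr, those keys can be recovered directly from a journal line; a small, purely hypothetical log-mining helper:

# Sketch: recover the neutron:* metadata from a Port_Binding row logged as
# a Python-literal repr (as in the ovn_metadata_agent DEBUG line above).
import ast
import re

def port_binding_external_ids(journal_line):
    m = re.search(r"external_ids=(\{.*?\})", journal_line)
    return ast.literal_eval(m.group(1)) if m else {}

# Feeding it the logged row yields e.g.
# {'neutron:cidrs': '10.100.0.3/28', 'neutron:device_owner': 'network:dhcp', ...}
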
Dec 05 10:16:50 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap1f8ec0e0-cc: No such device
Dec 05 10:16:50 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:16:50Z|00356|binding|INFO|Setting lport 1f8ec0e0-ccc8-49be-bea2-9b491416bc1f ovn-installed in OVS
Dec 05 10:16:50 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:16:50Z|00357|binding|INFO|Setting lport 1f8ec0e0-ccc8-49be-bea2-9b491416bc1f up in Southbound
Dec 05 10:16:50 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap1f8ec0e0-cc: No such device
Dec 05 10:16:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:50.565 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:50 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap1f8ec0e0-cc: No such device
Dec 05 10:16:50 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap1f8ec0e0-cc: No such device
Dec 05 10:16:50 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap1f8ec0e0-cc: No such device
Dec 05 10:16:50 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap1f8ec0e0-cc: No such device
Dec 05 10:16:50 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap1f8ec0e0-cc: No such device
Dec 05 10:16:50 np0005546420.localdomain ceph-mon[298353]: pgmap v599: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 9.9 KiB/s rd, 30 KiB/s wr, 16 op/s
Dec 05 10:16:50 np0005546420.localdomain ceph-mon[298353]: osdmap e236: 6 total, 6 up, 6 in
Dec 05 10:16:50 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap1f8ec0e0-cc: No such device
Dec 05 10:16:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:50.615 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:50.656 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:16:50 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4059684747' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:16:50 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4059684747' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:51 np0005546420.localdomain podman[326295]: 2025-12-05 10:16:51.601183429 +0000 UTC m=+0.094420116 container create a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:16:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "01a37854-0dc7-41dc-a61a-5647fabb505d", "format": "json"}]: dispatch
Dec 05 10:16:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "01a37854-0dc7-41dc-a61a-5647fabb505d", "force": true, "format": "json"}]: dispatch
Dec 05 10:16:51 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4059684747' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:16:51 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4059684747' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:16:51 np0005546420.localdomain podman[326295]: 2025-12-05 10:16:51.552552138 +0000 UTC m=+0.045788865 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:16:51 np0005546420.localdomain systemd[1]: Started libpod-conmon-a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab.scope.
Dec 05 10:16:51 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:16:51 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/795f056e8a7fd525c76b3cbb9a5b90915e4e9fffd72d3681c376119e9fd0abae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:16:51 np0005546420.localdomain podman[326295]: 2025-12-05 10:16:51.68704918 +0000 UTC m=+0.180285857 container init a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:16:51 np0005546420.localdomain podman[326295]: 2025-12-05 10:16:51.696160402 +0000 UTC m=+0.189397089 container start a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:16:51 np0005546420.localdomain dnsmasq[326313]: started, version 2.85 cachesize 150
Dec 05 10:16:51 np0005546420.localdomain dnsmasq[326313]: DNS service limited to local subnets
Dec 05 10:16:51 np0005546420.localdomain dnsmasq[326313]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:16:51 np0005546420.localdomain dnsmasq[326313]: warning: no upstream servers configured
Dec 05 10:16:51 np0005546420.localdomain dnsmasq-dhcp[326313]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:16:51 np0005546420.localdomain dnsmasq[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/addn_hosts - 0 addresses
Dec 05 10:16:51 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/host
Dec 05 10:16:51 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/opts
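With "static leases only", dnsmasq will answer DHCP solely for entries in the per-network files the DHCP agent renders (addn_hosts, host, opts under /var/lib/neutron/dhcp/<network-id>/). Assuming the common "mac,hostname,ip" hostsfile layout, a small reader looks like:

# Sketch: read a Neutron-rendered dnsmasq --dhcp-hostsfile ("host" file).
# Assumes the common "mac,hostname,ip" layout; real entries may carry extra
# fields (tags, lease times), which this ignores.
import csv

def read_static_leases(path):
    leases = {}
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if len(row) >= 3:
                leases[row[0].lower()] = (row[1], row[2])
    return leases

# read_static_leases(
#     "/var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/host")
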
Dec 05 10:16:51 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:16:51.769 262769 INFO neutron.agent.dhcp.agent [None req-0f7f7829-e61d-40f3-b680-ff010c353633 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:16:50Z, description=, device_id=e20e0056-676e-48bf-ab69-b052c316c7c0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e98ca0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e988e0>], id=2e4daf26-2dba-49e7-a95e-84a8f3eec46f, ip_allocation=immediate, mac_address=fa:16:3e:9f:7a:f1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:16:48Z, description=, dns_domain=, id=05dd4ee6-5f37-4402-88a5-db28b0b4198e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1196921280-network, port_security_enabled=True, project_id=3554a89b305c449f9fd292eca5647512, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53061, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3361, status=ACTIVE, subnets=['09f606e0-a44e-4222-ae5e-ac65e4413829'], tags=[], tenant_id=3554a89b305c449f9fd292eca5647512, updated_at=2025-12-05T10:16:48Z, vlan_transparent=None, network_id=05dd4ee6-5f37-4402-88a5-db28b0b4198e, port_security_enabled=False, project_id=3554a89b305c449f9fd292eca5647512, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3369, status=DOWN, tags=[], tenant_id=3554a89b305c449f9fd292eca5647512, updated_at=2025-12-05T10:16:50Z on network 05dd4ee6-5f37-4402-88a5-db28b0b4198e
Dec 05 10:16:51 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:16:51.911 262769 INFO neutron.agent.dhcp.agent [None req-27aa66fc-6698-4c7c-b0f5-b7675498d9ed - - - - - -] DHCP configuration for ports {'1ea8845d-38d2-4efb-91a5-56a9b0cf4fb7'} is completed
Dec 05 10:16:52 np0005546420.localdomain dnsmasq[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/addn_hosts - 1 addresses
Dec 05 10:16:52 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/host
Dec 05 10:16:52 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/opts
Dec 05 10:16:52 np0005546420.localdomain podman[326330]: 2025-12-05 10:16:52.01906611 +0000 UTC m=+0.064230944 container kill a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:16:52 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:16:52.190 262769 INFO neutron.agent.dhcp.agent [None req-d3237240-4d47-497a-875a-69ead3437eb5 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:16:50Z, description=, device_id=e20e0056-676e-48bf-ab69-b052c316c7c0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e32820>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d9a8c3f40>], id=2e4daf26-2dba-49e7-a95e-84a8f3eec46f, ip_allocation=immediate, mac_address=fa:16:3e:9f:7a:f1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:16:48Z, description=, dns_domain=, id=05dd4ee6-5f37-4402-88a5-db28b0b4198e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1196921280-network, port_security_enabled=True, project_id=3554a89b305c449f9fd292eca5647512, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53061, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3361, status=ACTIVE, subnets=['09f606e0-a44e-4222-ae5e-ac65e4413829'], tags=[], tenant_id=3554a89b305c449f9fd292eca5647512, updated_at=2025-12-05T10:16:48Z, vlan_transparent=None, network_id=05dd4ee6-5f37-4402-88a5-db28b0b4198e, port_security_enabled=False, project_id=3554a89b305c449f9fd292eca5647512, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3369, status=DOWN, tags=[], tenant_id=3554a89b305c449f9fd292eca5647512, updated_at=2025-12-05T10:16:50Z on network 05dd4ee6-5f37-4402-88a5-db28b0b4198e
Dec 05 10:16:52 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:16:52.257 262769 INFO neutron.agent.dhcp.agent [None req-599785b6-0d00-4dc8-a119-671548a16839 - - - - - -] DHCP configuration for ports {'2e4daf26-2dba-49e7-a95e-84a8f3eec46f'} is completed
Dec 05 10:16:52 np0005546420.localdomain dnsmasq[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/addn_hosts - 1 addresses
Dec 05 10:16:52 np0005546420.localdomain podman[326366]: 2025-12-05 10:16:52.399548756 +0000 UTC m=+0.058813636 container kill a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:16:52 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/host
Dec 05 10:16:52 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/opts
Dec 05 10:16:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:16:52Z|00358|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0
Dec 05 10:16:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:16:52Z|00359|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0
Dec 05 10:16:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:16:52Z|00360|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0
Dec 05 10:16:52 np0005546420.localdomain ceph-mon[298353]: pgmap v601: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 60 KiB/s wr, 49 op/s
Dec 05 10:16:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:16:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:52.640 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:52.645 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:52.653 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:52.666 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:52.679 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:52 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:16:52.700 262769 INFO neutron.agent.dhcp.agent [None req-2c9503ab-c6ad-4eb1-8033-9483dad3e9d1 - - - - - -] DHCP configuration for ports {'2e4daf26-2dba-49e7-a95e-84a8f3eec46f'} is completed
Dec 05 10:16:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:52.876 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:52.878 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:16:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "format": "json"}]: dispatch
Dec 05 10:16:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:53.655 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:54.351 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:54.370 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:54 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:16:54 np0005546420.localdomain podman[326390]: 2025-12-05 10:16:54.519545344 +0000 UTC m=+0.090275119 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:16:54 np0005546420.localdomain podman[326390]: 2025-12-05 10:16:54.565599085 +0000 UTC m=+0.136328840 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:16:54 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
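Each health_status/exec_died pair in this log follows the same shape: systemd starts a transient <container-id>.service, it runs the container's configured healthcheck once, podman emits the two events, and the unit deactivates. The check itself can be reproduced by hand; a sketch wrapping the podman CLI, with the container name taken from the log:

# Sketch: run a container's configured healthcheck once, as the transient
# units in this log do; exit code 0 means healthy.
import subprocess

def healthcheck(container):
    proc = subprocess.run(["podman", "healthcheck", "run", container],
                          capture_output=True, text=True)
    return proc.returncode == 0

print("healthy" if healthcheck("ovn_controller") else "unhealthy")
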
Dec 05 10:16:54 np0005546420.localdomain ceph-mon[298353]: pgmap v602: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 60 KiB/s wr, 50 op/s
Dec 05 10:16:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:16:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:16:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/5e4c632f-fa60-41da-a9f7-9e777915367e", "osd", "allow rw pool=manila_data namespace=fsvolumens_e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:16:55 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/5e4c632f-fa60-41da-a9f7-9e777915367e", "osd", "allow rw pool=manila_data namespace=fsvolumens_e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:16:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e237 e237: 6 total, 6 up, 6 in
Dec 05 10:16:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:16:56 np0005546420.localdomain podman[326418]: 2025-12-05 10:16:56.516585916 +0000 UTC m=+0.091796095 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 05 10:16:56 np0005546420.localdomain podman[326418]: 2025-12-05 10:16:56.531546187 +0000 UTC m=+0.106756316 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:16:56 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:16:56 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:16:56 np0005546420.localdomain ceph-mon[298353]: pgmap v603: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 25 KiB/s wr, 45 op/s
Dec 05 10:16:56 np0005546420.localdomain ceph-mon[298353]: osdmap e237: 6 total, 6 up, 6 in
Dec 05 10:16:57 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea920704-3133-4d40-a979-346396e08bfd", "format": "json"}]: dispatch
Dec 05 10:16:57 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea920704-3133-4d40-a979-346396e08bfd", "force": true, "format": "json"}]: dispatch
Dec 05 10:16:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:16:57.900 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:16:58 np0005546420.localdomain ceph-mon[298353]: pgmap v605: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 78 KiB/s wr, 60 op/s
Dec 05 10:16:58 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:16:58 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=tempest-cephx-id-2094750145,client_metadata.root=/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/5e4c632f-fa60-41da-a9f7-9e777915367e],prefix=session evict} (starting...)
Dec 05 10:16:59 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:16:59 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 05 10:16:59 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 05 10:16:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:16:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "format": "json"}]: dispatch
Dec 05 10:16:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "force": true, "format": "json"}]: dispatch
Dec 05 10:16:59 np0005546420.localdomain ceph-mon[298353]: pgmap v606: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 63 KiB/s wr, 49 op/s
Dec 05 10:17:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:17:00 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1372018845' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:17:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:17:00 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1372018845' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:17:00 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1372018845' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:17:00 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1372018845' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:17:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:17:00 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3571735908' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:01 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3571735908' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:01 np0005546420.localdomain ceph-mon[298353]: pgmap v607: 177 pgs: 177 active+clean; 239 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 74 op/s
Dec 05 10:17:01 np0005546420.localdomain sshd[326437]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e238 e238: 6 total, 6 up, 6 in
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "format": "json"}]: dispatch
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: osdmap e238: 6 total, 6 up, 6 in
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.891931) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822892052, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 636, "num_deletes": 252, "total_data_size": 522701, "memory_usage": 534496, "flush_reason": "Manual Compaction"}
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822897996, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 340806, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33373, "largest_seqno": 34004, "table_properties": {"data_size": 337730, "index_size": 995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8262, "raw_average_key_size": 20, "raw_value_size": 331153, "raw_average_value_size": 821, "num_data_blocks": 43, "num_entries": 403, "num_filter_entries": 403, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929801, "oldest_key_time": 1764929801, "file_creation_time": 1764929822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 6067 microseconds, and 2220 cpu microseconds.
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.898044) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 340806 bytes OK
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.898066) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.899672) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.899688) EVENT_LOG_v1 {"time_micros": 1764929822899682, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.899706) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 519041, prev total WAL file size 519041, number of live WAL files 2.
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.900134) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end)
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(332KB)], [57(17MB)]
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822900168, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 19093107, "oldest_snapshot_seqno": -1}
Dec 05 10:17:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:02.902 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:17:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:02.905 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:17:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:02.905 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:17:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:02.905 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:02.930 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:02.931 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
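This burst is the python-ovs reconnect state machine at work: roughly 5 s of silence on tcp:127.0.0.1:6640 drives the connection into IDLE, an inactivity probe goes out, and the server's reply (the POLLIN on fd 26) returns it to ACTIVE. Stripped of the library, the bookkeeping is a deadline derived from the last activity; a toy illustration (not the ovs.reconnect API):

# Toy model of the inactivity-probe bookkeeping behind the IDLE/ACTIVE
# transitions above (illustrative only).
import time

class ProbeTimer:
    def __init__(self, probe_interval_ms=5000):
        self.interval = probe_interval_ms / 1000.0
        self.last_activity = time.monotonic()

    def activity(self):
        # any send or receive resets the idle clock
        self.last_activity = time.monotonic()

    def should_probe(self):
        return time.monotonic() - self.last_activity >= self.interval

timer = ProbeTimer()
timer.activity()
print("probe now?", timer.should_probe())  # False until ~5 s of silence
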
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3449754904' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3449754904' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 13380 keys, 17803354 bytes, temperature: kUnknown
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822978765, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 17803354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17726643, "index_size": 42178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33477, "raw_key_size": 359285, "raw_average_key_size": 26, "raw_value_size": 17498630, "raw_average_value_size": 1307, "num_data_blocks": 1577, "num_entries": 13380, "num_filter_entries": 13380, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.979119) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 17803354 bytes
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.981486) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.6 rd, 226.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 17.9 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(108.3) write-amplify(52.2) OK, records in: 13904, records dropped: 524 output_compression: NoCompression
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.981518) EVENT_LOG_v1 {"time_micros": 1764929822981503, "job": 34, "event": "compaction_finished", "compaction_time_micros": 78690, "compaction_time_cpu_micros": 35926, "output_level": 6, "num_output_files": 1, "total_output_size": 17803354, "num_input_records": 13904, "num_output_records": 13380, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822981693, "job": 34, "event": "table_file_deletion", "file_number": 59}
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822984469, "job": 34, "event": "table_file_deletion", "file_number": 57}
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.900074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.984585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.984594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.984597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.984600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:17:02 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:17:02.984603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
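Every EVENT_LOG_v1 {...} payload above is self-contained JSON (flush_started, table_file_creation, compaction_finished, ...), so compaction statistics such as output size and duration can be mined straight from the journal. A small extractor, assuming journalctl-style line prefixes as in this log:

# Sketch: extract RocksDB EVENT_LOG_v1 JSON records from journal lines like
# the ceph-mon entries above and summarize compaction_finished events.
# Usage (the unit name is an assumption):
#   journalctl -u <ceph-mon unit> | python3 rocksdb_events.py
import json
import re
import sys

EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

for line in sys.stdin:
    m = EVENT.search(line)
    if not m:
        continue
    rec = json.loads(m.group(1))
    if rec.get("event") == "compaction_finished":
        print("job %(job)s: L%(output_level)s, %(num_output_records)s records, "
              "%(total_output_size)s bytes in %(compaction_time_micros)s us" % rec)
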
Dec 05 10:17:03 np0005546420.localdomain sshd[326437]: Received disconnect from 24.232.50.5 port 42936:11: Bye Bye [preauth]
Dec 05 10:17:03 np0005546420.localdomain sshd[326437]: Disconnected from authenticating user root 24.232.50.5 port 42936 [preauth]
Dec 05 10:17:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e239 e239: 6 total, 6 up, 6 in
Dec 05 10:17:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3449754904' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:17:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3449754904' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:17:03 np0005546420.localdomain ceph-mon[298353]: osdmap e239: 6 total, 6 up, 6 in
Dec 05 10:17:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3569595172' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:17:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3569595172' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:17:03 np0005546420.localdomain ceph-mon[298353]: pgmap v610: 177 pgs: 177 active+clean; 247 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 98 op/s
Dec 05 10:17:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:04.135 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:04.135 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:04.136 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
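The three lockutils lines above are oslo.concurrency's standard trace for a critical section: acquire with time waited, then release with time held. A stdlib-only sketch that reproduces the same timing report:

    import threading, time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name):
        # Report wait/hold times the way oslo.concurrency logs them above.
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        lock.acquire()
        acquired = time.monotonic()
        print(f'Lock "{name}" acquired :: waited {acquired - t0:.3f}s')
        try:
            yield
        finally:
            lock.release()
            print(f'Lock "{name}" "released" :: held {time.monotonic() - acquired:.3f}s')

    with timed_lock("_check_child_processes"):
        pass  # check child processes here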
Dec 05 10:17:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:17:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:17:04 np0005546420.localdomain podman[326440]: 2025-12-05 10:17:04.512257245 +0000 UTC m=+0.086622636 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 05 10:17:04 np0005546420.localdomain podman[326439]: 2025-12-05 10:17:04.562282439 +0000 UTC m=+0.137720874 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:17:04 np0005546420.localdomain podman[326439]: 2025-12-05 10:17:04.577314203 +0000 UTC m=+0.152752628 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:17:04 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:17:04 np0005546420.localdomain podman[326440]: 2025-12-05 10:17:04.593584506 +0000 UTC m=+0.167949877 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 10:17:04 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:17:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/1ae02bd3-e301-4c48-8e82-c476a0edaac3", "osd", "allow rw pool=manila_data namespace=fsvolumens_87604c98-95ac-4fbd-9129-c8a4a3776866", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:05 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/1ae02bd3-e301-4c48-8e82-c476a0edaac3", "osd", "allow rw pool=manila_data namespace=fsvolumens_87604c98-95ac-4fbd-9129-c8a4a3776866", "mon", "allow r"], "format": "json"}]': finished
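The auth get / get-or-create pair above is the mgr/volumes plugin minting a cephx identity scoped to one subvolume path and one RADOS namespace. A sketch of the same step done by hand with the ceph CLI, with the entity name and caps copied from the audit lines; treat the JSON output shape as release-dependent:

    import json, subprocess

    def authorize_subvolume_client(auth_id, path, pool, namespace):
        # Mint a cephx identity scoped to one subvolume, mirroring the
        # caps in the audit lines above. Returns the secret key.
        entity = f"client.{auth_id}"
        caps = [
            "mds", f"allow rw path={path}",
            "osd", f"allow rw pool={pool} namespace={namespace}",
            "mon", "allow r",
        ]
        out = subprocess.check_output(
            ["ceph", "auth", "get-or-create", entity, *caps, "--format", "json"])
        return json.loads(out)[0]["key"]  # output shape may vary by release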
Dec 05 10:17:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:17:06 np0005546420.localdomain ceph-mon[298353]: pgmap v611: 177 pgs: 177 active+clean; 247 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 2.7 MiB/s wr, 84 op/s
Dec 05 10:17:07 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2175213793' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:17:07 np0005546420.localdomain systemd[1]: tmp-crun.18sUOX.mount: Deactivated successfully.
Dec 05 10:17:07 np0005546420.localdomain podman[326480]: 2025-12-05 10:17:07.515136959 +0000 UTC m=+0.095613413 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:17:07 np0005546420.localdomain podman[326480]: 2025-12-05 10:17:07.528030266 +0000 UTC m=+0.108506730 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125)
Dec 05 10:17:07 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:17:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:07.932 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:17:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:07.934 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:07.934 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:17:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:07.934 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:07.935 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:07.937 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
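The run of ovsdbapp lines above is one full pass of the OVS reconnect state machine: roughly 5 s of idle time, an inactivity probe is sent, the connection drops to IDLE, and the reply returns it to ACTIVE. A toy version of that logic:

    import time

    class ProbedConnection:
        # Toy version of the OVS reconnect FSM above: after probe_interval of
        # silence send a probe and drop to IDLE; traffic returns it to ACTIVE;
        # a second silent interval means the peer is considered dead.
        def __init__(self, probe_interval=5.0):
            self.probe_interval = probe_interval
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def on_recv(self):
            self.last_activity = time.monotonic()
            self.state = "ACTIVE"

        def tick(self, send_probe):
            if time.monotonic() - self.last_activity < self.probe_interval:
                return
            if self.state == "ACTIVE":
                send_probe()  # e.g. a JSON-RPC echo to the OVSDB server
                self.state = "IDLE"
                self.last_activity = time.monotonic()
            else:
                raise TimeoutError("no reply to inactivity probe; reconnect")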
Dec 05 10:17:08 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e240 e240: 6 total, 6 up, 6 in
Dec 05 10:17:08 np0005546420.localdomain ceph-mon[298353]: pgmap v612: 177 pgs: 177 active+clean; 294 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 5.4 MiB/s wr, 148 op/s
Dec 05 10:17:08 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=tempest-cephx-id-2094750145,client_metadata.root=/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/1ae02bd3-e301-4c48-8e82-c476a0edaac3],prefix=session evict} (starting...)
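The asok call above evicts MDS sessions matching an auth name and a client_metadata.root filter. A rough equivalent via `ceph tell`, filtering on the same client_metadata.root field; the session-ls schema varies by release, so treat the field lookups as assumptions:

    import json, subprocess

    TARGET = "mds.mds.np0005546420.eqhasr"  # daemon name from the log prefix

    def evict_subvolume_sessions(root_path):
        # Evict sessions whose client_metadata.root matches, like the
        # asok filter above. Field names are release-dependent.
        sessions = json.loads(subprocess.check_output(
            ["ceph", "tell", TARGET, "session", "ls"]))
        for s in sessions:
            if s.get("client_metadata", {}).get("root") == root_path:
                subprocess.run(
                    ["ceph", "tell", TARGET, "session", "evict", f"id={s['id']}"],
                    check=True)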
Dec 05 10:17:09 np0005546420.localdomain ceph-mon[298353]: osdmap e240: 6 total, 6 up, 6 in
Dec 05 10:17:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:09 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:09 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 05 10:17:09 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 05 10:17:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:09 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e241 e241: 6 total, 6 up, 6 in
Dec 05 10:17:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:10 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "format": "json"}]: dispatch
Dec 05 10:17:10 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "force": true, "format": "json"}]: dispatch
Dec 05 10:17:10 np0005546420.localdomain ceph-mon[298353]: pgmap v614: 177 pgs: 177 active+clean; 294 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 69 op/s
Dec 05 10:17:10 np0005546420.localdomain ceph-mon[298353]: osdmap e241: 6 total, 6 up, 6 in
Dec 05 10:17:10 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2439844479' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:17:10 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2439844479' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:17:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e242 e242: 6 total, 6 up, 6 in
Dec 05 10:17:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7b2c6949-7a2d-43e1-b721-1a9688202f80", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:17:12 np0005546420.localdomain ceph-mon[298353]: pgmap v616: 177 pgs: 177 active+clean; 294 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 5.3 MiB/s rd, 5.0 MiB/s wr, 153 op/s
Dec 05 10:17:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7b2c6949-7a2d-43e1-b721-1a9688202f80", "format": "json"}]: dispatch
Dec 05 10:17:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:17:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:12.939 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:17:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:12.941 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:17:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:12.941 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:17:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:12.941 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:12.981 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:12.982 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:13 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "format": "json"}]: dispatch
Dec 05 10:17:13 np0005546420.localdomain ceph-mon[298353]: osdmap e242: 6 total, 6 up, 6 in
Dec 05 10:17:13 np0005546420.localdomain sudo[326499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:17:13 np0005546420.localdomain sudo[326499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:17:13 np0005546420.localdomain sudo[326499]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:13 np0005546420.localdomain sudo[326517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 10:17:13 np0005546420.localdomain sudo[326517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:17:14 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e243 e243: 6 total, 6 up, 6 in
Dec 05 10:17:14 np0005546420.localdomain ceph-mon[298353]: pgmap v618: 177 pgs: 177 active+clean; 294 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 121 op/s
Dec 05 10:17:14 np0005546420.localdomain sudo[326517]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:14 np0005546420.localdomain sudo[326555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:17:14 np0005546420.localdomain sudo[326555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:17:14 np0005546420.localdomain sudo[326555]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:14 np0005546420.localdomain sudo[326573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:17:14 np0005546420.localdomain sudo[326573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
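cephadm's host checks arrive as the sudo invocations above: the orchestrator ships a content-addressed cephadm binary under /var/lib/ceph/<fsid> and runs it as root. A sketch that replays the same calls, with paths copied verbatim from the log:

    import subprocess

    FSID = "79feddb1-4bfc-557f-83b9-0d57c9f66c1b"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")

    def run_cephadm(*args, timeout=895):
        # Replay the orchestrator's host probe as root (paths from the log).
        return subprocess.run(
            ["sudo", "python3", CEPHADM, "--timeout", str(timeout), *args],
            capture_output=True, text=True, check=True).stdout

    print(run_cephadm("check-host"))
    facts = run_cephadm("gather-facts")  # JSON blob of host facts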
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: osdmap e243: 6 total, 6 up, 6 in
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/646042663' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/646042663' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/8845d7b0-73a1-433b-8965-fdf32cd8dc5d", "osd", "allow rw pool=manila_data namespace=fsvolumens_7cb70929-0aaf-4a14-a66d-91468bfe4b75", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:15 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/8845d7b0-73a1-433b-8965-fdf32cd8dc5d", "osd", "allow rw pool=manila_data namespace=fsvolumens_7cb70929-0aaf-4a14-a66d-91468bfe4b75", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:17:15 np0005546420.localdomain sudo[326573]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:15 np0005546420.localdomain sudo[326624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:17:15 np0005546420.localdomain sudo[326624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:17:15 np0005546420.localdomain sudo[326624]: pam_unix(sudo:session): session closed for user root
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e244 e244: 6 total, 6 up, 6 in
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7b2c6949-7a2d-43e1-b721-1a9688202f80", "format": "json"}]: dispatch
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7b2c6949-7a2d-43e1-b721-1a9688202f80", "force": true, "format": "json"}]: dispatch
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: pgmap v620: 177 pgs: 177 active+clean; 294 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 121 op/s
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:17:16 np0005546420.localdomain ceph-mon[298353]: osdmap e244: 6 total, 6 up, 6 in
Dec 05 10:17:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:17:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:17:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:17:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:17:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:17:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18741 "" "Go-http-client/1.1"
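The two HTTP access lines above are the podman exporter polling the libpod REST API over the podman socket. A dependency-free sketch of the same query; the socket path matches the one mounted into podman_exporter later in this log, and the field names follow the libpod containers endpoint:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over the podman unix socket, enough for read-only queries.
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))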
Dec 05 10:17:17 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:17:17.383 2 INFO neutron.agent.securitygroups_rpc [None req-2a5fe3a0-7b5a-4c57-ac4a-29ef69a28174 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Security group rule updated ['811851ce-aefb-4b50-bb3d-fd5f8bc97e90']
Dec 05 10:17:17 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:17:17.570 2 INFO neutron.agent.securitygroups_rpc [None req-7838533a-af27-4a68-8cd9-2dbd66b75065 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Security group rule updated ['811851ce-aefb-4b50-bb3d-fd5f8bc97e90']
Dec 05 10:17:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:17.982 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:17:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:17.984 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:17.985 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:17:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:17.985 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:17.986 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:17.989 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:18 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=tempest-cephx-id-2094750145,client_metadata.root=/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/8845d7b0-73a1-433b-8965-fdf32cd8dc5d],prefix=session evict} (starting...)
Dec 05 10:17:18 np0005546420.localdomain ceph-mon[298353]: mgrmap e52: np0005546419.zhsnqq(active, since 15m), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 05 10:17:18 np0005546420.localdomain ceph-mon[298353]: pgmap v622: 177 pgs: 177 active+clean; 202 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 144 KiB/s rd, 590 KiB/s wr, 90 op/s
Dec 05 10:17:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 05 10:17:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 05 10:17:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:17:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:17:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:17:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:17:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:17:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:17:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:17:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:17:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:17:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
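The exporter errors above mean its ovs-appctl-style lookups found no <daemon>.<pid>.ctl control socket in the expected rundir. The same check by hand; rundirs are deployment-specific (this node maps /var/lib/openvswitch/ovn into the exporter as its OVN rundir):

    import glob, os

    def control_sockets(daemon, rundir):
        # ovs-appctl-style lookup: <rundir>/<daemon>.<pid>.ctl
        return glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))

    for daemon, rundir in [("ovn-northd", "/var/lib/openvswitch/ovn"),
                           ("ovsdb-server", "/var/run/openvswitch")]:
        socks = control_sockets(daemon, rundir)
        print(daemon, socks or "no control socket files found")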
Dec 05 10:17:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "format": "json"}]: dispatch
Dec 05 10:17:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "force": true, "format": "json"}]: dispatch
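Dispatches like the four above trace manila's full share lifecycle against mgr/volumes: create, getpath, authorize, then deauthorize, evict, and rm on teardown. The same sequence as CLI calls, with names taken from the log; flags follow the `ceph fs subvolume` help and exact spellings are release-dependent:

    import subprocess

    def fs_subvol(*args):
        # Thin wrapper over the mgr/volumes CLI (sketch).
        return subprocess.check_output(["ceph", "fs", "subvolume", *args], text=True)

    VOL = "cephfs"
    SUB = "7cb70929-0aaf-4a14-a66d-91468bfe4b75"
    AUTH = "tempest-cephx-id-2094750145"

    fs_subvol("create", VOL, SUB, "--size", "1073741824", "--namespace-isolated")
    path = fs_subvol("getpath", VOL, SUB).strip()
    fs_subvol("authorize", VOL, SUB, AUTH, "--access_level=rw")
    # ... share is mounted and used ...
    fs_subvol("deauthorize", VOL, SUB, AUTH)
    fs_subvol("evict", VOL, SUB, AUTH)   # kick any live MDS sessions
    fs_subvol("rm", VOL, SUB, "--force")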
Dec 05 10:17:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:17:19 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:17:19 np0005546420.localdomain podman[326643]: 2025-12-05 10:17:19.522076671 +0000 UTC m=+0.091852407 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:17:19 np0005546420.localdomain podman[326643]: 2025-12-05 10:17:19.563393046 +0000 UTC m=+0.133168802 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:17:19 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:17:19 np0005546420.localdomain podman[326642]: 2025-12-05 10:17:19.569764023 +0000 UTC m=+0.142343945 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git)
Dec 05 10:17:19 np0005546420.localdomain podman[326642]: 2025-12-05 10:17:19.649244357 +0000 UTC m=+0.221824299 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 10:17:19 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:17:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:20.348 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:20.349 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:17:20 np0005546420.localdomain ceph-mon[298353]: pgmap v623: 177 pgs: 177 active+clean; 202 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 65 KiB/s wr, 73 op/s
Dec 05 10:17:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e245 e245: 6 total, 6 up, 6 in
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.230 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.231 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.249 281103 DEBUG nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.349 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.350 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.358 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.359 281103 INFO nova.compute.claims [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Claim successful on node np0005546420.localdomain
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.469 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.874 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.896 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.896 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:17:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:17:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/90478948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.923 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
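The subprocess above, `ceph df --format=json`, returns cluster-wide stats plus a per-pool list; nova uses it to size its RBD backend. A sketch that reuses the exact command line from the log and extracts one pool's max_avail (the "volumes" pool name is taken from the quota queries earlier in this log):

    import json, subprocess

    def pool_max_avail(pool="volumes"):
        # Reuse nova's exact probe and pull one pool's stats back out.
        df = json.loads(subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"]))
        for p in df["pools"]:
            if p["name"] == pool:
                return p["stats"]["max_avail"]  # bytes
        raise KeyError(pool)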
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.930 281103 DEBUG nova.compute.provider_tree [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.948 281103 DEBUG nova.scheduler.client.report [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
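The inventory dict above is what placement's capacity math runs on: usable = (total - reserved) * allocation_ratio per resource class. Worked through for the values logged:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 41, "reserved": 1, "allocation_ratio": 1.0},
    }

    for rc, inv in inventory.items():
        # Placement's usable capacity per resource class.
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g}")  # VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40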
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.971 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:21.972 281103 DEBUG nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 10:17:22 np0005546420.localdomain ceph-mon[298353]: osdmap e245: 6 total, 6 up, 6 in
Dec 05 10:17:22 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:17:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:17:22 np0005546420.localdomain ceph-mon[298353]: pgmap v625: 177 pgs: 177 active+clean; 202 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 103 KiB/s wr, 78 op/s
Dec 05 10:17:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/90478948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.247 281103 DEBUG nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.249 281103 DEBUG nova.network.neutron [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.259 281103 INFO nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.276 281103 DEBUG nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.359 281103 DEBUG nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.360 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.361 281103 INFO nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Creating image(s)
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.398 281103 DEBUG nova.storage.rbd_utils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] rbd image be3af3e0-e77e-4be9-9458-b874e91bdd42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.440 281103 DEBUG nova.storage.rbd_utils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] rbd image be3af3e0-e77e-4be9-9458-b874e91bdd42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.480 281103 DEBUG nova.storage.rbd_utils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] rbd image be3af3e0-e77e-4be9-9458-b874e91bdd42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.486 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.512 281103 DEBUG nova.policy [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.563 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
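The prlimit wrapper in the probe above acts as a guard: qemu-img parses untrusted image headers, so the child process is capped at 1 GiB of address space and 30 s of CPU before it runs. A minimal sketch that reproduces the probe and reads its JSON output:

    import json
    import subprocess

    # Re-run the qemu-img probe from the log under the same oslo prlimit
    # wrapper (address space 1 GiB, CPU 30 s) and parse the JSON reply.
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6",
        "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])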
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.564 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "803b7e0e18f6b644279a18f87a62b7eb9e1015e6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.565 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "803b7e0e18f6b644279a18f87a62b7eb9e1015e6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.565 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "803b7e0e18f6b644279a18f87a62b7eb9e1015e6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
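The acquire/release pair above (held for 0.001s, so the base image was already cached) is the usual oslo pattern for serializing image-cache fetches: one lock per base-image hash, so concurrent spawns download a given base image at most once. A sketch of that pattern, with a hypothetical fetch function standing in for fetch_func_sync:

    from oslo_concurrency import lockutils

    # Sketch of the locking pattern in the lockutils lines above; the lock
    # name is the base-image hash from the log. fetch_base_image() is a
    # hypothetical stand-in for the real fetch_func_sync.
    @lockutils.synchronized('803b7e0e18f6b644279a18f87a62b7eb9e1015e6')
    def fetch_base_image():
        # download and verify the base image; runs at most once at a time
        pass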
Dec 05 10:17:22 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:17:22Z|00361|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.605 281103 DEBUG nova.storage.rbd_utils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] rbd image be3af3e0-e77e-4be9-9458-b874e91bdd42_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.611 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6 be3af3e0-e77e-4be9-9458-b874e91bdd42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:22 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:17:22.956 2 INFO neutron.agent.securitygroups_rpc [req-883c4d7b-47a9-4234-a8cb-46707ff21e47 req-1c8366a0-660d-41e6-a45d-0d8b43d7ab6d 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Security group member updated ['811851ce-aefb-4b50-bb3d-fd5f8bc97e90']
Dec 05 10:17:22 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:17:22.980 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:17:22Z, description=, device_id=be3af3e0-e77e-4be9-9458-b874e91bdd42, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99db8160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99db8a00>], id=8ddbcf74-77c9-415e-9ff7-3416cf2f699f, ip_allocation=immediate, mac_address=fa:16:3e:2e:15:a9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:16:48Z, description=, dns_domain=, id=05dd4ee6-5f37-4402-88a5-db28b0b4198e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1196921280-network, port_security_enabled=True, project_id=3554a89b305c449f9fd292eca5647512, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53061, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3361, status=ACTIVE, subnets=['09f606e0-a44e-4222-ae5e-ac65e4413829'], tags=[], tenant_id=3554a89b305c449f9fd292eca5647512, updated_at=2025-12-05T10:16:48Z, vlan_transparent=None, network_id=05dd4ee6-5f37-4402-88a5-db28b0b4198e, port_security_enabled=True, project_id=3554a89b305c449f9fd292eca5647512, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['811851ce-aefb-4b50-bb3d-fd5f8bc97e90'], standard_attr_id=3457, status=DOWN, tags=[], tenant_id=3554a89b305c449f9fd292eca5647512, updated_at=2025-12-05T10:17:22Z on network 05dd4ee6-5f37-4402-88a5-db28b0b4198e
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.990 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.993 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.993 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:22.993 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.019 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.020 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.150 281103 DEBUG nova.network.neutron [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Successfully created port: 8ddbcf74-77c9-415e-9ff7-3416cf2f699f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 10:17:23 np0005546420.localdomain dnsmasq[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/addn_hosts - 2 addresses
Dec 05 10:17:23 np0005546420.localdomain podman[326817]: 2025-12-05 10:17:23.219124334 +0000 UTC m=+0.073206121 container kill a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:17:23 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/host
Dec 05 10:17:23 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/opts
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.222 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6 be3af3e0-e77e-4be9-9458-b874e91bdd42_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.329 281103 DEBUG nova.storage.rbd_utils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] resizing rbd image be3af3e0-e77e-4be9-9458-b874e91bdd42_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
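Taken together, the rbd import above and this resize are the Ceph-backed image path: import the cached base into the vms pool, then grow it to the flavor's 1 GiB root disk. Nova performs the resize through librbd; an assumed CLI equivalent of the same two steps:

    import subprocess

    # Assumed CLI equivalent of the import-then-resize sequence above; the
    # import command is verbatim from the log, and the resize target is the
    # 1073741824-byte (1 GiB) size it logs.
    base = "/var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6"
    disk = "be3af3e0-e77e-4be9-9458-b874e91bdd42_disk"
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", disk, "--size", "1G",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"], check=True)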
Dec 05 10:17:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2254812203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:17:23 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:17:23.477 262769 INFO neutron.agent.dhcp.agent [None req-317ce9ac-0594-4280-801f-6783bdbdafbe - - - - - -] DHCP configuration for ports {'8ddbcf74-77c9-415e-9ff7-3416cf2f699f'} is completed
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.486 281103 DEBUG nova.objects.instance [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lazy-loading 'migration_context' on Instance uuid be3af3e0-e77e-4be9-9458-b874e91bdd42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.512 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.513 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Ensure instance console log exists: /var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.513 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.514 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:23.514 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:23 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:17:23.989 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005546420.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:17:22Z, description=, device_id=be3af3e0-e77e-4be9-9458-b874e91bdd42, device_owner=compute:nova, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e93af0>], dns_domain=, dns_name=tempest-volumesbackupstest-instance-1667374142, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e903a0>], id=8ddbcf74-77c9-415e-9ff7-3416cf2f699f, ip_allocation=immediate, mac_address=fa:16:3e:2e:15:a9, name=, network_id=05dd4ee6-5f37-4402-88a5-db28b0b4198e, port_security_enabled=True, project_id=3554a89b305c449f9fd292eca5647512, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['811851ce-aefb-4b50-bb3d-fd5f8bc97e90'], standard_attr_id=3457, status=DOWN, tags=[], tenant_id=3554a89b305c449f9fd292eca5647512, updated_at=2025-12-05T10:17:23Z on network 05dd4ee6-5f37-4402-88a5-db28b0b4198e
Dec 05 10:17:24 np0005546420.localdomain dnsmasq[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/addn_hosts - 2 addresses
Dec 05 10:17:24 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/host
Dec 05 10:17:24 np0005546420.localdomain systemd[1]: tmp-crun.4EMpMF.mount: Deactivated successfully.
Dec 05 10:17:24 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/opts
Dec 05 10:17:24 np0005546420.localdomain podman[326928]: 2025-12-05 10:17:24.231100736 +0000 UTC m=+0.068383853 container kill a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.232 281103 DEBUG nova.network.neutron [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Successfully updated port: 8ddbcf74-77c9-415e-9ff7-3416cf2f699f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.262 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.263 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquired lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.263 281103 DEBUG nova.network.neutron [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.319 281103 DEBUG nova.network.neutron [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 10:17:24 np0005546420.localdomain ceph-mon[298353]: pgmap v626: 177 pgs: 177 active+clean; 202 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 93 KiB/s wr, 70 op/s
Dec 05 10:17:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2669198364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:17:24 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:17:24.527 262769 INFO neutron.agent.dhcp.agent [None req-07722c22-35f2-4e85-862d-e2796b1e5708 - - - - - -] DHCP configuration for ports {'8ddbcf74-77c9-415e-9ff7-3416cf2f699f'} is completed
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.554 281103 DEBUG nova.compute.manager [req-fc79e1f4-a27e-45bf-b8de-5164784e87d9 req-d5ef129a-6adc-4e6e-8c73-a95a4b2b45b1 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Received event network-changed-8ddbcf74-77c9-415e-9ff7-3416cf2f699f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.554 281103 DEBUG nova.compute.manager [req-fc79e1f4-a27e-45bf-b8de-5164784e87d9 req-d5ef129a-6adc-4e6e-8c73-a95a4b2b45b1 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Refreshing instance network info cache due to event network-changed-8ddbcf74-77c9-415e-9ff7-3416cf2f699f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.555 281103 DEBUG oslo_concurrency.lockutils [req-fc79e1f4-a27e-45bf-b8de-5164784e87d9 req-d5ef129a-6adc-4e6e-8c73-a95a4b2b45b1 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.582 281103 DEBUG nova.network.neutron [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Updating instance_info_cache with network_info: [{"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.600 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Releasing lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.601 281103 DEBUG nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Instance network_info: |[{"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
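The network_info blob logged above is what later feeds the guest definition. A minimal sketch of pulling the addressing out of that structure, with the JSON trimmed to just the fields read (shape copied from the log):

    import json

    # Trimmed copy of the network_info structure from the log line above;
    # only the fields read below are kept.
    network_info = json.loads('''
    [{"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f",
      "address": "fa:16:3e:2e:15:a9",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.14"}]}],
                  "meta": {"mtu": 1442}}}]
    ''')
    vif = network_info[0]
    fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]
    print(vif["address"], fixed_ip, vif["network"]["meta"]["mtu"])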
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.601 281103 DEBUG oslo_concurrency.lockutils [req-fc79e1f4-a27e-45bf-b8de-5164784e87d9 req-d5ef129a-6adc-4e6e-8c73-a95a4b2b45b1 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquired lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.602 281103 DEBUG nova.network.neutron [req-fc79e1f4-a27e-45bf-b8de-5164784e87d9 req-d5ef129a-6adc-4e6e-8c73-a95a4b2b45b1 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Refreshing network info cache for port 8ddbcf74-77c9-415e-9ff7-3416cf2f699f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.607 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Start _get_guest_xml network_info=[{"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T10:03:24Z,direct_url=<?>,disk_format='qcow2',id=3647d20f-5e09-41b2-a6f3-f320b9e4e343,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e6ca8a92050741d3a93772e6c1b0d704',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T10:03:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'guest_format': None, 'image_id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.614 281103 WARNING nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.623 281103 DEBUG nova.virt.libvirt.host [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Searching host: 'np0005546420.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.623 281103 DEBUG nova.virt.libvirt.host [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.626 281103 DEBUG nova.virt.libvirt.host [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Searching host: 'np0005546420.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.626 281103 DEBUG nova.virt.libvirt.host [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.627 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.627 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T10:03:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='445199a6-1f73-405e-82f4-8bd8c4bb34c6',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T10:03:24Z,direct_url=<?>,disk_format='qcow2',id=3647d20f-5e09-41b2-a6f3-f320b9e4e343,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e6ca8a92050741d3a93772e6c1b0d704',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T10:03:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.628 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.629 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.629 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.630 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.630 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.630 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.631 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.631 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.632 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.632 281103 DEBUG nova.virt.hardware [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
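The nova.virt.hardware lines above walk a brute-force search: with no flavor or image constraints, the limits default to 65536 sockets/cores/threads each, and the only factorization of 1 vCPU is 1:1:1. An illustrative sketch of that enumeration (not Nova's exact code):

    # Illustrative sketch (not Nova's exact code): enumerate sockets/cores/
    # threads factorizations of the vCPU count within the logged limits.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]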
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.637 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:24.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:24 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=tempest-cephx-id-2094750145,client_metadata.root=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083],prefix=session evict} (starting...)
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.074 281103 DEBUG nova.network.neutron [req-fc79e1f4-a27e-45bf-b8de-5164784e87d9 req-d5ef129a-6adc-4e6e-8c73-a95a4b2b45b1 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Updated VIF entry in instance network info cache for port 8ddbcf74-77c9-415e-9ff7-3416cf2f699f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.075 281103 DEBUG nova.network.neutron [req-fc79e1f4-a27e-45bf-b8de-5164784e87d9 req-d5ef129a-6adc-4e6e-8c73-a95a4b2b45b1 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Updating instance_info_cache with network_info: [{"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.095 281103 DEBUG oslo_concurrency.lockutils [req-fc79e1f4-a27e-45bf-b8de-5164784e87d9 req-d5ef129a-6adc-4e6e-8c73-a95a4b2b45b1 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Releasing lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:17:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:17:25 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3697056402' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.137 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
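Both "ceph mon dump" round-trips above serve the same purpose: discovering monitor addresses so the rbd-backed disks can be wired into the guest definition. A sketch of the call and the address extraction, assuming the standard mon dump JSON layout:

    import json
    import subprocess

    # Same command as the log, verbatim; the "mons"/"addr" layout of the
    # JSON reply is an assumption about the standard mon dump format.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mon_addrs = [m["addr"] for m in json.loads(out)["mons"]]
    print(mon_addrs)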
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.182 281103 DEBUG nova.storage.rbd_utils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] rbd image be3af3e0-e77e-4be9-9458-b874e91bdd42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.191 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:17:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 05 10:17:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 05 10:17:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3697056402' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:25 np0005546420.localdomain podman[327009]: 2025-12-05 10:17:25.511996309 +0000 UTC m=+0.078204156 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:17:25 np0005546420.localdomain podman[327009]: 2025-12-05 10:17:25.579544114 +0000 UTC m=+0.145751921 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 10:17:25 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:17:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:17:25 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2754966128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.621 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.624 281103 DEBUG nova.virt.libvirt.vif [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T10:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1667374142',display_name='tempest-VolumesBackupsTest-instance-1667374142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005546420.localdomain',hostname='tempest-volumesbackupstest-instance-1667374142',id=10,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyKqIsd+SCFDxbPlfFsm08RfHyG+rO21YddeoWaTQBVVF6Cco+Ied8SAdXLL5E1pUN90Je2RcobCpuLF0gx9YPyalVuHrbR3g3NAZ05ZQKCynlq8k7BUsFjqSil42j2GA==',key_name='tempest-keypair-1886164452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005546420.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005546420.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3554a89b305c449f9fd292eca5647512',ramdisk_id='',reservation_id='r-7yg9v6dz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-833308520',owner_user_name='tempest-VolumesBackupsTest-833308520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T10:17:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0b795e7702e342d9821a3667644be5b0',uuid=be3af3e0-e77e-4be9-9458-b874e91bdd42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.625 281103 DEBUG nova.network.os_vif_util [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Converting VIF {"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.626 281103 DEBUG nova.network.os_vif_util [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=8ddbcf74-77c9-415e-9ff7-3416cf2f699f,network=Network(05dd4ee6-5f37-4402-88a5-db28b0b4198e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ddbcf74-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.629 281103 DEBUG nova.objects.instance [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lazy-loading 'pci_devices' on Instance uuid be3af3e0-e77e-4be9-9458-b874e91bdd42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.644 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] End _get_guest_xml xml=<domain type="kvm">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <uuid>be3af3e0-e77e-4be9-9458-b874e91bdd42</uuid>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <name>instance-0000000a</name>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <memory>131072</memory>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <vcpu>1</vcpu>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <metadata>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <nova:name>tempest-VolumesBackupsTest-instance-1667374142</nova:name>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <nova:creationTime>2025-12-05 10:17:24</nova:creationTime>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <nova:flavor name="m1.nano">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <nova:memory>128</nova:memory>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <nova:disk>1</nova:disk>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <nova:swap>0</nova:swap>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <nova:vcpus>1</nova:vcpus>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       </nova:flavor>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <nova:owner>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <nova:user uuid="0b795e7702e342d9821a3667644be5b0">tempest-VolumesBackupsTest-833308520-project-member</nova:user>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <nova:project uuid="3554a89b305c449f9fd292eca5647512">tempest-VolumesBackupsTest-833308520</nova:project>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       </nova:owner>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <nova:root type="image" uuid="3647d20f-5e09-41b2-a6f3-f320b9e4e343"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <nova:ports>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <nova:port uuid="8ddbcf74-77c9-415e-9ff7-3416cf2f699f">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         </nova:port>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       </nova:ports>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     </nova:instance>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   </metadata>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <sysinfo type="smbios">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <system>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <entry name="manufacturer">RDO</entry>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <entry name="product">OpenStack Compute</entry>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <entry name="serial">be3af3e0-e77e-4be9-9458-b874e91bdd42</entry>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <entry name="uuid">be3af3e0-e77e-4be9-9458-b874e91bdd42</entry>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <entry name="family">Virtual Machine</entry>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     </system>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   </sysinfo>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <os>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <boot dev="hd"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <smbios mode="sysinfo"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   </os>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <features>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <acpi/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <apic/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <vmcoreinfo/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   </features>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <clock offset="utc">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <timer name="hpet" present="no"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   </clock>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <cpu mode="host-model" match="exact">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   </cpu>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   <devices>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <disk type="network" device="disk">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <driver type="raw" cache="none"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <source protocol="rbd" name="vms/be3af3e0-e77e-4be9-9458-b874e91bdd42_disk">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.103" port="6789"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.104" port="6789"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.105" port="6789"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       </source>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <auth username="openstack">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <secret type="ceph" uuid="79feddb1-4bfc-557f-83b9-0d57c9f66c1b"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       </auth>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <target dev="vda" bus="virtio"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     </disk>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <disk type="network" device="cdrom">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <driver type="raw" cache="none"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <source protocol="rbd" name="vms/be3af3e0-e77e-4be9-9458-b874e91bdd42_disk.config">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.103" port="6789"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.104" port="6789"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <host name="172.18.0.105" port="6789"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       </source>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <auth username="openstack">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:         <secret type="ceph" uuid="79feddb1-4bfc-557f-83b9-0d57c9f66c1b"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       </auth>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <target dev="sda" bus="sata"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     </disk>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <interface type="ethernet">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <mac address="fa:16:3e:2e:15:a9"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <model type="virtio"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <mtu size="1442"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <target dev="tap8ddbcf74-77"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     </interface>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <serial type="pty">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <log file="/var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42/console.log" append="off"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     </serial>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <video>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <model type="virtio"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     </video>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <input type="tablet" bus="usb"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <rng model="virtio">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <backend model="random">/dev/urandom</backend>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     </rng>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <controller type="usb" index="0"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     <memballoon model="virtio">
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:       <stats period="10"/>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:     </memballoon>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:   </devices>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: </domain>
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
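The block above is the complete libvirt domain XML nova generated for instance-0000000a: an RBD-backed virtio root disk, a SATA config-drive CDROM, one OVS-backed tap interface, and a q35 machine with a pcie-root plus pcie-root-port layout. A minimal sketch for pulling the same definition back out of libvirt for comparison, assuming the libvirt-python bindings and local qemu:///system access are available on this host (the snippet is an inspection aid, not part of nova):

    import libvirt

    # Connect to the local libvirtd and fetch the domain by the UUID from the log.
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('be3af3e0-e77e-4be9-9458-b874e91bdd42')
    print(dom.XMLDesc())  # the domain XML as libvirt currently sees it
    conn.close()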
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.645 281103 DEBUG nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Preparing to wait for external event network-vif-plugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.645 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.646 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.646 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
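The acquire/release pair above is oslo.concurrency's named-lock pattern guarding the per-instance event registry; the lock name is simply "<instance-uuid>-events". A hedged sketch of the same pattern (lock name copied from the log, the body is a placeholder):

    from oslo_concurrency import lockutils

    # Serialize access to this instance's pending external-event list.
    with lockutils.lock('be3af3e0-e77e-4be9-9458-b874e91bdd42-events'):
        pass  # create or look up the network-vif-plugged event entry here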
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.647 281103 DEBUG nova.virt.libvirt.vif [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T10:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1667374142',display_name='tempest-VolumesBackupsTest-instance-1667374142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005546420.localdomain',hostname='tempest-volumesbackupstest-instance-1667374142',id=10,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyKqIsd+SCFDxbPlfFsm08RfHyG+rO21YddeoWaTQBVVF6Cco+Ied8SAdXLL5E1pUN90Je2RcobCpuLF0gx9YPyalVuHrbR3g3NAZ05ZQKCynlq8k7BUsFjqSil42j2GA==',key_name='tempest-keypair-1886164452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005546420.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005546420.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3554a89b305c449f9fd292eca5647512',ramdisk_id='',reservation_id='r-7yg9v6dz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-833308520',owner_user_name='tempest-VolumesBackupsTest-833308520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T10:17:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0b795e7702e342d9821a3667644be5b0',uuid=be3af3e0-e77e-4be9-9458-b874e91bdd42,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.648 281103 DEBUG nova.network.os_vif_util [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Converting VIF {"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.648 281103 DEBUG nova.network.os_vif_util [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=8ddbcf74-77c9-415e-9ff7-3416cf2f699f,network=Network(05dd4ee6-5f37-4402-88a5-db28b0b4198e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ddbcf74-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.649 281103 DEBUG os_vif [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=8ddbcf74-77c9-415e-9ff7-3416cf2f699f,network=Network(05dd4ee6-5f37-4402-88a5-db28b0b4198e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ddbcf74-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.650 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.650 281103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.651 281103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.655 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.655 281103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ddbcf74-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.656 281103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ddbcf74-77, col_values=(('external_ids', {'iface-id': '8ddbcf74-77c9-415e-9ff7-3416cf2f699f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:15:a9', 'vm-uuid': 'be3af3e0-e77e-4be9-9458-b874e91bdd42'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.658 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.661 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.668 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.670 281103 INFO os_vif [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=8ddbcf74-77c9-415e-9ff7-3416cf2f699f,network=Network(05dd4ee6-5f37-4402-88a5-db28b0b4198e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ddbcf74-77')
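"Plugging vif" and "Successfully plugged vif" bracket the call into os-vif's public entry point (os_vif/__init__.py:76 per the log), which dispatches to the 'ovs' plugin named in the VIF object. A sketch of that entry point, assuming 'vif' holds the VIFOpenVSwitch object logged above:

    import os_vif
    from os_vif.objects import instance_info

    os_vif.initialize()  # load the registered os-vif plugins
    info = instance_info.InstanceInfo(
        uuid='be3af3e0-e77e-4be9-9458-b874e91bdd42',
        name='instance-0000000a')
    os_vif.plug(vif, info)  # 'vif' is the converted VIFOpenVSwitch from above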
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.735 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.736 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.736 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] No VIF found with MAC fa:16:3e:2e:15:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.736 281103 INFO nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Using config drive
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.767 281103 DEBUG nova.storage.rbd_utils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] rbd image be3af3e0-e77e-4be9-9458-b874e91bdd42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.868 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.957 281103 INFO nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Creating config drive at /var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.config
Dec 05 10:17:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:25.965 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk775nw4z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.096 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk775nw4z" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.140 281103 DEBUG nova.storage.rbd_utils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] rbd image be3af3e0-e77e-4be9-9458-b874e91bdd42_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.146 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.config be3af3e0-e77e-4be9-9458-b874e91bdd42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.379 281103 DEBUG oslo_concurrency.processutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.config be3af3e0-e77e-4be9-9458-b874e91bdd42_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.380 281103 INFO nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Deleting local config drive /var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.config because it was imported into RBD.
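The config-drive flow above runs in three steps: build the ISO9660 image with mkisofs, import it into the Ceph 'vms' pool as <uuid>_disk.config, then delete the local copy. The two external commands, replayed via oslo.concurrency's processutils as nova itself does (argument vectors copied from the log; /tmp/tmpk775nw4z was a throwaway build directory that no longer exists):

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42'
    processutils.execute(
        '/usr/bin/mkisofs', '-o', base + '/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpk775nw4z')
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', base + '/disk.config',
        'be3af3e0-e77e-4be9-9458-b874e91bdd42_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')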
Dec 05 10:17:26 np0005546420.localdomain systemd[1]: Started libvirt secret daemon.
Dec 05 10:17:26 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:26 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:26 np0005546420.localdomain ceph-mon[298353]: pgmap v627: 177 pgs: 177 active+clean; 202 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 79 KiB/s wr, 58 op/s
Dec 05 10:17:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2754966128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:26 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929846.4956] manager: (tap8ddbcf74-77): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Dec 05 10:17:26 np0005546420.localdomain kernel: device tap8ddbcf74-77 entered promiscuous mode
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.499 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:17:26Z|00362|binding|INFO|Claiming lport 8ddbcf74-77c9-415e-9ff7-3416cf2f699f for this chassis.
Dec 05 10:17:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:17:26Z|00363|binding|INFO|8ddbcf74-77c9-415e-9ff7-3416cf2f699f: Claiming fa:16:3e:2e:15:a9 10.100.0.14
Dec 05 10:17:26 np0005546420.localdomain systemd-udevd[327125]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:17:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:17:26Z|00364|binding|INFO|Setting lport 8ddbcf74-77c9-415e-9ff7-3416cf2f699f ovn-installed in OVS
Dec 05 10:17:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:17:26Z|00365|binding|INFO|Setting lport 8ddbcf74-77c9-415e-9ff7-3416cf2f699f up in Southbound
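At this point ovn-controller has claimed the logical port for this chassis, set ovn-installed on the OVS interface, and marked the port up in the Southbound database. One way to confirm that binding afterwards (an assumption for troubleshooting, not something the log shows being run) is to query the Southbound Port_Binding table:

    import subprocess

    # Requires ovn-sbctl and access to the OVN Southbound DB on this host.
    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=8ddbcf74-77c9-415e-9ff7-3416cf2f699f'],
        check=True)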
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.513 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.510 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:15:a9 10.100.0.14'], port_security=['fa:16:3e:2e:15:a9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3554a89b305c449f9fd292eca5647512', 'neutron:revision_number': '2', 'neutron:security_group_ids': '811851ce-aefb-4b50-bb3d-fd5f8bc97e90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=135cedc8-bceb-4f2f-8778-26f5bc6f81d3, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=8ddbcf74-77c9-415e-9ff7-3416cf2f699f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.512 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 8ddbcf74-77c9-415e-9ff7-3416cf2f699f in datapath 05dd4ee6-5f37-4402-88a5-db28b0b4198e bound to our chassis
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.514 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port f6b4fd13-f8dc-480f-9f29-ae687770e358 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.515 159503 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 05dd4ee6-5f37-4402-88a5-db28b0b4198e
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.518 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:26 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929846.5238] device (tap8ddbcf74-77): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 05 10:17:26 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929846.5250] device (tap8ddbcf74-77): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.528 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[d87e28e2-a6e6-4037-82f4-0449195b2659]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.529 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap05dd4ee6-51 in ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.531 307492 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap05dd4ee6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.531 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[891367c4-84fc-4fa9-ac98-cd7221c93610]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.532 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[62f1b721-cda5-4e3f-927d-0d08e1855d04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.544 159609 DEBUG oslo.privsep.daemon [-] privsep: reply[65545956-5bc0-4021-a64a-74a73c40e22e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain systemd-machined[203266]: New machine qemu-2-instance-0000000a.
Dec 05 10:17:26 np0005546420.localdomain systemd[1]: Started Virtual Machine qemu-2-instance-0000000a.
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.564 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[8313b1ee-767a-4ee3-b704-3714bc1502e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.598 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[bca73960-0871-472d-979f-96950ac2caa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.603 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[7846b3ba-6b7b-4708-8e6f-087bcc73c1e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929846.6060] manager: (tap05dd4ee6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.640 281103 DEBUG nova.compute.manager [req-a9b2d978-bc4d-4797-abdc-4bad16eb3d28 req-6dd080bd-1cd1-470c-995e-5ff77054709c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Received event network-vif-plugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.641 281103 DEBUG oslo_concurrency.lockutils [req-a9b2d978-bc4d-4797-abdc-4bad16eb3d28 req-6dd080bd-1cd1-470c-995e-5ff77054709c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.641 281103 DEBUG oslo_concurrency.lockutils [req-a9b2d978-bc4d-4797-abdc-4bad16eb3d28 req-6dd080bd-1cd1-470c-995e-5ff77054709c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.642 281103 DEBUG oslo_concurrency.lockutils [req-a9b2d978-bc4d-4797-abdc-4bad16eb3d28 req-6dd080bd-1cd1-470c-995e-5ff77054709c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.642 281103 DEBUG nova.compute.manager [req-a9b2d978-bc4d-4797-abdc-4bad16eb3d28 req-6dd080bd-1cd1-470c-995e-5ff77054709c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Processing event network-vif-plugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.643 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3ff77d-5916-465b-a8d2-57691e5fe288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.646 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[c697b605-19d8-4ba3-910c-7779528632ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap05dd4ee6-51: link becomes ready
Dec 05 10:17:26 np0005546420.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap05dd4ee6-50: link becomes ready
Dec 05 10:17:26 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929846.6712] device (tap05dd4ee6-50): carrier: link connected
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.677 308862 DEBUG oslo.privsep.daemon [-] privsep: reply[690ab521-3041-4fda-a579-2d786b63834f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain podman[327135]: 2025-12-05 10:17:26.692977347 +0000 UTC m=+0.110430720 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.696 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a47439d2-8928-4459-b9ef-cfdc1a6832e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05dd4ee6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:24:80:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1296479, 'reachable_time': 44152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327175, 'error': None, 'target': 'ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain podman[327135]: 2025-12-05 10:17:26.706296408 +0000 UTC m=+0.123749781 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.716 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[6345446b-f785-4523-8a28-edb55702e983]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:80de'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1296479, 'tstamp': 1296479}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327187, 'error': None, 'target': 'ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.732 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[952913c1-839d-4cfe-9482-e48ed5b2f113]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05dd4ee6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:24:80:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1296479, 'reachable_time': 44152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327196, 'error': None, 'target': 'ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.755 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[562c579d-f034-45d3-9dee-be59445d5a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.816 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8f1efb-0587-43cc-9f92-f826285f6f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.818 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05dd4ee6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.819 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.820 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05dd4ee6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.863 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:26 np0005546420.localdomain kernel: device tap05dd4ee6-50 entered promiscuous mode
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.868 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.869 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap05dd4ee6-50, col_values=(('external_ids', {'iface-id': '1ea8845d-38d2-4efb-91a5-56a9b0cf4fb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.871 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:17:26Z|00366|binding|INFO|Releasing lport 1ea8845d-38d2-4efb-91a5-56a9b0cf4fb7 from this chassis (sb_readonly=0)
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.877 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.878 159503 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/05dd4ee6-5f37-4402-88a5-db28b0b4198e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/05dd4ee6-5f37-4402-88a5-db28b0b4198e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.878 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a727e4-91ad-4b24-8aba-b8507c326e28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.879 159503 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: global
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     log         /dev/log local0 debug
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     log-tag     haproxy-metadata-proxy-05dd4ee6-5f37-4402-88a5-db28b0b4198e
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     user        root
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     group       root
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     maxconn     1024
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     pidfile     /var/lib/neutron/external/pids/05dd4ee6-5f37-4402-88a5-db28b0b4198e.pid.haproxy
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     daemon
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: defaults
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     log global
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     mode http
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     option httplog
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     option dontlognull
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     option http-server-close
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     option forwardfor
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     retries                 3
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout http-request    30s
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout connect         30s
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout client          32s
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout server          32s
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     timeout http-keep-alive 30s
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: listen listener
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     bind 169.254.169.254:80
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:     http-request add-header X-OVN-Network-ID 05dd4ee6-5f37-4402-88a5-db28b0b4198e
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 10:17:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:26.880 159503 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'env', 'PROCESS_TAG=haproxy-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/05dd4ee6-5f37-4402-88a5-db28b0b4198e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.967 281103 DEBUG nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.969 281103 DEBUG nova.virt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Emitting event <LifecycleEvent: 1764929846.9670825, be3af3e0-e77e-4be9-9458-b874e91bdd42 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.969 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] VM Started (Lifecycle Event)
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.975 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.979 281103 INFO nova.virt.libvirt.driver [-] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Instance spawned successfully.
Dec 05 10:17:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:26.979 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.166 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.177 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.182 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.183 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.184 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.184 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.185 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.185 281103 DEBUG nova.virt.libvirt.driver [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.213 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.214 281103 DEBUG nova.virt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Emitting event <LifecycleEvent: 1764929846.9686964, be3af3e0-e77e-4be9-9458-b874e91bdd42 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.214 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] VM Paused (Lifecycle Event)
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.237 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.241 281103 DEBUG nova.virt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Emitting event <LifecycleEvent: 1764929846.9741578, be3af3e0-e77e-4be9-9458-b874e91bdd42 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.241 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] VM Resumed (Lifecycle Event)
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.259 281103 INFO nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Took 4.90 seconds to spawn the instance on the hypervisor.
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.259 281103 DEBUG nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.263 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.273 281103 DEBUG nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.310 281103 INFO nova.compute.manager [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.335 281103 INFO nova.compute.manager [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Took 6.02 seconds to build instance.
Dec 05 10:17:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:27.363 281103 DEBUG oslo_concurrency.lockutils [None req-883c4d7b-47a9-4234-a8cb-46707ff21e47 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:27 np0005546420.localdomain podman[327256]: 2025-12-05 10:17:27.388041185 +0000 UTC m=+0.110728440 container create 942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 10:17:27 np0005546420.localdomain podman[327256]: 2025-12-05 10:17:27.336077621 +0000 UTC m=+0.058764916 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 10:17:27 np0005546420.localdomain systemd[1]: Started libpod-conmon-942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987.scope.
Dec 05 10:17:27 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:17:27 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7df46fcb1dbc419448566f52596c4c84782ca0e250fa0a138f39a78baeb8e2d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:17:27 np0005546420.localdomain podman[327256]: 2025-12-05 10:17:27.479016693 +0000 UTC m=+0.201703948 container init 942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:17:27 np0005546420.localdomain podman[327256]: 2025-12-05 10:17:27.488854658 +0000 UTC m=+0.211541923 container start 942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:17:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:17:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "format": "json"}]: dispatch
Dec 05 10:17:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:27 np0005546420.localdomain neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e[327270]: [NOTICE]   (327274) : New worker (327276) forked
Dec 05 10:17:27 np0005546420.localdomain neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e[327270]: [NOTICE]   (327274) : Loading success.
Dec 05 10:17:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:28.052 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:28 np0005546420.localdomain ceph-mon[298353]: pgmap v628: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 2.2 MiB/s wr, 43 op/s
Dec 05 10:17:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:17:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:28.717 281103 DEBUG nova.compute.manager [req-fe8332aa-7a95-40d6-a929-44de5b3e0150 req-87506e94-70fa-48dc-b103-532f924b2cd5 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Received event network-vif-plugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:28.718 281103 DEBUG oslo_concurrency.lockutils [req-fe8332aa-7a95-40d6-a929-44de5b3e0150 req-87506e94-70fa-48dc-b103-532f924b2cd5 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:28.719 281103 DEBUG oslo_concurrency.lockutils [req-fe8332aa-7a95-40d6-a929-44de5b3e0150 req-87506e94-70fa-48dc-b103-532f924b2cd5 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:28.719 281103 DEBUG oslo_concurrency.lockutils [req-fe8332aa-7a95-40d6-a929-44de5b3e0150 req-87506e94-70fa-48dc-b103-532f924b2cd5 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:28.719 281103 DEBUG nova.compute.manager [req-fe8332aa-7a95-40d6-a929-44de5b3e0150 req-87506e94-70fa-48dc-b103-532f924b2cd5 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] No waiting events found dispatching network-vif-plugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:17:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:28.720 281103 WARNING nova.compute.manager [req-fe8332aa-7a95-40d6-a929-44de5b3e0150 req-87506e94-70fa-48dc-b103-532f924b2cd5 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Received unexpected event network-vif-plugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f for instance with vm_state active and task_state None.
Dec 05 10:17:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:29.351 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:29.366 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:29 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:17:29Z|00367|binding|INFO|Releasing lport 1ea8845d-38d2-4efb-91a5-56a9b0cf4fb7 from this chassis (sb_readonly=0)
Dec 05 10:17:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:29.375 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:17:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:29.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:29.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:30 np0005546420.localdomain ceph-mon[298353]: pgmap v629: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 2.2 MiB/s wr, 43 op/s
Dec 05 10:17:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:17:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:17:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:30.658 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:30.750 281103 DEBUG nova.compute.manager [req-6899d9f4-fae9-48d5-9f9b-05f2bf635dc8 req-ede89e54-3d34-4047-bf5c-ef7610f0ea4a c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Received event network-changed-8ddbcf74-77c9-415e-9ff7-3416cf2f699f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:17:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:30.750 281103 DEBUG nova.compute.manager [req-6899d9f4-fae9-48d5-9f9b-05f2bf635dc8 req-ede89e54-3d34-4047-bf5c-ef7610f0ea4a c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Refreshing instance network info cache due to event network-changed-8ddbcf74-77c9-415e-9ff7-3416cf2f699f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 10:17:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:30.751 281103 DEBUG oslo_concurrency.lockutils [req-6899d9f4-fae9-48d5-9f9b-05f2bf635dc8 req-ede89e54-3d34-4047-bf5c-ef7610f0ea4a c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:17:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:30.751 281103 DEBUG oslo_concurrency.lockutils [req-6899d9f4-fae9-48d5-9f9b-05f2bf635dc8 req-ede89e54-3d34-4047-bf5c-ef7610f0ea4a c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquired lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:17:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:30.752 281103 DEBUG nova.network.neutron [req-6899d9f4-fae9-48d5-9f9b-05f2bf635dc8 req-ede89e54-3d34-4047-bf5c-ef7610f0ea4a c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Refreshing network info cache for port 8ddbcf74-77c9-415e-9ff7-3416cf2f699f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 10:17:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:30.868 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:30.888 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:31.145 281103 DEBUG nova.network.neutron [req-6899d9f4-fae9-48d5-9f9b-05f2bf635dc8 req-ede89e54-3d34-4047-bf5c-ef7610f0ea4a c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Updated VIF entry in instance network info cache for port 8ddbcf74-77c9-415e-9ff7-3416cf2f699f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 10:17:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:31.147 281103 DEBUG nova.network.neutron [req-6899d9f4-fae9-48d5-9f9b-05f2bf635dc8 req-ede89e54-3d34-4047-bf5c-ef7610f0ea4a c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Updating instance_info_cache with network_info: [{"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:17:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:31.166 281103 DEBUG oslo_concurrency.lockutils [req-6899d9f4-fae9-48d5-9f9b-05f2bf635dc8 req-ede89e54-3d34-4047-bf5c-ef7610f0ea4a c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Releasing lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:17:31 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=tempest-cephx-id-2094750145,client_metadata.root=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083],prefix=session evict} (starting...)
Dec 05 10:17:31 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:17:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 05 10:17:31 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 05 10:17:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:32.367 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:32.371 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:17:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:32.372 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:17:32 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:32 np0005546420.localdomain ceph-mon[298353]: pgmap v630: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 95 op/s
Dec 05 10:17:32 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1885903013' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:17:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:33.082 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:33 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:17:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/585223374' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:17:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:17:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 05 10:17:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 05 10:17:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:33.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:17:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:33.891 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:33.892 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:33.892 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:33.892 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:17:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:33.892 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:17:34 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2803234541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.293 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.374 281103 DEBUG nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.375 281103 DEBUG nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.595 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.596 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11340MB free_disk=41.77423095703125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.596 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.597 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:17:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:17:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:17:34 np0005546420.localdomain ceph-mon[298353]: pgmap v631: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 108 op/s
Dec 05 10:17:34 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2803234541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.650 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Instance be3af3e0-e77e-4be9-9458-b874e91bdd42 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.650 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.650 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:17:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:34.680 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:17:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:17:35 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3217458331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:17:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:35.174 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:17:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:35.182 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:17:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:35.206 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:17:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:17:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:17:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:35.537 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:17:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:35.538 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:17:35 np0005546420.localdomain podman[327332]: 2025-12-05 10:17:35.535061646 +0000 UTC m=+0.105687744 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 05 10:17:35 np0005546420.localdomain podman[327332]: 2025-12-05 10:17:35.572332046 +0000 UTC m=+0.142958194 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 05 10:17:35 np0005546420.localdomain podman[327331]: 2025-12-05 10:17:35.587844406 +0000 UTC m=+0.160212618 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:17:35 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:17:35 np0005546420.localdomain podman[327331]: 2025-12-05 10:17:35.598579856 +0000 UTC m=+0.170948068 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:17:35 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
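
The health_status/exec_died pairs above are podman's healthcheck machinery at work: systemd starts a transient unit that runs "podman healthcheck run <container-id>", the container's configured test ('/openstack/healthcheck' in the config_data) executes, and its exit code becomes the health_status event. A minimal sketch of that step, assuming podman is on PATH and reusing the ovn_metadata_agent container ID from the log:

    import subprocess

    # Run the container's configured healthcheck test, as the transient
    # systemd unit above does; the container ID is the one from the log.
    cid = "e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0"
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    # Exit code 0 surfaces as health_status=healthy in the journal event;
    # a non-zero exit marks the container unhealthy.
    print("healthy" if rc == 0 else "unhealthy")
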
Dec 05 10:17:35 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:17:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished
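
The ceph-mon entries above trace manila's CephFS access path: client.openstack dispatches an "fs subvolume authorize" command, and the mgr volumes module expands it into "auth get" / "auth get-or-create" with mds/osd caps restricted to the subvolume path and its RADOS namespace. A minimal sketch of dispatching the same JSON command through the rados Python binding, assuming a reachable cluster and the client.openstack keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    # Same JSON payload as the dispatched command in the log; the mgr then
    # derives the path- and namespace-scoped caps seen in auth get-or-create.
    cmd = {
        "prefix": "fs subvolume authorize",
        "vol_name": "cephfs",
        "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7",
        "auth_id": "tempest-cephx-id-2094750145",
        "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0",
        "access_level": "rw",
        "format": "json",
    }
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
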
Dec 05 10:17:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3217458331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:17:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:35.659 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:36 np0005546420.localdomain ceph-mon[298353]: pgmap v632: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 107 op/s
Dec 05 10:17:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:17:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:17:37 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:17:37 np0005546420.localdomain ceph-mon[298353]: pgmap v633: 177 pgs: 177 active+clean; 250 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 112 op/s
Dec 05 10:17:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:38.089 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:38 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=tempest-cephx-id-2094750145,client_metadata.root=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083],prefix=session evict} (starting...)
Dec 05 10:17:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:17:38 np0005546420.localdomain podman[327371]: 2025-12-05 10:17:38.537228817 +0000 UTC m=+0.102922808 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:17:38 np0005546420.localdomain podman[327371]: 2025-12-05 10:17:38.581436602 +0000 UTC m=+0.147130603 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd)
Dec 05 10:17:38 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:17:38 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 05 10:17:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 05 10:17:38 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
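
Revocation is the mirror image: "fs subvolume deauthorize" drops the auth entity (the auth rm above), and "fs subvolume evict" kicks any sessions the client still holds, which the MDS records as the "session evict" asok_command with auth_name and client_metadata.root filters. A hedged sketch of the eviction step, assuming the MDS name and filter value from the log:

    import subprocess

    # Evict live MDS sessions for the revoked cephx identity; this is the
    # operation behind the asok_command "session evict" lines above.
    subprocess.run(
        ["ceph", "tell", "mds.mds.np0005546420.eqhasr",
         "session", "evict",
         "auth_name=tempest-cephx-id-2094750145"],
        check=True,
    )
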
Dec 05 10:17:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:17:39.375 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
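
The DbSetCommand transaction above is the OVN metadata agent acknowledging southbound configuration: it bumps neutron:ovn-metadata-sb-cfg in the external_ids of its Chassis_Private row. A minimal sketch under the assumption that sb_idl is an already-connected ovsdbapp API object for the southbound database (the agent wires this up via ovsdbapp.backend.ovs_idl):

    # Assumes sb_idl is a connected ovsdbapp southbound API object.
    def bump_metadata_sb_cfg(sb_idl, chassis_uuid, cfg):
        # Mirrors the logged DbSetCommand, including if_exists=True so a
        # missing Chassis_Private row is not treated as an error.
        sb_idl.db_set(
            "Chassis_Private", chassis_uuid,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(cfg)}),
            if_exists=True,
        ).execute(check_error=True)
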
Dec 05 10:17:39 np0005546420.localdomain ceph-mon[298353]: pgmap v634: 177 pgs: 177 active+clean; 250 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 96 KiB/s wr, 79 op/s
Dec 05 10:17:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:17:39 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:17:39 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 05 10:17:39 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 05 10:17:39 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:17:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:40.698 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:40 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:17:42 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:17:42 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:42 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:42 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:17:42 np0005546420.localdomain ceph-mon[298353]: pgmap v635: 177 pgs: 177 active+clean; 263 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 103 op/s
Dec 05 10:17:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:43.110 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:43 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:17:43Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:15:a9 10.100.0.14
Dec 05 10:17:43 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:17:43Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:15:a9 10.100.0.14
Dec 05 10:17:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:17:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:17:44 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:17:44 np0005546420.localdomain ceph-mon[298353]: pgmap v636: 177 pgs: 177 active+clean; 271 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 898 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 05 10:17:44 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 05 10:17:44 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=tempest-cephx-id-2094750145,client_metadata.root=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083],prefix=session evict} (starting...)
Dec 05 10:17:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 05 10:17:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 05 10:17:45 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 05 10:17:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:45.700 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 05 10:17:46 np0005546420.localdomain ceph-mon[298353]: pgmap v637: 177 pgs: 177 active+clean; 271 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Dec 05 10:17:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:17:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 05 10:17:46 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:17:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:17:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:17:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:17:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156102 "" "Go-http-client/1.1"
Dec 05 10:17:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:17:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19217 "" "Go-http-client/1.1"
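
The two podman[240363] requests above are the podman_exporter scraping the libpod REST API over the unix socket (its CONTAINER_HOST is unix:///run/podman/podman.sock, per its container config later in this log). A sketch of the same two calls, assuming the requests-unixsocket package:

    import requests_unixsocket

    session = requests_unixsocket.Session()
    base = "http+unix://%2Frun%2Fpodman%2Fpodman.sock/v4.9.3/libpod"
    # GET /libpod/containers/json?all=true (the 156 kB response above).
    containers = session.get(base + "/containers/json?all=true&external=false").json()
    # GET /libpod/containers/stats?stream=false (a one-shot stats sample).
    stats = session.get(base + "/containers/stats?stream=false&interval=1").json()
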
Dec 05 10:17:47 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:17:47 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 05 10:17:47 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:17:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:48.163 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:48 np0005546420.localdomain ceph-mon[298353]: pgmap v638: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 361 KiB/s rd, 2.3 MiB/s wr, 80 op/s
Dec 05 10:17:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:17:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:17:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:17:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:17:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:17:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:17:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:17:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:17:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:17:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
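
The openstack_network_exporter errors above are expected on a compute node: only ovn-controller and ovs-vswitchd run here, so no ovn-northd unixctl socket exists at all, and the ovsdb-server socket is not at the path the exporter checks from inside its container. A hypothetical probe showing what "no control socket files found" means, assuming northd's conventional rundir:

    import glob

    # ovn-northd would create /var/run/ovn/ovn-northd.<pid>.ctl on the node
    # that runs it; on this compute node the glob comes back empty, which
    # the exporter reports as "no control socket files found".
    print(glob.glob("/var/run/ovn/ovn-northd.*.ctl"))
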
Dec 05 10:17:49 np0005546420.localdomain snmpd[68010]: empty variable list in _query
Dec 05 10:17:49 np0005546420.localdomain snmpd[68010]: empty variable list in _query
Dec 05 10:17:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "format": "json"}]: dispatch
Dec 05 10:17:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "force": true, "format": "json"}]: dispatch
Dec 05 10:17:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:17:50 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:17:50 np0005546420.localdomain podman[327392]: 2025-12-05 10:17:50.512904914 +0000 UTC m=+0.084986885 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public)
Dec 05 10:17:50 np0005546420.localdomain podman[327392]: 2025-12-05 10:17:50.528056132 +0000 UTC m=+0.100138123 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.6)
Dec 05 10:17:50 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:17:50 np0005546420.localdomain ceph-mon[298353]: pgmap v639: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 361 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Dec 05 10:17:50 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:17:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:17:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:17:50 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:17:50 np0005546420.localdomain podman[327393]: 2025-12-05 10:17:50.619896098 +0000 UTC m=+0.184752575 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:17:50 np0005546420.localdomain podman[327393]: 2025-12-05 10:17:50.65981033 +0000 UTC m=+0.224666817 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:17:50 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:17:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:50.702 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "format": "json"}]: dispatch
Dec 05 10:17:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "format": "json"}]: dispatch
Dec 05 10:17:52 np0005546420.localdomain ceph-mon[298353]: pgmap v640: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 362 KiB/s rd, 2.3 MiB/s wr, 80 op/s
Dec 05 10:17:53 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:17:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:53.213 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "format": "json"}]: dispatch
Dec 05 10:17:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "force": true, "format": "json"}]: dispatch
Dec 05 10:17:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:17:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 05 10:17:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 05 10:17:54 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:17:54 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:17:54 np0005546420.localdomain ceph-mon[298353]: pgmap v641: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 331 KiB/s rd, 1.1 MiB/s wr, 56 op/s
Dec 05 10:17:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:17:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:55.705 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:56 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:17:56 np0005546420.localdomain podman[327436]: 2025-12-05 10:17:56.517041371 +0000 UTC m=+0.089227416 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:17:56 np0005546420.localdomain podman[327436]: 2025-12-05 10:17:56.567464418 +0000 UTC m=+0.139650463 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:17:56 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:17:56 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "snap_name": "aa63b329-df82-4a19-aa39-a051e214eb1e_9eff7ce6-e5ba-47c0-8e34-473dc31708f3", "force": true, "format": "json"}]: dispatch
Dec 05 10:17:56 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "snap_name": "aa63b329-df82-4a19-aa39-a051e214eb1e", "force": true, "format": "json"}]: dispatch
Dec 05 10:17:56 np0005546420.localdomain ceph-mon[298353]: pgmap v642: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 283 KiB/s rd, 270 KiB/s wr, 51 op/s
Dec 05 10:17:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:17:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:17:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:17:57 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e246 e246: 6 total, 6 up, 6 in
Dec 05 10:17:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:17:57 np0005546420.localdomain podman[327461]: 2025-12-05 10:17:57.499107409 +0000 UTC m=+0.076782101 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 10:17:57 np0005546420.localdomain podman[327461]: 2025-12-05 10:17:57.509424817 +0000 UTC m=+0.087099549 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 10:17:57 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:17:57 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:17:57 np0005546420.localdomain ceph-mon[298353]: osdmap e246: 6 total, 6 up, 6 in
Dec 05 10:17:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:17:58.214 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:17:58 np0005546420.localdomain ceph-mon[298353]: pgmap v644: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 97 KiB/s wr, 10 op/s
Dec 05 10:17:59 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:17:59Z|00368|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 05 10:17:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "63723774-424e-4107-9f82-8c494eaae0eb", "format": "json"}]: dispatch
Dec 05 10:17:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "force": true, "format": "json"}]: dispatch
Dec 05 10:17:59 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:18:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:00 np0005546420.localdomain ceph-mon[298353]: pgmap v645: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 97 KiB/s wr, 10 op/s
Dec 05 10:18:00 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:18:00 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:18:00 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 05 10:18:00 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 05 10:18:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:00.708 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:18:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:02.235 281103 DEBUG oslo_concurrency.lockutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:02.236 281103 DEBUG oslo_concurrency.lockutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:02.257 281103 DEBUG nova.objects.instance [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lazy-loading 'flavor' on Instance uuid be3af3e0-e77e-4be9-9458-b874e91bdd42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:18:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:02.301 281103 INFO nova.virt.libvirt.driver [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Ignoring supplied device name: /dev/vdb
Dec 05 10:18:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:02.318 281103 DEBUG oslo_concurrency.lockutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
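
The Acquiring/acquired/released triplet above comes from oslo.concurrency's lockutils, which nova uses to serialize block-device operations per instance UUID. A minimal sketch of the same pattern, using the instance UUID from the log:

    from oslo_concurrency import lockutils

    # lockutils emits the "Acquiring"/"acquired"/"released" DEBUG lines
    # seen above around the decorated body.
    @lockutils.synchronized("be3af3e0-e77e-4be9-9458-b874e91bdd42")
    def do_reserve():
        # pick a free device name and create the block-device mapping
        pass
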
Dec 05 10:18:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:02.498 281103 DEBUG oslo_concurrency.lockutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:02.498 281103 DEBUG oslo_concurrency.lockutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:02.499 281103 INFO nova.compute.manager [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Attaching volume c4d37f21-a692-4fd8-a5cf-3ec6f97372f0 to /dev/vdb
Dec 05 10:18:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:02.585 281103 DEBUG os_brick.utils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.107', 'multipath': True, 'enforce_multipath': True, 'host': 'np0005546420.localdomain', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 05 10:18:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:02.587 281103 INFO oslo.privsep.daemon [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmppc3i9scf/privsep.sock']
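
The attach flow then gathers initiator-side connection properties through os-brick; the trace above shows the call and its arguments, and the privsep helper line shows how the privileged parts (multipathd and iSCSI initiator queries) are proxied to a root daemon spawned via sudo and nova-rootwrap. A sketch mirroring the logged call:

    from os_brick.initiator import connector

    # Arguments are the ones visible in the get_connector_properties trace.
    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.107",
        multipath=True,
        enforce_multipath=True,
        host="np0005546420.localdomain",
    )
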
Dec 05 10:18:02 np0005546420.localdomain ceph-mon[298353]: pgmap v646: 177 pgs: 177 active+clean; 285 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 82 KiB/s wr, 9 op/s
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.217 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.358 281103 INFO oslo.privsep.daemon [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Spawned new privsep daemon via rootwrap
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.227 327483 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.232 327483 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.235 327483 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.235 327483 INFO oslo.privsep.daemon [-] privsep daemon running as pid 327483
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.362 327483 DEBUG oslo.privsep.daemon [-] privsep: reply[0a42af36-7d7a-4522-a6c2-2caa3fa9a5b5]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.467 327483 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.481 327483 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.482 327483 DEBUG oslo.privsep.daemon [-] privsep: reply[be2be383-a321-4428-8386-795939c92311]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.484 327483 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.495 327483 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.495 327483 DEBUG oslo.privsep.daemon [-] privsep: reply[d45286ea-65b3-4a66-8d80-0aab7ad10e52]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4f5bb6fc28b8\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.498 327483 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.508 327483 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.508 327483 DEBUG oslo.privsep.daemon [-] privsep: reply[775b68b5-1188-40df-afd9-ac68e41246af]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.512 327483 DEBUG oslo.privsep.daemon [-] privsep: reply[65a74dde-e4a5-448d-95db-3c4a8ceefd14]: (4, '38a014e5-f211-4fa1-8868-c362af7c3bc6') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.512 281103 DEBUG oslo_concurrency.processutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.537 281103 DEBUG oslo_concurrency.processutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.539 281103 DEBUG os_brick.initiator.connectors.lightos [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.540 281103 DEBUG os_brick.initiator.connectors.lightos [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.541 281103 DEBUG os_brick.initiator.connectors.lightos [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:38a014e5-f211-4fa1-8868-c362af7c3bc6 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.541 281103 DEBUG os_brick.utils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] <== get_connector_properties: return (954ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.107', 'host': 'np0005546420.localdomain', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4f5bb6fc28b8', 'do_local_attach': False, 'nvme_hostid': '38a014e5-f211-4fa1-8868-c362af7c3bc6', 'system uuid': '38a014e5-f211-4fa1-8868-c362af7c3bc6', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:38a014e5-f211-4fa1-8868-c362af7c3bc6', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 05 10:18:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:03.541 281103 DEBUG nova.virt.block_device [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Updating existing volume attachment record: 700f71d7-2eb6-4023-9419-e493904847e4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 05 10:18:03 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:18:03 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:18:03 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:18:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2133922943' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:18:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2133922943' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:18:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:04.136 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:04.137 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:04.137 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:04 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:18:04 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3970738749' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.287 281103 DEBUG oslo_concurrency.lockutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.288 281103 DEBUG oslo_concurrency.lockutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.290 281103 DEBUG oslo_concurrency.lockutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.302 281103 DEBUG nova.objects.instance [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lazy-loading 'flavor' on Instance uuid be3af3e0-e77e-4be9-9458-b874e91bdd42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.327 281103 DEBUG nova.virt.libvirt.driver [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Attempting to attach volume c4d37f21-a692-4fd8-a5cf-3ec6f97372f0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.331 281103 DEBUG nova.virt.libvirt.guest [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] attach device xml: <disk type="network" device="disk">
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:   <source protocol="rbd" name="volumes/volume-c4d37f21-a692-4fd8-a5cf-3ec6f97372f0">
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:     <host name="172.18.0.103" port="6789"/>
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:     <host name="172.18.0.104" port="6789"/>
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:     <host name="172.18.0.105" port="6789"/>
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:   </source>
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:   <auth username="openstack">
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:     <secret type="ceph" uuid="79feddb1-4bfc-557f-83b9-0d57c9f66c1b"/>
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:   </auth>
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:   <target dev="vdb" bus="virtio"/>
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:   <serial>c4d37f21-a692-4fd8-a5cf-3ec6f97372f0</serial>
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: </disk>
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.483 281103 DEBUG nova.virt.libvirt.driver [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.484 281103 DEBUG nova.virt.libvirt.driver [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.485 281103 DEBUG nova.virt.libvirt.driver [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.486 281103 DEBUG nova.virt.libvirt.driver [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] No VIF found with MAC fa:16:3e:2e:15:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 10:18:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:04.639 281103 DEBUG oslo_concurrency.lockutils [None req-8fb546f3-54d2-4a3b-b41e-805c563a99cd 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:04 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:18:04 np0005546420.localdomain ceph-mon[298353]: pgmap v647: 177 pgs: 177 active+clean; 285 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 82 KiB/s wr, 9 op/s
Dec 05 10:18:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3970738749' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:05.710 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:05.763 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e247 e247: 6 total, 6 up, 6 in
Dec 05 10:18:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:18:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:18:06 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:18:06 np0005546420.localdomain podman[327513]: 2025-12-05 10:18:06.518026946 +0000 UTC m=+0.087002926 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:18:06 np0005546420.localdomain podman[327513]: 2025-12-05 10:18:06.552456659 +0000 UTC m=+0.121432699 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 05 10:18:06 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:18:06 np0005546420.localdomain podman[327512]: 2025-12-05 10:18:06.576581344 +0000 UTC m=+0.145401930 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:18:06 np0005546420.localdomain podman[327512]: 2025-12-05 10:18:06.59134731 +0000 UTC m=+0.160167886 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:18:06 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:18:06 np0005546420.localdomain ceph-mon[298353]: pgmap v648: 177 pgs: 177 active+clean; 285 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 82 KiB/s wr, 9 op/s
Dec 05 10:18:06 np0005546420.localdomain ceph-mon[298353]: osdmap e247: 6 total, 6 up, 6 in
Dec 05 10:18:06 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:18:06 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 05 10:18:06 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 05 10:18:06 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2562947484' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:18:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:18:07 np0005546420.localdomain ceph-mon[298353]: pgmap v650: 177 pgs: 177 active+clean; 285 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 KiB/s rd, 76 KiB/s wr, 11 op/s
Dec 05 10:18:07 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e248 e248: 6 total, 6 up, 6 in
Dec 05 10:18:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:08.220 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:08.329 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:08 np0005546420.localdomain ceph-mon[298353]: osdmap e248: 6 total, 6 up, 6 in
Dec 05 10:18:08 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e249 e249: 6 total, 6 up, 6 in
Dec 05 10:18:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:18:09 np0005546420.localdomain podman[327554]: 2025-12-05 10:18:09.510131257 +0000 UTC m=+0.091090784 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:18:09 np0005546420.localdomain podman[327554]: 2025-12-05 10:18:09.52545495 +0000 UTC m=+0.106414477 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:18:09 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:18:09 np0005546420.localdomain ceph-mon[298353]: osdmap e249: 6 total, 6 up, 6 in
Dec 05 10:18:09 np0005546420.localdomain ceph-mon[298353]: pgmap v653: 177 pgs: 177 active+clean; 285 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 5.5 KiB/s rd, 47 KiB/s wr, 10 op/s
Dec 05 10:18:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:18:09 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:18:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:10.712 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:11 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:18:11 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4056044193' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:11 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:18:11 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:18:12 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4056044193' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:12 np0005546420.localdomain ceph-mon[298353]: pgmap v654: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 119 KiB/s wr, 45 op/s
Dec 05 10:18:12 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e250 e250: 6 total, 6 up, 6 in
Dec 05 10:18:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:13.223 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:13 np0005546420.localdomain ceph-mon[298353]: osdmap e250: 6 total, 6 up, 6 in
Dec 05 10:18:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:18:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 05 10:18:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 05 10:18:13 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:18:13 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e251 e251: 6 total, 6 up, 6 in
Dec 05 10:18:13 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:13.537 262769 INFO neutron.agent.linux.ip_lib [None req-f296d088-b71a-4439-a849-f02d122e77ae - - - - - -] Device tap3ef9b2ec-8c cannot be used as it has no MAC address
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.576 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}337ba064ca738369bb927fa0024d0466ca16dbd55f55284a981534d9ed682655" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Dec 05 10:18:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:13.614 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:13 np0005546420.localdomain kernel: device tap3ef9b2ec-8c entered promiscuous mode
Dec 05 10:18:13 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929893.6264] manager: (tap3ef9b2ec-8c): new Generic device (/org/freedesktop/NetworkManager/Devices/64)
Dec 05 10:18:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:13Z|00369|binding|INFO|Claiming lport 3ef9b2ec-8c68-40f2-a60a-84c7dfe5c540 for this chassis.
Dec 05 10:18:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:13Z|00370|binding|INFO|3ef9b2ec-8c68-40f2-a60a-84c7dfe5c540: Claiming unknown
Dec 05 10:18:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:13.627 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:13 np0005546420.localdomain systemd-udevd[327583]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:18:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:13.638 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-936d20b6-4037-4d6d-940d-7d41a03115b3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936d20b6-4037-4d6d-940d-7d41a03115b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af5a639003e24f09b8489ea02f308e0f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b9e4954-e6a8-4a58-9b03-332fb24a38bd, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=3ef9b2ec-8c68-40f2-a60a-84c7dfe5c540) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:18:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:13.640 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 3ef9b2ec-8c68-40f2-a60a-84c7dfe5c540 in datapath 936d20b6-4037-4d6d-940d-7d41a03115b3 bound to our chassis
Dec 05 10:18:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:13.644 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4d14196b-0900-41ad-9dd0-a079084d5d46 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:18:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:13.644 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 936d20b6-4037-4d6d-940d-7d41a03115b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:18:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:13.646 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8bc16b-c944-419c-bbfc-c0360d0cd03a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap3ef9b2ec-8c: No such device
Dec 05 10:18:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:13Z|00371|binding|INFO|Setting lport 3ef9b2ec-8c68-40f2-a60a-84c7dfe5c540 ovn-installed in OVS
Dec 05 10:18:13 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:13Z|00372|binding|INFO|Setting lport 3ef9b2ec-8c68-40f2-a60a-84c7dfe5c540 up in Southbound
Dec 05 10:18:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:13.666 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap3ef9b2ec-8c: No such device
Dec 05 10:18:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap3ef9b2ec-8c: No such device
Dec 05 10:18:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap3ef9b2ec-8c: No such device
Dec 05 10:18:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap3ef9b2ec-8c: No such device
Dec 05 10:18:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap3ef9b2ec-8c: No such device
Dec 05 10:18:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap3ef9b2ec-8c: No such device
Dec 05 10:18:13 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap3ef9b2ec-8c: No such device
Dec 05 10:18:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:13.740 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.751 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Fri, 05 Dec 2025 10:18:13 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-562d2b4d-9c4f-43de-97fe-124e06c9ee8b x-openstack-request-id: req-562d2b4d-9c4f-43de-97fe-124e06c9ee8b _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.752 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "445199a6-1f73-405e-82f4-8bd8c4bb34c6", "name": "m1.nano", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/445199a6-1f73-405e-82f4-8bd8c4bb34c6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/445199a6-1f73-405e-82f4-8bd8c4bb34c6"}]}, {"id": "82e6442b-8002-49d4-85f7-18b877efaccf", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/82e6442b-8002-49d4-85f7-18b877efaccf"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/82e6442b-8002-49d4-85f7-18b877efaccf"}]}, {"id": "bb6181df-1ada-42c2-81f6-896f08302073", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/bb6181df-1ada-42c2-81f6-896f08302073"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/bb6181df-1ada-42c2-81f6-896f08302073"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.752 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-562d2b4d-9c4f-43de-97fe-124e06c9ee8b request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.756 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/445199a6-1f73-405e-82f4-8bd8c4bb34c6 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}337ba064ca738369bb927fa0024d0466ca16dbd55f55284a981534d9ed682655" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.772 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 493 Content-Type: application/json Date: Fri, 05 Dec 2025 10:18:13 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-15ea0c23-e505-43be-b90c-c90c62c63c1b x-openstack-request-id: req-15ea0c23-e505-43be-b90c-c90c62c63c1b _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.773 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "445199a6-1f73-405e-82f4-8bd8c4bb34c6", "name": "m1.nano", "ram": 128, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/445199a6-1f73-405e-82f4-8bd8c4bb34c6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/445199a6-1f73-405e-82f4-8bd8c4bb34c6"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.773 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/445199a6-1f73-405e-82f4-8bd8c4bb34c6 used request id req-15ea0c23-e505-43be-b90c-c90c62c63c1b request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.776 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'name': 'tempest-VolumesBackupsTest-instance-1667374142', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'np0005546420.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3554a89b305c449f9fd292eca5647512', 'user_id': '0b795e7702e342d9821a3667644be5b0', 'hostId': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.776 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.836 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.write.bytes volume: 72949760 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.837 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.838 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f4da9aaf-10d4-4316-997f-195ef46c192f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 72949760, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vda', 'timestamp': '2025-12-05T10:18:13.777143', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'b542e82e-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '0b01624b6698b79c6242cd82c86106033aa7ac2e3c4314e9099cd65424e3ef15'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vdb', 'timestamp': '2025-12-05T10:18:13.777143', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'b54306ce-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '172f396f3d2821aa5b01864bacd9237bad89f9e68584e2c103b1d956b4515b0b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-sda', 'timestamp': '2025-12-05T10:18:13.777143', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'b5431aba-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': 'e13f96a080391383f3067a422980ee51ad7a93c346887f664017a4679c005035'}]}, 'timestamp': '2025-12-05 10:18:13.838993', '_unique_id': '5ff7f64230784bdc8f049ae0b1b496e2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.847 12 ERROR oslo_messaging.notify.messaging 
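The traceback above is the failure path for every sample the agent tries to publish while the broker is down: kombu's `_reraise_as_library_errors` wraps the socket-level `ConnectionRefusedError` into `kombu.exceptions.OperationalError`. A minimal sketch that reproduces the same wrapped error through the same kombu path, assuming the broker is RabbitMQ on localhost:5672 (the log never prints the transport URL):

```python
# Reproduction sketch. Assumption: broker URL; the log does not show
# where ceilometer's oslo.messaging transport actually points.
import kombu
from kombu.exceptions import OperationalError

conn = kombu.Connection("amqp://guest:guest@localhost:5672//",
                        connect_timeout=2)
try:
    # Same chain as the traceback: ensure_connection ->
    # _ensure_connection -> retry_over_time -> pyamqp
    # establish_connection; the ConnectionRefusedError is re-raised
    # as OperationalError by _reraise_as_library_errors.
    conn.ensure_connection(max_retries=1)
except OperationalError as exc:
    print("broker unreachable:", exc)  # [Errno 111] Connection refused
finally:
    conn.release()
```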
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.852 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.857 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for be3af3e0-e77e-4be9-9458-b874e91bdd42 / tap8ddbcf74-77 inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.857 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
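The "No delta meter predecessor" DEBUG line followed by "volume: 0" reflects how `.delta` meters behave: the first observation of a resource has no cached predecessor to subtract from, so the sample is reported as 0. A toy sketch of that logic (illustrative names only, not ceilometer's actual internals):

```python
# First poll of a delta meter has no predecessor -> volume 0.
_prev = {}

def delta(resource_id, counter, value):
    key = (resource_id, counter)
    prev = _prev.get(key)
    _prev[key] = value
    # Matches "No delta meter predecessor ..." then "volume: 0" above.
    return 0 if prev is None else max(0, value - prev)
```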
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.859 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '698833a2-6d87-4aac-9412-2d651dd1f4cb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'instance-0000000a-be3af3e0-e77e-4be9-9458-b874e91bdd42-tap8ddbcf74-77', 'timestamp': '2025-12-05T10:18:13.853130', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'tap8ddbcf74-77', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:2e:15:a9', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8ddbcf74-77'}, 'message_id': 'b54619ae-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.04323284, 'message_signature': '4e5beb2db2afe02f4812adc624a1659a463bef213ebe26c0625daf2abb6a0c01'}]}, 'timestamp': '2025-12-05 10:18:13.858646', '_unique_id': 'bf0d2ba93eb24a5fbd9bbd48e885ffee'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.861 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.881 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.882 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.882 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.884 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd3857bce-0d23-4b3e-adda-d6edf8a8d651', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vda', 'timestamp': '2025-12-05T10:18:13.861835', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'b549aca4-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.051880666, 'message_signature': '4ac3e5dc6575716c03cb07b6de8b9d84ed308327a2b69a18b065886566c58da9'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vdb', 'timestamp': '2025-12-05T10:18:13.861835', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'b549c39c-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.051880666, 'message_signature': '302a4f32c3880762dfa88aae72e04bdb92c0334acb12425fc5efe035872424d2'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-sda', 'timestamp': '2025-12-05T10:18:13.861835', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 
'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'b549d684-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.051880666, 'message_signature': '64f53a37aa8a0939e4800208cca2f9422ba089bd7d52ba6524b2757a445dc458'}]}, 'timestamp': '2025-12-05 10:18:13.883094', '_unique_id': '9760edae3ee84d51a5f5b6abc9546b23'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
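For context on the sending side, each "Could not send notification to notifications" line comes from an oslo.messaging Notifier publishing a SAMPLE-priority `telemetry.polling` event. A hedged sketch of that publisher; the publisher_id, topic, event type, and priority are taken from the payloads above, while the transport URL and `retry=0` are assumptions:

```python
from oslo_config import cfg
import oslo_messaging

conf = cfg.ConfigOpts()
# Assumed broker URL; the log never prints it.
transport = oslo_messaging.get_notification_transport(
    conf, url="rabbit://guest:guest@localhost:5672/")
notifier = oslo_messaging.Notifier(
    transport,
    publisher_id="ceilometer.polling",  # matches the payloads above
    driver="messagingv2",
    topics=["notifications"],           # target named in the ERROR line
    retry=0)                            # fail fast instead of retrying
# SAMPLE priority + telemetry.polling event type, as in the log; with
# the broker down, oslo.messaging logs the kombu OperationalError.
notifier.sample({}, event_type="telemetry.polling",
                payload={"samples": []})
```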
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.886 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.887 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.887 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-VolumesBackupsTest-instance-1667374142>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-VolumesBackupsTest-instance-1667374142>]
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.888 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.888 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.888 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [<NovaLikeServer: tempest-VolumesBackupsTest-instance-1667374142>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-VolumesBackupsTest-instance-1667374142>]
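The two "Prevent pollster ... anymore!" errors above show ceilometer's blacklisting mechanism: when the libvirt inspector cannot supply a meter at all, the pollster raises `PollsterPermanentError` and the polling manager stops polling those resources for that meter instead of failing on every cycle. A hedged sketch of the pattern; the exception's module path is from the log, while the surrounding function shape is an assumption:

```python
from ceilometer.polling import plugin_base

def get_samples(resources):
    # LibvirtInspector provides no data for this meter (see the DEBUG
    # line above), so mark these resources permanently unpollable for
    # this pollster.
    raise plugin_base.PollsterPermanentError(resources)
```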
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.888 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.889 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.891 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '760fa5d2-c7ed-4ed4-b032-501e7a9c3073', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'instance-0000000a-be3af3e0-e77e-4be9-9458-b874e91bdd42-tap8ddbcf74-77', 'timestamp': '2025-12-05T10:18:13.888923', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'tap8ddbcf74-77', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:2e:15:a9', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8ddbcf74-77'}, 'message_id': 'b54ad1ec-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.04323284, 'message_signature': 'd74bc64ac9967e50ef765f31a7cdfe418f343db7f1a8772465061a434251c989'}]}, 'timestamp': '2025-12-05 10:18:13.889661', '_unique_id': '29a6a78ed7624a8a9bddde627c8db14a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.893 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.893 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd197b198-da48-4515-8b5c-7130005f4e41', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 28, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'instance-0000000a-be3af3e0-e77e-4be9-9458-b874e91bdd42-tap8ddbcf74-77', 'timestamp': '2025-12-05T10:18:13.893421', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'tap8ddbcf74-77', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:2e:15:a9', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8ddbcf74-77'}, 'message_id': 'b54b7d0e-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.04323284, 'message_signature': 'b7f079932f295f9887951846dfb3363adec1e90de2fd504b90a27d3f8b8cd17a'}]}, 'timestamp': '2025-12-05 10:18:13.893911', '_unique_id': '20a5fb1a5476482b99926fed8b068dcc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.894 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.897 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.897 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.898 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '94853e36-3415-4a71-9e4e-ab042da3bf3f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 26, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'instance-0000000a-be3af3e0-e77e-4be9-9458-b874e91bdd42-tap8ddbcf74-77', 'timestamp': '2025-12-05T10:18:13.897234', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'tap8ddbcf74-77', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:2e:15:a9', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8ddbcf74-77'}, 'message_id': 'b54c11ec-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.04323284, 'message_signature': '8eb33d135d52f1369007e3ad9d4d26eeee2f3c0d0f25f6aecdaad426f125dce9'}]}, 'timestamp': '2025-12-05 10:18:13.897716', '_unique_id': '032d11a57a5f438dbbafa18d86b9eafe'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.900 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.900 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/network.incoming.bytes volume: 4179 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.902 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7d323a42-ca7b-4795-8f64-7dda709488ac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4179, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'instance-0000000a-be3af3e0-e77e-4be9-9458-b874e91bdd42-tap8ddbcf74-77', 'timestamp': '2025-12-05T10:18:13.900458', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'tap8ddbcf74-77', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:2e:15:a9', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8ddbcf74-77'}, 'message_id': 'b54c93a6-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.04323284, 'message_signature': '70e6d96fb41dae4d16b80187abc54a12f859506472581fba5dd3f36f3c298124'}]}, 'timestamp': '2025-12-05 10:18:13.901070', '_unique_id': 'd79f6b09755f44ef9039df7fb9919ae9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.903 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.925 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/memory.usage volume: 42.98828125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.927 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '57c662c5-20ff-4080-91da-922cae87fb39', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 42.98828125, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'timestamp': '2025-12-05T10:18:13.903752', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1}, 'message_id': 'b5506dfa-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.114863141, 'message_signature': '903adaf61d409023b8aefa6656f16240541e757a28209e3ff131d4c1f8f612ca'}]}, 'timestamp': '2025-12-05 10:18:13.926430', '_unique_id': '12b86c43b8d442aca3a8049e2ab60ae4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.929 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.929 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/cpu volume: 14630000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6b92ca5a-b67f-438f-88c5-3f22bed617b7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 14630000000, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'timestamp': '2025-12-05T10:18:13.929830', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'cpu_number': 1}, 'message_id': 'b5511296-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.114863141, 'message_signature': 'c98636caeffb581fa7c419ae5019271c7f5fcd857114503d56487d6b7f9c9861'}]}, 'timestamp': '2025-12-05 10:18:13.930497', '_unique_id': '79852beb379c4b2bae1c60004ca27b10'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.931 12 ERROR oslo_messaging.notify.messaging 
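The chained traceback above is kombu's standard error remapping: the socket-level ConnectionRefusedError raised inside amqp's transport is re-raised as kombu.exceptions.OperationalError by the _reraise_as_library_errors() context manager, which is why each failed notification logs both exception types joined by "The above exception was the direct cause of the following exception". A minimal sketch of that mapping, assuming a placeholder broker URL rather than this deployment's transport_url:

    from kombu import Connection
    from kombu.exceptions import OperationalError

    conn = Connection("amqp://guest:guest@localhost:5672//")  # placeholder URL
    try:
        # ensure_connection() drives retry_over_time(); once retries are
        # exhausted, _reraise_as_library_errors() wraps the low-level error,
        # producing the "direct cause" chain seen in the traceback above.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print("broker unreachable:", exc)              # [Errno 111] Connection refused
        print("original cause:", repr(exc.__cause__))  # ConnectionRefusedError(111, ...)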
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.933 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.933 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.read.bytes volume: 31431168 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.933 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.read.bytes volume: 12288 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.934 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1a0de4fe-de77-4f56-83c9-447ff0e330bd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 31431168, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vda', 'timestamp': '2025-12-05T10:18:13.933393', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'b55198d8-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '47f27b917e7b54d82aa0db6c0a0e5cc1893af1ec745d13fca3f60c9a094a7751'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 12288, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vdb', 'timestamp': '2025-12-05T10:18:13.933393', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'b551ac42-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '3050e515f57bf89f56d1b839ce91d2f689068962f9ac0fb8f6182ab577c50b4a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 299326, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-sda', 'timestamp': '2025-12-05T10:18:13.933393', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'b551bd2c-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': 'a18cc1d1ef717c632810e12b8aa5910ee57bd3bef03c51cb7f6021f75f29bf9e'}]}, 'timestamp': '2025-12-05 10:18:13.934826', '_unique_id': 'f2ce5bfdc2d3446497c7d827737cdeca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.935 12 ERROR oslo_messaging.notify.messaging 
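Errno 111 means the TCP connect itself was refused, i.e. nothing was accepting connections on the broker port; the failure happens before any AMQP handshake, so every sample polled during the outage fails the same way. A quick probe reproduces the inner exception (host and port are assumptions; this excerpt does not show the configured transport_url):

    import socket

    def probe(host: str, port: int, timeout: float = 3.0) -> None:
        """Attempt a bare TCP connect, mirroring amqp's self.sock.connect(sa)."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"{host}:{port} is accepting TCP connections")
        except ConnectionRefusedError as exc:  # the inner exception in the traceback
            print(f"{host}:{port} refused: {exc}")
        except OSError as exc:                 # timeouts, unreachable networks, etc.
            print(f"{host}:{port} unreachable: {exc}")

    probe("localhost", 5672)  # 5672 is RabbitMQ's default AMQP port (assumed here)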
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.937 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.937 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2b922e5b-a987-4b53-9239-7f5feb73f7c0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'instance-0000000a-be3af3e0-e77e-4be9-9458-b874e91bdd42-tap8ddbcf74-77', 'timestamp': '2025-12-05T10:18:13.937333', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'tap8ddbcf74-77', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:2e:15:a9', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8ddbcf74-77'}, 'message_id': 'b55230e0-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.04323284, 'message_signature': 'edab0920ea2640301dba5712444376206e93ff66c39c2f2c158b275bd476df6f'}]}, 'timestamp': '2025-12-05 10:18:13.937821', '_unique_id': '5f3743b369bd4866810d7075a972148f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.938 12 ERROR oslo_messaging.notify.messaging 
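For context, the failing call path is ceilometer's messaging publisher driving an oslo.messaging Notifier; the 'SAMPLE' priority stamped on each payload corresponds to Notifier.sample(). A minimal sketch of that sending path, assuming a placeholder broker URL and default options rather than this deployment's actual configuration:

    from oslo_config import cfg
    import oslo_messaging

    # Placeholder transport_url; with the broker down, establishing the
    # connection raises the kombu OperationalError captured in the log above.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@localhost:5672/")
    notifier = oslo_messaging.Notifier(
        transport, driver="messagingv2",
        publisher_id="ceilometer.polling", topics=["notifications"])

    # Emits an event with priority SAMPLE, like the telemetry.polling
    # notifications that fail throughout this excerpt.
    notifier.sample({}, "telemetry.polling", {"samples": []})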
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.939 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.940 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.write.latency volume: 18417848314 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.940 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.941 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7f035515-b89f-492f-abff-edbbfd86a8f1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 18417848314, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vda', 'timestamp': '2025-12-05T10:18:13.940074', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'b5529b84-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '7a71fcdda0f60d757d49a7902fe57b12c253f7ced0d137e4ac2ab18c2e167827'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vdb', 'timestamp': '2025-12-05T10:18:13.940074', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'b552ad04-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '7453e5ff4006eff5adae944f51599080c80a485f4f741dca5ba4eb3394c2eec6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-sda', 'timestamp': '2025-12-05T10:18:13.940074', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'b552bf10-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': 'b568f25b7b8f206cd1c7c8ee48bc219a98fed0ee094aa0774929aedfa96aeb6b'}]}, 'timestamp': '2025-12-05 10:18:13.941428', '_unique_id': 'bd4afbf6c4634d928c5c0b143312c3c7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.942 12 ERROR oslo_messaging.notify.messaging 
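The retry_over_time() frame in every traceback is kombu backing off between connection attempts before giving up and letting the notification fail; retry count and backoff intervals are tunable on ensure_connection(). An illustrative sketch with assumed parameters, not ceilometer's actual settings:

    from kombu import Connection
    from kombu.exceptions import OperationalError

    def on_error(exc: Exception, interval: float) -> None:
        # retry_over_time() invokes this errback after each failed attempt.
        print(f"connect failed ({exc}); retrying in {interval}s")

    conn = Connection("amqp://guest:guest@localhost:5672//")  # placeholder URL
    try:
        conn.ensure_connection(errback=on_error, max_retries=3,
                               interval_start=1, interval_step=2, interval_max=30)
    except OperationalError:
        print("gave up after retries; the sample would be dropped, as above")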
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.943 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.943 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e3b84440-400f-4d8e-8f33-4de04f77c41d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'instance-0000000a-be3af3e0-e77e-4be9-9458-b874e91bdd42-tap8ddbcf74-77', 'timestamp': '2025-12-05T10:18:13.943881', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'tap8ddbcf74-77', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:2e:15:a9', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8ddbcf74-77'}, 'message_id': 'b5533292-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.04323284, 'message_signature': 'b1c3f94ad15ae808e28ae2be84d359b4a4d710007920ed6015d8b44674eeceb5'}]}, 'timestamp': '2025-12-05 10:18:13.944421', '_unique_id': 'e3b756ad305a42ba9c4fbb3b14e2cf57'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.945 12 ERROR oslo_messaging.notify.messaging 
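Taken together, the chained pair above (ConnectionRefusedError listed as the direct cause of kombu.exceptions.OperationalError) means the TCP connect to the message broker was actively refused, i.e. nothing is listening on the target port, as opposed to a connect timeout. A quick probe to tell the two apart; the broker host and port are not shown in this log, so 127.0.0.1:5672 (the AMQP default) is an assumption.

import errno
import socket

def probe(host="127.0.0.1", port=5672):
    # Distinguishes "refused" (no listener) from "timed out" (unreachable
    # or firewalled); the log's [Errno 111] corresponds to the first case.
    try:
        with socket.create_connection((host, port), timeout=3):
            return "listening"
    except ConnectionRefusedError:
        return f"refused (errno {errno.ECONNREFUSED}): nothing bound on {host}:{port}"
    except socket.timeout:
        return "timed out: host unreachable or packets dropped"

print(probe())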
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.946 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.946 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.947 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-VolumesBackupsTest-instance-1667374142>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-VolumesBackupsTest-instance-1667374142>]
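The manager reacts to a PollsterPermanentError by excluding the listed resources from future polling cycles for that pollster, which is what "Prevent pollster ... from polling ... anymore!" records. A rough sketch of that blacklisting behaviour; the class layout and attribute names here are illustrative, not ceilometer's actual implementation.

class PollsterPermanentError(Exception):
    # The log line shows the exception carrying the failing resource list.
    def __init__(self, resources):
        super().__init__(resources)
        self.fail = resources

class ManagerSketch:
    def __init__(self):
        self._blacklist = set()

    def poll(self, pollster, resources):
        # Skip anything this pollster has already permanently failed on.
        candidates = [r for r in resources if r not in self._blacklist]
        try:
            return pollster(candidates)
        except PollsterPermanentError as err:
            self._blacklist.update(err.fail)  # stop retrying these resources
            return []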
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.947 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.947 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.949 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9be04448-2c50-4a57-81e5-da45827c8db6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3390, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'instance-0000000a-be3af3e0-e77e-4be9-9458-b874e91bdd42-tap8ddbcf74-77', 'timestamp': '2025-12-05T10:18:13.947525', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'tap8ddbcf74-77', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:2e:15:a9', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8ddbcf74-77'}, 'message_id': 'b553c194-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.04323284, 'message_signature': '21bbe81142aca51a079755c738699c1ab544d797bbde1f73ce0e3659a05a28dc'}]}, 'timestamp': '2025-12-05 10:18:13.948203', '_unique_id': 'fd7759fadb2d4967b25f72002e19a3ba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
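Each failed Payload= dict above is an oslo.messaging notification envelope whose 'payload' carries a 'samples' list; counter_type distinguishes 'delta' meters (per-interval values, like network.outgoing.bytes.delta) from 'cumulative' ones (running totals, like network.outgoing.bytes here). A minimal reader for that structure, assuming only the keys visible in the logged payloads.

def summarize(envelope):
    # Assumes the envelope shape shown above: envelope['payload']['samples']
    # is a list of sample dicts with counter_* and resource_id keys.
    lines = []
    for sample in envelope["payload"]["samples"]:
        lines.append(
            "{resource_id}: {counter_name} = {counter_volume} "
            "{counter_unit} ({counter_type})".format(**sample)
        )
    return lines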
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.950 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.950 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.952 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd31ab888-cce9-4c82-acb6-5c73bb27fe94', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'instance-0000000a-be3af3e0-e77e-4be9-9458-b874e91bdd42-tap8ddbcf74-77', 'timestamp': '2025-12-05T10:18:13.950648', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'tap8ddbcf74-77', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:2e:15:a9', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8ddbcf74-77'}, 'message_id': 'b5543840-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.04323284, 'message_signature': 'f234cb35ff62a053d9c01c5a09ffa8aaeb6fd3f29c784eef16ba4debf7ce485f'}]}, 'timestamp': '2025-12-05 10:18:13.951184', '_unique_id': '09913abc227d43ab8e93d12f53d1649b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
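The retry_over_time frames in the traceback retained above show that kombu wraps each connection attempt in a retry helper before the error is allowed to escape. The general shape of such a helper, greatly simplified from kombu's real signature.

import time

def retry_over_time(fun, catch, max_retries=3, interval=1.0):
    # Call fun(); on the given exception type(s), sleep and retry,
    # re-raising once the retry budget is exhausted.
    for attempt in range(max_retries + 1):
        try:
            return fun()
        except catch:
            if attempt == max_retries:
                raise
            time.sleep(interval)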
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.953 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.953 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.955 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a51fd399-1e93-4c16-bdba-3f00c280a504', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'instance-0000000a-be3af3e0-e77e-4be9-9458-b874e91bdd42-tap8ddbcf74-77', 'timestamp': '2025-12-05T10:18:13.953461', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'tap8ddbcf74-77', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:2e:15:a9', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap8ddbcf74-77'}, 'message_id': 'b554a5dc-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.04323284, 'message_signature': '2e57a635fb0fb9037083f69c772d9e4eefb2d49265cfc25334be80373779836b'}]}, 'timestamp': '2025-12-05 10:18:13.954057', '_unique_id': '62854f6de6de464dabb065f946a09880'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.956 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.956 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.write.requests volume: 301 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.956 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.957 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '05b1e7c3-7946-48f9-82f1-f0368ab6724a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 301, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vda', 'timestamp': '2025-12-05T10:18:13.956233', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'b55511c0-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': 'aa91f3e45e5d6caa8a2b664885d647fb318e5e9ac5c2571878f1589c4ed3e057'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vdb', 'timestamp': '2025-12-05T10:18:13.956233', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'b5552386-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': 'c32abbd1b1af907b0bd6c0f39bb96b23ca6d880719c3bcd7d8906e2f15764f77'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-sda', 'timestamp': '2025-12-05T10:18:13.956233', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'b55533ee-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '7b403c3463c8c36d8c4516bf7b17601a654672d8a15b22a36436d5568c637fbc'}]}, 'timestamp': '2025-12-05 10:18:13.957522', '_unique_id': '963318a380484bc493de66ec2b04fc67'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.958 12 ERROR oslo_messaging.notify.messaging 
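[Editor's note, not part of the journal] The bottom of the traceback above is kombu's _reraise_as_library_errors context manager converting the socket-level ConnectionRefusedError into kombu.exceptions.OperationalError via "raise ConnectionError(str(exc)) from exc". A minimal Python sketch reproducing that conversion against a broker that is not listening; the amqp:// URL is an assumption, since the journal does not show the configured transport_url:

    import kombu
    from kombu.exceptions import OperationalError

    # ensure_connection() retries up to max_retries, then re-raises any
    # recoverable transport error as kombu.exceptions.OperationalError.
    conn = kombu.Connection("amqp://guest:guest@127.0.0.1:5672//")  # assumed URL
    try:
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        # exc.__cause__ is the original ConnectionRefusedError ([Errno 111]),
        # matching the chained "direct cause" pair logged above.
        print(type(exc).__name__, "<-", type(exc.__cause__).__name__, "-", exc)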
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.959 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.959 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.read.latency volume: 1606483962 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.960 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.read.latency volume: 3389955 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.960 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.read.latency volume: 110538811 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0febd3ab-d531-45bc-8c5b-a33ae9728c64', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1606483962, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vda', 'timestamp': '2025-12-05T10:18:13.959704', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'b555996a-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '4e6d88fdc84050ba76eb8b9aa691fd9df97b37cceb81b8c7b1d97b1f7c2f2dff'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3389955, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vdb', 'timestamp': '2025-12-05T10:18:13.959704', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'b555af9a-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': 'a3f622de7ba41943173ca802f167584469f0d0a56c3e2e86403f8cc97350753b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 110538811, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-sda', 'timestamp': '2025-12-05T10:18:13.959704', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'b555c14c-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '65c1054d383aac6bec608439a9c6631f7557443ff231ed9363fa267b4aac6351'}]}, 'timestamp': '2025-12-05 10:18:13.961214', '_unique_id': 'c3c2b61006c8498aa2d0eaec4b7ff925'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.962 12 ERROR oslo_messaging.notify.messaging 
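[Editor's note, not part of the journal] The upper half of each traceback is oslo.messaging's notification send path: Notifier -> transport._send_notification -> amqpdriver._send -> connection pool -> impl_rabbit Connection.__init__, which eagerly calls ensure_connection(). A sketch of the equivalent caller, assuming a RabbitMQ transport_url (the actual URL is not visible in the journal); note that the "messaging" notify driver catches the failure and logs "Could not send notification" instead of raising, which is why the agent continues polling after each ERROR block:

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@127.0.0.1:5672/")  # assumed URL
    notifier = oslo_messaging.Notifier(
        transport, publisher_id="ceilometer.polling",
        driver="messaging", topics=["notifications"],
        retry=0)  # fail fast instead of retrying the connection forever
    # priority 'SAMPLE' in the logged payload corresponds to Notifier.sample():
    notifier.sample({}, "telemetry.polling", {"samples": []})
    # With nothing listening on the broker port, this emits the same
    # "Could not send notification to notifications" ERROR seen above.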
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.963 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.963 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.964 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.964 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fe7cfde2-83d3-4f79-9b53-9ad10d95d1dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vda', 'timestamp': '2025-12-05T10:18:13.963487', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'b5562d6c-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.051880666, 'message_signature': 'dd4fdf7ad5ec3634eaf6e32e40ec8d663c0b070b9b887192f597fabc6a57905f'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vdb', 'timestamp': '2025-12-05T10:18:13.963487', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'b5564676-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.051880666, 'message_signature': 'cf1abd66cd2ccecd78777ffe0054249bac5ca55aa4c46e4697cd9e19c26cfe04'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-sda', 'timestamp': '2025-12-05T10:18:13.963487', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'b5565756-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.051880666, 'message_signature': 'cd54ccc4bf7a128e22c37b652c986811396f66fb19e308b7117222d6fd1f9db0'}]}, 'timestamp': '2025-12-05 10:18:13.965030', '_unique_id': '02aa234e8e6f4d2290312f56ced960e9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.966 12 ERROR oslo_messaging.notify.messaging 
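[Editor's note, not part of the journal] [Errno 111] means the TCP connection was actively refused, i.e. nothing is accepting connections on the broker port at that moment (as opposed to a timeout or unreachable host). A plain socket probe distinguishes the cases; host and port here are assumptions, since the journal does not record the broker endpoint:

    import socket

    def broker_reachable(host="127.0.0.1", port=5672, timeout=3.0):
        """Return True if something accepts TCP connections on host:port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:
            # ECONNREFUSED (errno 111) when no listener; TimeoutError or
            # EHOSTUNREACH point at network/firewall problems instead.
            print(f"{host}:{port} -> {exc}")
            return False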
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.967 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.968 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.968 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.969 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6a115e31-46d7-4f79-b3dd-4a355c168f90', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vda', 'timestamp': '2025-12-05T10:18:13.967997', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'b556e00e-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.051880666, 'message_signature': '269e13839d65a71efbe8bca9a7f6ae66d83d3a5bdf0e5608045aec5525791747'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vdb', 'timestamp': '2025-12-05T10:18:13.967997', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'b556f486-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.051880666, 'message_signature': '11e68d2864661550b43b91cd452b645865c39a9909e0d4ce18439db208e6cd5f'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-sda', 'timestamp': '2025-12-05T10:18:13.967997', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'b5570aa2-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13012.051880666, 'message_signature': '72655467e8d6edf5e8c96543ddc62c1b785384068e15a307d43d32c60712640b'}]}, 'timestamp': '2025-12-05 10:18:13.969628', '_unique_id': '6813c1e217094365bd196d5d8007a889'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.970 12 ERROR oslo_messaging.notify.messaging 
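The traceback above shows kombu translating the socket-level ConnectionRefusedError into kombu.exceptions.OperationalError inside _reraise_as_library_errors. A minimal sketch that reproduces the same chained pair, assuming a placeholder broker URL (the agent's real transport_url is not shown in this log):

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Placeholder URL: point it at a host/port with no AMQP listener to get
    # the same [Errno 111] the agent keeps hitting.
    conn = Connection('amqp://guest:guest@localhost:5672//', connect_timeout=2)
    try:
        # ensure_connection() drives retry_over_time() and re-raises any
        # ConnectionRefusedError as kombu.exceptions.OperationalError.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print('broker unreachable:', exc)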
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.972 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.972 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.972 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [<NovaLikeServer: tempest-VolumesBackupsTest-instance-1667374142>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-VolumesBackupsTest-instance-1667374142>]
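The DEBUG/ERROR pair above is the permanent-failure path: the libvirt inspector cannot supply per-device IOPS, so the pollster raises ceilometer.polling.plugin_base.PollsterPermanentError and the manager stops asking it about those resources on that source. A simplified sketch of that blacklist pattern (everything except the exception name is illustrative, not ceilometer's actual code):

    class PollsterPermanentError(Exception):
        """Signals the pollster can never succeed for these resources."""
        def __init__(self, fail_res_list):
            super().__init__(fail_res_list)
            self.fail_res_list = fail_res_list

    blacklist = set()   # (pollster, resource) pairs to skip from now on

    def poll_once(pollster, resources, get_samples):
        live = [r for r in resources if (pollster, r) not in blacklist]
        try:
            return list(get_samples(live))
        except PollsterPermanentError as err:
            # "Prevent pollster ... from polling ... anymore!"
            blacklist.update((pollster, r) for r in err.fail_res_list)
            return []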
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.973 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.973 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.read.requests volume: 1151 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.973 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.read.requests volume: 3 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.973 12 DEBUG ceilometer.compute.pollsters [-] be3af3e0-e77e-4be9-9458-b874e91bdd42/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '19428e89-e582-4d22-8f50-abd7ca78be69', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1151, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vda', 'timestamp': '2025-12-05T10:18:13.973226', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'b557a9a8-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '715ae2dcf2d9827cab7a0adccf8490d1ca64016e791d895c201af80b805f2386'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 3, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vdb', 'timestamp': '2025-12-05T10:18:13.973226', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'b557b5e2-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '18e2b38b92ca642c2b3f8a258bff64a4becd4557a919e179a90291d0d2eecebb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 120, 'user_id': '0b795e7702e342d9821a3667644be5b0', 'user_name': None, 'project_id': '3554a89b305c449f9fd292eca5647512', 'project_name': None, 'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-sda', 'timestamp': '2025-12-05T10:18:13.973226', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-1667374142', 
'name': 'instance-0000000a', 'instance_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'instance_type': 'm1.nano', 'host': '97e722c62cc68f9e4d4d2f3e563a6c7c1ced95da57a796d7a2f89adb', 'instance_host': 'np0005546420.localdomain', 'flavor': {'id': '445199a6-1f73-405e-82f4-8bd8c4bb34c6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}, 'image_ref': '3647d20f-5e09-41b2-a6f3-f320b9e4e343', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'b557c276-d1c3-11f0-8023-fa163eed1bd3', 'monotonic_time': 13011.967266124, 'message_signature': '29473304a296cbef71f88e1723b43e1501f9c62c0cffec78cfa3d97ddecdfffd'}]}, 'timestamp': '2025-12-05 10:18:13.974238', '_unique_id': '593162205b084e78b46c0c8a1a3b7fa8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
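Although the notification above was dropped, its payload is logged in full, so the per-device samples can still be recovered. A minimal sketch walking a hand-copied subset of that data once it is available as a Python dict:

    payload = {'samples': [
        {'counter_name': 'disk.device.read.requests',
         'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vda',
         'counter_volume': 1151, 'counter_unit': 'request'},
        {'counter_name': 'disk.device.read.requests',
         'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-vdb',
         'counter_volume': 3, 'counter_unit': 'request'},
        {'counter_name': 'disk.device.read.requests',
         'resource_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42-sda',
         'counter_volume': 120, 'counter_unit': 'request'},
    ]}
    for s in payload['samples']:
        print(f"{s['resource_id']}: {s['counter_volume']} {s['counter_unit']}")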
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     yield
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging 
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 05 10:18:13 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:18:13.974 12 ERROR oslo_messaging.notify.messaging 
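[Errno 111] means the TCP handshake itself was refused: nothing is listening on the broker port, as opposed to an AMQP authentication or vhost failure. A quick hedged probe (host and port here are placeholders, not values taken from this log):

    import socket

    def port_open(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except (ConnectionRefusedError, OSError):
            return False   # ECONNREFUSED == the agent's [Errno 111]

    print(port_open('localhost', 5672))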
Dec 05 10:18:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:14.282 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:18:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:18:14 np0005546420.localdomain ceph-mon[298353]: pgmap v656: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 71 KiB/s wr, 36 op/s
Dec 05 10:18:14 np0005546420.localdomain ceph-mon[298353]: osdmap e251: 6 total, 6 up, 6 in

Dec 05 10:18:14 np0005546420.localdomain podman[327654]: 2025-12-05 10:18:14.764516887 +0000 UTC m=+0.093468006 container create dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-936d20b6-4037-4d6d-940d-7d41a03115b3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 10:18:14 np0005546420.localdomain systemd[1]: Started libpod-conmon-dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096.scope.
Dec 05 10:18:14 np0005546420.localdomain podman[327654]: 2025-12-05 10:18:14.718950321 +0000 UTC m=+0.047901470 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:18:14 np0005546420.localdomain systemd[1]: tmp-crun.WpOEAu.mount: Deactivated successfully.
Dec 05 10:18:14 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:18:14 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e51660462ff53b2b29be67e373a9d944f62adbf427ec2a0a5a66e7d30508d27/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:14 np0005546420.localdomain podman[327654]: 2025-12-05 10:18:14.865551336 +0000 UTC m=+0.194502455 container init dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-936d20b6-4037-4d6d-940d-7d41a03115b3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:18:14 np0005546420.localdomain podman[327654]: 2025-12-05 10:18:14.875814654 +0000 UTC m=+0.204765773 container start dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-936d20b6-4037-4d6d-940d-7d41a03115b3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:18:14 np0005546420.localdomain dnsmasq[327672]: started, version 2.85 cachesize 150
Dec 05 10:18:14 np0005546420.localdomain dnsmasq[327672]: DNS service limited to local subnets
Dec 05 10:18:14 np0005546420.localdomain dnsmasq[327672]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:18:14 np0005546420.localdomain dnsmasq[327672]: warning: no upstream servers configured
Dec 05 10:18:14 np0005546420.localdomain dnsmasq-dhcp[327672]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:18:14 np0005546420.localdomain dnsmasq[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/addn_hosts - 0 addresses
Dec 05 10:18:14 np0005546420.localdomain dnsmasq-dhcp[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/host
Dec 05 10:18:14 np0005546420.localdomain dnsmasq-dhcp[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/opts
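The three files dnsmasq re-reads here are the Neutron DHCP agent's per-network configuration: addn_hosts (extra DNS names), host (static DHCP leases) and opts (DHCP options). An illustrative pair of entries in the standard dnsmasq --dhcp-hostsfile / --dhcp-optsfile formats; the MAC matches the port logged below, while the hostname, addresses and tag are made up:

    # host file: one static lease (MAC,hostname,IP)
    fa:16:3e:ac:de:53,host-10-100-0-2.openstacklocal,10.100.0.2
    # opts file: per-subnet option (tag,option,value)
    tag:subnet-8df18f7d-9b0a-4660-823f-fa3294856a13,option:router,10.100.0.1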
Dec 05 10:18:14 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:14.952 262769 INFO neutron.agent.dhcp.agent [None req-cbbad1a9-4f59-4280-bb91-d574e6ac6eb6 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:18:14Z, description=, device_id=dc40d8ce-e305-494c-ab92-e75552d59e7b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e327c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e32eb0>], id=9e1a553e-14e2-44c7-a5f7-a8a7114e8ad8, ip_allocation=immediate, mac_address=fa:16:3e:ac:de:53, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:18:09Z, description=, dns_domain=, id=936d20b6-4037-4d6d-940d-7d41a03115b3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-998438111-network, port_security_enabled=True, project_id=af5a639003e24f09b8489ea02f308e0f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7951, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3526, status=ACTIVE, subnets=['8df18f7d-9b0a-4660-823f-fa3294856a13'], tags=[], tenant_id=af5a639003e24f09b8489ea02f308e0f, updated_at=2025-12-05T10:18:11Z, vlan_transparent=None, network_id=936d20b6-4037-4d6d-940d-7d41a03115b3, port_security_enabled=False, project_id=af5a639003e24f09b8489ea02f308e0f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3534, status=DOWN, tags=[], tenant_id=af5a639003e24f09b8489ea02f308e0f, updated_at=2025-12-05T10:18:14Z on network 936d20b6-4037-4d6d-940d-7d41a03115b3
Dec 05 10:18:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:15.038 262769 INFO neutron.agent.dhcp.agent [None req-63b9deda-c3f0-45d6-a228-6d42d1425cab - - - - - -] DHCP configuration for ports {'7e037ca2-3071-49c9-83d0-8ffc4dcf19b2'} is completed
Dec 05 10:18:15 np0005546420.localdomain dnsmasq[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/addn_hosts - 1 addresses
Dec 05 10:18:15 np0005546420.localdomain dnsmasq-dhcp[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/host
Dec 05 10:18:15 np0005546420.localdomain dnsmasq-dhcp[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/opts
Dec 05 10:18:15 np0005546420.localdomain podman[327691]: 2025-12-05 10:18:15.195249955 +0000 UTC m=+0.063126189 container kill dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-936d20b6-4037-4d6d-940d-7d41a03115b3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:18:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:15.482 262769 INFO neutron.agent.dhcp.agent [None req-5b13921b-feca-4d62-b3f3-0d95b1337e45 - - - - - -] DHCP configuration for ports {'9e1a553e-14e2-44c7-a5f7-a8a7114e8ad8'} is completed
Dec 05 10:18:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d8bed738-7e8a-4e2b-9281-ea986a53728d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:18:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d8bed738-7e8a-4e2b-9281-ea986a53728d", "format": "json"}]: dispatch
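The monitor is dispatching manila's share lifecycle as "fs subvolume ..." commands. A hedged sketch of the same create/getpath/rm round trip driven by hand through the ceph CLI (the subvolume name is a placeholder, and the caller needs caps equivalent to client.openstack):

    import subprocess

    def ceph(*args):
        return subprocess.run(('ceph',) + args, check=True,
                              capture_output=True, text=True).stdout.strip()

    ceph('fs', 'subvolume', 'create', 'cephfs', 'demo-sub',
         '--size', str(1 << 30), '--namespace-isolated', '--mode', '0755')
    print(ceph('fs', 'subvolume', 'getpath', 'cephfs', 'demo-sub'))
    ceph('fs', 'subvolume', 'rm', 'cephfs', 'demo-sub', '--force')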
Dec 05 10:18:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e252 e252: 6 total, 6 up, 6 in
Dec 05 10:18:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:15.715 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:15 np0005546420.localdomain sudo[327713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:18:15 np0005546420.localdomain sudo[327713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:18:15 np0005546420.localdomain sudo[327713]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:15 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:15.855 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:18:14Z, description=, device_id=dc40d8ce-e305-494c-ab92-e75552d59e7b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99db8d30>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99db84f0>], id=9e1a553e-14e2-44c7-a5f7-a8a7114e8ad8, ip_allocation=immediate, mac_address=fa:16:3e:ac:de:53, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:18:09Z, description=, dns_domain=, id=936d20b6-4037-4d6d-940d-7d41a03115b3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-998438111-network, port_security_enabled=True, project_id=af5a639003e24f09b8489ea02f308e0f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7951, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3526, status=ACTIVE, subnets=['8df18f7d-9b0a-4660-823f-fa3294856a13'], tags=[], tenant_id=af5a639003e24f09b8489ea02f308e0f, updated_at=2025-12-05T10:18:11Z, vlan_transparent=None, network_id=936d20b6-4037-4d6d-940d-7d41a03115b3, port_security_enabled=False, project_id=af5a639003e24f09b8489ea02f308e0f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3534, status=DOWN, tags=[], tenant_id=af5a639003e24f09b8489ea02f308e0f, updated_at=2025-12-05T10:18:14Z on network 936d20b6-4037-4d6d-940d-7d41a03115b3
Dec 05 10:18:15 np0005546420.localdomain sudo[327731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:18:15 np0005546420.localdomain sudo[327731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:18:16 np0005546420.localdomain dnsmasq[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/addn_hosts - 1 addresses
Dec 05 10:18:16 np0005546420.localdomain dnsmasq-dhcp[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/host
Dec 05 10:18:16 np0005546420.localdomain podman[327766]: 2025-12-05 10:18:16.105172896 +0000 UTC m=+0.067503325 container kill dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-936d20b6-4037-4d6d-940d-7d41a03115b3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 10:18:16 np0005546420.localdomain dnsmasq-dhcp[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/opts
Dec 05 10:18:16 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:16.330 262769 INFO neutron.agent.dhcp.agent [None req-2911b42f-da10-42c1-adc3-342fbf7d42dd - - - - - -] DHCP configuration for ports {'9e1a553e-14e2-44c7-a5f7-a8a7114e8ad8'} is completed
Dec 05 10:18:16 np0005546420.localdomain ceph-mon[298353]: pgmap v658: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 64 KiB/s wr, 33 op/s
Dec 05 10:18:16 np0005546420.localdomain ceph-mon[298353]: osdmap e252: 6 total, 6 up, 6 in
Dec 05 10:18:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:18:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:18:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:18:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:18:16 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:18:16 np0005546420.localdomain sudo[327731]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:18:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:18:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:18:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157926 "" "Go-http-client/1.1"
Dec 05 10:18:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:18:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19712 "" "Go-http-client/1.1"
Dec 05 10:18:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:18:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3226208543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:17 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:18:17 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:18:17 np0005546420.localdomain sudo[327815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:18:17 np0005546420.localdomain sudo[327815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:18:17 np0005546420.localdomain sudo[327815]: pam_unix(sudo:session): session closed for user root
Dec 05 10:18:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:18.227 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:18 np0005546420.localdomain ceph-mon[298353]: pgmap v660: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 50 KiB/s wr, 67 op/s
Dec 05 10:18:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:18:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:18:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e253 e253: 6 total, 6 up, 6 in
Dec 05 10:18:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:18:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:18:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:18:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:18:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:18:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:18:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:18:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:18:18 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 10:18:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:18:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:18:18 np0005546420.localdomain openstack_network_exporter[242579]: 
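These exporter errors are expected on a node that runs neither ovn-northd nor a local ovsdb-server: appctl-style tooling locates a daemon through its <name>.<pid>.ctl control socket under the run directory, and none exist here. A minimal sketch of that existence check (the path is a typical default, not confirmed by this log):

    import glob

    hits = glob.glob('/var/run/ovn/ovn-northd.*.ctl')
    print(hits or 'no control socket files found for ovn-northd')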
Dec 05 10:18:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d8bed738-7e8a-4e2b-9281-ea986a53728d", "format": "json"}]: dispatch
Dec 05 10:18:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d8bed738-7e8a-4e2b-9281-ea986a53728d", "force": true, "format": "json"}]: dispatch
Dec 05 10:18:19 np0005546420.localdomain ceph-mon[298353]: osdmap e253: 6 total, 6 up, 6 in
Dec 05 10:18:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e254 e254: 6 total, 6 up, 6 in
Dec 05 10:18:19 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:18:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:18:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 1200.0 total, 600.0 interval
                                                           Cumulative writes: 4572 writes, 35K keys, 4572 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.05 MB/s
                                                           Cumulative WAL: 4572 writes, 4572 syncs, 1.00 writes per sync, written: 0.06 GB, 0.05 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 2327 writes, 11K keys, 2327 commit groups, 1.0 writes per commit group, ingest: 15.69 MB, 0.03 MB/s
                                                           Interval WAL: 2327 writes, 2327 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    134.3      0.32              0.10        17    0.019       0      0       0.0       0.0
                                                             L6      1/0   16.98 MB   0.0      0.3     0.0      0.2       0.3      0.0       0.0   6.4    181.3    165.6      1.64              0.69        16    0.102    202K   8251       0.0       0.0
                                                            Sum      1/0   16.98 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   7.4    151.9    160.6      1.96              0.79        33    0.059    202K   8251       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0  13.6    156.0    157.6      0.80              0.36        14    0.057     94K   3735       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.3      0.0       0.0   0.0    181.3    165.6      1.64              0.69        16    0.102    202K   8251       0.0       0.0
                                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    136.1      0.31              0.10        16    0.020       0      0       0.0       0.0
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 1200.0 total, 600.0 interval
                                                           Flush(GB): cumulative 0.042, interval 0.009
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.31 GB write, 0.26 MB/s write, 0.29 GB read, 0.25 MB/s read, 2.0 seconds
                                                           Interval compaction: 0.12 GB write, 0.21 MB/s write, 0.12 GB read, 0.21 MB/s read, 0.8 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x557fb868b350#2 capacity: 304.00 MB usage: 28.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000363 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(1475,26.93 MB,8.8581%) FilterBlock(33,583.86 KB,0.187558%) IndexBlock(33,752.11 KB,0.241606%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
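The DB Stats block above is internally consistent, and its rates can be re-derived from the totals: 0.06 GB ingested over the 1200 s uptime is the reported 0.05 MB/s.

    # Re-deriving the cumulative write rate from the DB Stats block above.
    ingest_gb, uptime_s = 0.06, 1200.0
    print(f'{ingest_gb * 1024 / uptime_s:.2f} MB/s')   # -> 0.05 MB/s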
Dec 05 10:18:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:20 np0005546420.localdomain ceph-mon[298353]: pgmap v662: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 49 KiB/s wr, 66 op/s
Dec 05 10:18:20 np0005546420.localdomain ceph-mon[298353]: osdmap e254: 6 total, 6 up, 6 in
Dec 05 10:18:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:18:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:18:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 05 10:18:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 05 10:18:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:18:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:20.720 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e255 e255: 6 total, 6 up, 6 in
Dec 05 10:18:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:18:21 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:18:21 np0005546420.localdomain podman[327835]: 2025-12-05 10:18:21.497422823 +0000 UTC m=+0.070620571 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:18:21 np0005546420.localdomain podman[327834]: 2025-12-05 10:18:21.514004075 +0000 UTC m=+0.086104559 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Dec 05 10:18:21 np0005546420.localdomain podman[327834]: 2025-12-05 10:18:21.526317555 +0000 UTC m=+0.098418059 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 10:18:21 np0005546420.localdomain podman[327835]: 2025-12-05 10:18:21.537338435 +0000 UTC m=+0.110536203 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
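
[Annotation] The podman records above are one health-check cycle per container: a transient systemd unit runs `podman healthcheck run`, the check inside the container succeeds (`health_status=healthy`), and the exec session that ran it exits (`exec_died`). A minimal sketch of tailing these events from the journal with the python3-systemd bindings; the regexes only pull out the container id, name, and health status, and the label parsing is deliberately naive (labels such as config_data contain commas of their own):

    import re
    from systemd import journal  # python3-systemd bindings

    EVENT_RE = re.compile(r"container (health_status|exec_died) ([0-9a-f]{64})")
    NAME_RE = re.compile(r"\bname=([^,)]+)")          # first name= label wins
    HEALTH_RE = re.compile(r"\bhealth_status=([^,)]+)")

    j = journal.Reader()
    j.add_match(SYSLOG_IDENTIFIER="podman")  # only podman's journal records
    j.seek_tail()
    j.get_previous()
    while True:
        j.wait()  # blocks until new journal entries arrive
        for entry in j:
            msg = entry.get("MESSAGE", "")
            m = EVENT_RE.search(msg)
            if not m:
                continue
            name = NAME_RE.search(msg)
            health = HEALTH_RE.search(msg)
            print(m.group(1), m.group(2)[:12],
                  name.group(1) if name else "?",
                  health.group(1) if health else "-")
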
Dec 05 10:18:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:21.539 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:21.540 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
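
[Annotation] These two lines are oslo.service's periodic-task machinery at work: run_periodic_tasks fires ComputeManager._reclaim_queued_deletes, which returns immediately because reclaim_instance_interval is not set to a positive value. A minimal sketch of the same decorator pattern, assuming nothing beyond oslo.service and oslo.config (the Manager class and its body are illustrative, not nova's code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # Mirrors the guard logged above: a non-positive interval
            # makes the task a no-op on every tick.
            if CONF.reclaim_instance_interval <= 0:
                return  # "CONF.reclaim_instance_interval <= 0, skipping..."
            # ... reclaim SOFT_DELETED instances here ...

    # Driven from a timer loop, e.g.: Manager().run_periodic_tasks(context=None)
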
Dec 05 10:18:21 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:18:21 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:18:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:18:21 np0005546420.localdomain ceph-mon[298353]: osdmap e255: 6 total, 6 up, 6 in
Dec 05 10:18:22 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e256 e256: 6 total, 6 up, 6 in
Dec 05 10:18:22 np0005546420.localdomain ceph-mon[298353]: pgmap v665: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 132 KiB/s wr, 124 op/s
Dec 05 10:18:22 np0005546420.localdomain ceph-mon[298353]: osdmap e256: 6 total, 6 up, 6 in
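
[Annotation] The `fs subvolume evict` dispatch above (and the `authorize`/`deauthorize` pair for client.alice_bob a few seconds later) is manila's CephFS driver managing share access through the mgr volumes module. The same operations can be reproduced from the ceph CLI; a sketch wrapping them in subprocess, with the vol_name/sub_name/auth_id values taken from the log:

    import subprocess

    VOL = "cephfs"
    SUB = "66fc337c-1267-4a35-81f5-115366d33363"

    def ceph_subvolume(*args):
        # Runs `ceph fs subvolume <args...>`; authorize prints the cephx key.
        return subprocess.check_output(
            ["ceph", "fs", "subvolume", *args], text=True
        ).strip()

    # Grant rw access, then revoke it and evict any live client sessions,
    # mirroring the mon dispatches recorded above.
    key = ceph_subvolume("authorize", VOL, SUB, "alice_bob",
                         "--access_level", "rw")
    ceph_subvolume("deauthorize", VOL, SUB, "alice_bob")
    ceph_subvolume("evict", VOL, SUB, "alice_bob")
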
Dec 05 10:18:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:22.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:22.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:18:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:22.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:18:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:22.976 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 10:18:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:22.977 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquired lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 10:18:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:22.978 281103 DEBUG nova.network.neutron [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 10:18:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:22.978 281103 DEBUG nova.objects.instance [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lazy-loading 'info_cache' on Instance uuid be3af3e0-e77e-4be9-9458-b874e91bdd42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:18:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:23.230 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:23 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:23Z|00373|binding|INFO|Releasing lport 1ea8845d-38d2-4efb-91a5-56a9b0cf4fb7 from this chassis (sb_readonly=0)
Dec 05 10:18:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:23.667 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:18:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:18:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:18:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:23.685 281103 DEBUG nova.network.neutron [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Updating instance_info_cache with network_info: [{"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:18:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:23.705 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Releasing lock "refresh_cache-be3af3e0-e77e-4be9-9458-b874e91bdd42" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 10:18:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:23.706 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
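
[Annotation] The Acquiring/Acquired/Releasing triple around `refresh_cache-<uuid>` in the heal sequence above is oslo.concurrency's lock logging; the lock serializes concurrent refreshes of one instance's network info cache. A minimal sketch of the same pattern (the guarded function is a stand-in for nova's cache-heal body; the semaphore is process-local unless an external lock path is configured):

    from oslo_concurrency import lockutils

    uuid = "be3af3e0-e77e-4be9-9458-b874e91bdd42"

    def refresh_instance_network_cache(instance_uuid):
        ...  # stand-in for _heal_instance_info_cache's per-instance work

    # Emits the same Acquiring/Acquired/Releasing DEBUG lines seen above.
    with lockutils.lock(f"refresh_cache-{uuid}"):
        refresh_instance_network_cache(uuid)
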
Dec 05 10:18:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e257 e257: 6 total, 6 up, 6 in
Dec 05 10:18:24 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:18:24 np0005546420.localdomain ceph-mon[298353]: pgmap v667: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 101 KiB/s wr, 75 op/s
Dec 05 10:18:24 np0005546420.localdomain ceph-mon[298353]: osdmap e257: 6 total, 6 up, 6 in
Dec 05 10:18:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:24.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.587 281103 DEBUG oslo_concurrency.lockutils [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.587 281103 DEBUG oslo_concurrency.lockutils [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.606 281103 INFO nova.compute.manager [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Detaching volume c4d37f21-a692-4fd8-a5cf-3ec6f97372f0
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.649 281103 INFO nova.virt.block_device [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Attempting to driver detach volume c4d37f21-a692-4fd8-a5cf-3ec6f97372f0 from mountpoint /dev/vdb
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.659 281103 DEBUG nova.virt.libvirt.driver [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Attempting to detach device vdb from instance be3af3e0-e77e-4be9-9458-b874e91bdd42 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.659 281103 DEBUG nova.virt.libvirt.guest [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] detach device xml: <disk type="network" device="disk">
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   <source protocol="rbd" name="volumes/volume-c4d37f21-a692-4fd8-a5cf-3ec6f97372f0">
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:     <host name="172.18.0.103" port="6789"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:     <host name="172.18.0.104" port="6789"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:     <host name="172.18.0.105" port="6789"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   </source>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   <target dev="vdb" bus="virtio"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   <serial>c4d37f21-a692-4fd8-a5cf-3ec6f97372f0</serial>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: </disk>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.670 281103 INFO nova.virt.libvirt.driver [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Successfully detached device vdb from instance be3af3e0-e77e-4be9-9458-b874e91bdd42 from the persistent domain config.
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.671 281103 DEBUG nova.virt.libvirt.driver [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance be3af3e0-e77e-4be9-9458-b874e91bdd42 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.671 281103 DEBUG nova.virt.libvirt.guest [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] detach device xml: <disk type="network" device="disk">
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   <source protocol="rbd" name="volumes/volume-c4d37f21-a692-4fd8-a5cf-3ec6f97372f0">
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:     <host name="172.18.0.103" port="6789"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:     <host name="172.18.0.104" port="6789"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:     <host name="172.18.0.105" port="6789"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   </source>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   <target dev="vdb" bus="virtio"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   <serial>c4d37f21-a692-4fd8-a5cf-3ec6f97372f0</serial>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: </disk>
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 05 10:18:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/148606033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.723 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.791 281103 DEBUG nova.virt.libvirt.driver [None req-fd91d540-494a-4bbb-80e5-16c62df99abd - - - - - -] Received event <DeviceRemovedEvent: 1764929905.790809, be3af3e0-e77e-4be9-9458-b874e91bdd42 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.796 281103 DEBUG nova.virt.libvirt.driver [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance be3af3e0-e77e-4be9-9458-b874e91bdd42 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.799 281103 INFO nova.virt.libvirt.driver [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Successfully detached device vdb from instance be3af3e0-e77e-4be9-9458-b874e91bdd42 from the live domain config.
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.907 281103 DEBUG nova.objects.instance [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lazy-loading 'flavor' on Instance uuid be3af3e0-e77e-4be9-9458-b874e91bdd42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:18:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:25.953 281103 DEBUG oslo_concurrency.lockutils [None req-58e2c678-3059-40f6-b929-bba07d13e302 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
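
[Annotation] The detach above is deliberately two-phase: the disk is first removed from the persistent domain definition, then from the live guest, and nova only proceeds once libvirt delivers the DeviceRemovedEvent for alias virtio-disk1 (with up to 8 retries, per the `(1/8)` marker). A minimal libvirt-python sketch of the same two calls; the disk XML is abbreviated here (the full rbd <source> body is in the log) and the domain name matches the machined records further down:

    import libvirt

    DISK_XML = """<disk type="network" device="disk">
      <target dev="vdb" bus="virtio"/>
    </disk>"""  # abbreviated; libvirt matches the device by its target dev

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000a")

    # Phase 1: drop the disk from the persistent (on-disk) definition.
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

    # Phase 2: ask the running guest to release it; completion arrives
    # asynchronously as a DEVICE_REMOVED event (virtio-disk1 above).
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
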
Dec 05 10:18:25 np0005546420.localdomain systemd[1]: tmp-crun.2n5Chk.mount: Deactivated successfully.
Dec 05 10:18:25 np0005546420.localdomain dnsmasq[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/addn_hosts - 0 addresses
Dec 05 10:18:25 np0005546420.localdomain dnsmasq-dhcp[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/host
Dec 05 10:18:25 np0005546420.localdomain dnsmasq-dhcp[327672]: read /var/lib/neutron/dhcp/936d20b6-4037-4d6d-940d-7d41a03115b3/opts
Dec 05 10:18:25 np0005546420.localdomain podman[327896]: 2025-12-05 10:18:25.994207636 +0000 UTC m=+0.077599287 container kill dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-936d20b6-4037-4d6d-940d-7d41a03115b3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
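
[Annotation] The `container kill` record above is not a container being destroyed: the DHCP agent rewrites the addn_hosts/host/opts files and then delivers SIGHUP to the dnsmasq container so it re-reads them, which is exactly what the three dnsmasq `read ...` lines show. A sketch of the same reload via the podman CLI (container name from the log; requires the same root privileges the agent has):

    import subprocess

    container = "neutron-dnsmasq-qdhcp-936d20b6-4037-4d6d-940d-7d41a03115b3"

    # SIGHUP makes dnsmasq re-read its host/opts files without restarting;
    # podman records the signal delivery as a "container kill" event.
    subprocess.check_call(["podman", "kill", "--signal", "HUP", container])
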
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.257 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:26Z|00374|binding|INFO|Releasing lport 3ef9b2ec-8c68-40f2-a60a-84c7dfe5c540 from this chassis (sb_readonly=0)
Dec 05 10:18:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:26Z|00375|binding|INFO|Setting lport 3ef9b2ec-8c68-40f2-a60a-84c7dfe5c540 down in Southbound
Dec 05 10:18:26 np0005546420.localdomain kernel: device tap3ef9b2ec-8c left promiscuous mode
Dec 05 10:18:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:26.272 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-936d20b6-4037-4d6d-940d-7d41a03115b3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936d20b6-4037-4d6d-940d-7d41a03115b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af5a639003e24f09b8489ea02f308e0f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b9e4954-e6a8-4a58-9b03-332fb24a38bd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=3ef9b2ec-8c68-40f2-a60a-84c7dfe5c540) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:18:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:26.274 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 3ef9b2ec-8c68-40f2-a60a-84c7dfe5c540 in datapath 936d20b6-4037-4d6d-940d-7d41a03115b3 unbound from our chassis
Dec 05 10:18:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:26.276 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 936d20b6-4037-4d6d-940d-7d41a03115b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:18:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:26.278 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2db45e-a7d0-425b-a034-feb50dc4e4d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
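
[Annotation] The `Matched UPDATE: PortBindingUpdatedEvent(...)` line above is ovsdbapp's IDL event machinery: the metadata agent watches the southbound Port_Binding table and reacts when a port it hosts loses its chassis. A minimal RowEvent subclass in the same shape (the handler body and the registration comment are illustrative; neutron's real event class carries more filtering):

    from ovsdbapp.backend.ovs_idl import event

    class PortUnboundEvent(event.RowEvent):
        """Fires on Port_Binding updates, like the match logged above."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None --
            # the same triple printed in the Matched UPDATE line.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
            self.event_name = "PortUnboundEvent"

        def run(self, event, row, old):
            # `old` holds the previous values of the changed columns, e.g.
            # old=Port_Binding(up=[True], chassis=[...]) in the log.
            if not row.chassis:
                print(f"Port {row.logical_port} unbound from our chassis")

    # Registered on the IDL's notify handler, roughly:
    #   idl.notify_handler.watch_event(PortUnboundEvent())
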
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.284 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:26 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:18:26 np0005546420.localdomain ceph-mon[298353]: pgmap v669: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 85 KiB/s wr, 62 op/s
Dec 05 10:18:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2805542685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:18:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:18:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 05 10:18:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.819 281103 DEBUG oslo_concurrency.lockutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.820 281103 DEBUG oslo_concurrency.lockutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.820 281103 DEBUG oslo_concurrency.lockutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.821 281103 DEBUG oslo_concurrency.lockutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.821 281103 DEBUG oslo_concurrency.lockutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.826 281103 INFO nova.compute.manager [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Terminating instance
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.828 281103 DEBUG nova.compute.manager [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:26 np0005546420.localdomain kernel: device tap8ddbcf74-77 left promiscuous mode
Dec 05 10:18:26 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929906.9024] device (tap8ddbcf74-77): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Dec 05 10:18:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:26Z|00376|binding|INFO|Releasing lport 8ddbcf74-77c9-415e-9ff7-3416cf2f699f from this chassis (sb_readonly=0)
Dec 05 10:18:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:26Z|00377|binding|INFO|Setting lport 8ddbcf74-77c9-415e-9ff7-3416cf2f699f down in Southbound
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.916 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:18:26 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:26Z|00378|binding|INFO|Removing iface tap8ddbcf74-77 ovn-installed in OVS
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.930 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:26.940 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:26.942 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:15:a9 10.100.0.14'], port_security=['fa:16:3e:2e:15:a9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be3af3e0-e77e-4be9-9458-b874e91bdd42', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3554a89b305c449f9fd292eca5647512', 'neutron:revision_number': '4', 'neutron:security_group_ids': '811851ce-aefb-4b50-bb3d-fd5f8bc97e90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=135cedc8-bceb-4f2f-8778-26f5bc6f81d3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=8ddbcf74-77c9-415e-9ff7-3416cf2f699f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:18:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:26.944 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 8ddbcf74-77c9-415e-9ff7-3416cf2f699f in datapath 05dd4ee6-5f37-4402-88a5-db28b0b4198e unbound from our chassis
Dec 05 10:18:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:26.946 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port f6b4fd13-f8dc-480f-9f29-ae687770e358 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:18:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:26.946 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05dd4ee6-5f37-4402-88a5-db28b0b4198e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:18:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:26.947 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac2b28c-46e0-4348-9a4d-0273735e8fb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:26 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:26.948 159503 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e namespace which is not needed anymore
Dec 05 10:18:26 np0005546420.localdomain systemd[1]: machine-qemu\x2d2\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec 05 10:18:26 np0005546420.localdomain systemd[1]: machine-qemu\x2d2\x2dinstance\x2d0000000a.scope: Consumed 18.280s CPU time.
Dec 05 10:18:26 np0005546420.localdomain systemd-machined[203266]: Machine qemu-2-instance-0000000a terminated.
Dec 05 10:18:27 np0005546420.localdomain podman[327924]: 2025-12-05 10:18:27.028490016 +0000 UTC m=+0.093088705 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.059 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.064 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.082 281103 INFO nova.virt.libvirt.driver [-] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Instance destroyed successfully.
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.083 281103 DEBUG nova.objects.instance [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lazy-loading 'resources' on Instance uuid be3af3e0-e77e-4be9-9458-b874e91bdd42 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.098 281103 DEBUG nova.virt.libvirt.vif [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T10:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1667374142',display_name='tempest-VolumesBackupsTest-instance-1667374142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005546420.localdomain',hostname='tempest-volumesbackupstest-instance-1667374142',id=10,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLyKqIsd+SCFDxbPlfFsm08RfHyG+rO21YddeoWaTQBVVF6Cco+Ied8SAdXLL5E1pUN90Je2RcobCpuLF0gx9YPyalVuHrbR3g3NAZ05ZQKCynlq8k7BUsFjqSil42j2GA==',key_name='tempest-keypair-1886164452',keypairs=<?>,launch_index=0,launched_at=2025-12-05T10:17:27Z,launched_on='np0005546420.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='np0005546420.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3554a89b305c449f9fd292eca5647512',ramdisk_id='',reservation_id='r-7yg9v6dz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-833308520',owner_user_name='tempest-VolumesBackupsTest-833308520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T10:17:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0b795e7702e342d9821a3667644be5b0',uuid=be3af3e0-e77e-4be9-9458-b874e91bdd42,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.100 281103 DEBUG nova.network.os_vif_util [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Converting VIF {"id": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "address": "fa:16:3e:2e:15:a9", "network": {"id": "05dd4ee6-5f37-4402-88a5-db28b0b4198e", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1196921280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "3554a89b305c449f9fd292eca5647512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ddbcf74-77", "ovs_interfaceid": "8ddbcf74-77c9-415e-9ff7-3416cf2f699f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.102 281103 DEBUG nova.network.os_vif_util [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=8ddbcf74-77c9-415e-9ff7-3416cf2f699f,network=Network(05dd4ee6-5f37-4402-88a5-db28b0b4198e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ddbcf74-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.103 281103 DEBUG os_vif [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=8ddbcf74-77c9-415e-9ff7-3416cf2f699f,network=Network(05dd4ee6-5f37-4402-88a5-db28b0b4198e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ddbcf74-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.108 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.109 281103 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ddbcf74-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.112 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:27 np0005546420.localdomain podman[327924]: 2025-12-05 10:18:27.115625305 +0000 UTC m=+0.180224004 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.115 281103 INFO os_vif [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:15:a9,bridge_name='br-int',has_traffic_filtering=True,id=8ddbcf74-77c9-415e-9ff7-3416cf2f699f,network=Network(05dd4ee6-5f37-4402-88a5-db28b0b4198e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ddbcf74-77')
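
[Annotation] The unplug sequence above runs through os-vif: nova converts its own VIF dict into a VIFOpenVSwitch object (`nova_to_osvif_vif`), and the ovs plugin's unplug ends in the `DelPortCommand` that removes tap8ddbcf74-77 from br-int. A sketch of driving os-vif directly, with the field values copied from the log; exact constructor fields may differ slightly across os-vif releases:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered plugins (ovs, noop, ...)

    port = vif.VIFOpenVSwitch(
        id="8ddbcf74-77c9-415e-9ff7-3416cf2f699f",
        address="fa:16:3e:2e:15:a9",
        vif_name="tap8ddbcf74-77",
        bridge_name="br-int",
        plugin="ovs",
        network=network.Network(id="05dd4ee6-5f37-4402-88a5-db28b0b4198e"),
    )
    instance = instance_info.InstanceInfo(
        uuid="be3af3e0-e77e-4be9-9458-b874e91bdd42",
        name="tempest-VolumesBackupsTest-instance-1667374142",
    )

    # Ends with the ovsdbapp DelPortCommand seen in the log.
    os_vif.unplug(port, instance)
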
Dec 05 10:18:27 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.146 281103 DEBUG nova.compute.manager [req-eab00d50-398f-4cad-b5fd-4eee2d94e5e5 req-c716a431-c6b9-41df-9d49-397df2781dba c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Received event network-vif-unplugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.147 281103 DEBUG oslo_concurrency.lockutils [req-eab00d50-398f-4cad-b5fd-4eee2d94e5e5 req-c716a431-c6b9-41df-9d49-397df2781dba c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.148 281103 DEBUG oslo_concurrency.lockutils [req-eab00d50-398f-4cad-b5fd-4eee2d94e5e5 req-c716a431-c6b9-41df-9d49-397df2781dba c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.149 281103 DEBUG oslo_concurrency.lockutils [req-eab00d50-398f-4cad-b5fd-4eee2d94e5e5 req-c716a431-c6b9-41df-9d49-397df2781dba c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.149 281103 DEBUG nova.compute.manager [req-eab00d50-398f-4cad-b5fd-4eee2d94e5e5 req-c716a431-c6b9-41df-9d49-397df2781dba c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] No waiting events found dispatching network-vif-unplugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.149 281103 DEBUG nova.compute.manager [req-eab00d50-398f-4cad-b5fd-4eee2d94e5e5 req-c716a431-c6b9-41df-9d49-397df2781dba c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Received event network-vif-unplugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 10:18:27 np0005546420.localdomain neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e[327270]: [NOTICE]   (327274) : haproxy version is 2.8.14-c23fe91
Dec 05 10:18:27 np0005546420.localdomain neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e[327270]: [NOTICE]   (327274) : path to executable is /usr/sbin/haproxy
Dec 05 10:18:27 np0005546420.localdomain neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e[327270]: [WARNING]  (327274) : Exiting Master process...
Dec 05 10:18:27 np0005546420.localdomain neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e[327270]: [WARNING]  (327274) : Exiting Master process...
Dec 05 10:18:27 np0005546420.localdomain neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e[327270]: [ALERT]    (327274) : Current worker (327276) exited with code 143 (Terminated)
Dec 05 10:18:27 np0005546420.localdomain neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e[327270]: [WARNING]  (327274) : All workers exited. Exiting... (0)
Dec 05 10:18:27 np0005546420.localdomain systemd[1]: libpod-942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987.scope: Deactivated successfully.
Dec 05 10:18:27 np0005546420.localdomain podman[327977]: 2025-12-05 10:18:27.172791691 +0000 UTC m=+0.077056301 container died 942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 10:18:27 np0005546420.localdomain podman[327977]: 2025-12-05 10:18:27.224682142 +0000 UTC m=+0.128946712 container cleanup 942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:18:27 np0005546420.localdomain podman[328006]: 2025-12-05 10:18:27.283395925 +0000 UTC m=+0.104035193 container cleanup 942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:18:27 np0005546420.localdomain systemd[1]: libpod-conmon-942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987.scope: Deactivated successfully.
Dec 05 10:18:27 np0005546420.localdomain podman[328023]: 2025-12-05 10:18:27.329367345 +0000 UTC m=+0.081362513 container remove 942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:18:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:27.333 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bfa511-a5b6-404f-a8f4-38c92e1cfffe]: (4, ('Fri Dec  5 10:18:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e (942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987)\n942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987\nFri Dec  5 10:18:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e (942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987)\n942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:27.336 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[32175cb9-1e39-45d5-a172-592ce73ff0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:27.338 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05dd4ee6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.376 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:27 np0005546420.localdomain kernel: device tap05dd4ee6-50 left promiscuous mode
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.379 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:27.384 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf44b5b-e81b-4cdb-95cc-0be7e7501f9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.387 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:27.396 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ce00d7-5bdd-4695-8a16-68c22ed57b5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:27.399 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[c65c496f-16f1-4dac-85a7-97059f17aaaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:27.409 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b6e7bc-8119-4377-a93a-24e38fa72abc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1296471, 'reachable_time': 39115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328043, 'error': None, 'target': 'ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:27.412 159609 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 10:18:27 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:27.412 159609 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfbe0ed-ed16-4243-9beb-fd9aac15fa7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
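For reference, the privsep replies above carry a pyroute2-style netlink dump of the loopback device inside the ovnmeta namespace, and the following reply confirms the namespace removal done by neutron.privileged.agent.linux.ip_lib.remove_netns. A minimal standalone sketch of those two steps, assuming the pyroute2 package and root privileges (the namespace name is copied from the log):

    # Sketch only: mirrors the agent's namespace teardown seen above.
    from pyroute2 import NetNS, netns

    NS = 'ovnmeta-05dd4ee6-5f37-4402-88a5-db28b0b4198e'

    # Dump links inside the namespace; each message has the same
    # structure as the privsep reply (IFLA_* attrs, 'state', stats).
    with NetNS(NS) as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'],
                  link.get_attr('IFLA_MTU'))

    # Remove the namespace, as remove_netns() ultimately does.
    netns.remove(NS)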
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.638 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:18:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.821 281103 INFO nova.virt.libvirt.driver [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Deleting instance files /var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42_del
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.822 281103 INFO nova.virt.libvirt.driver [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Deletion of /var/lib/nova/instances/be3af3e0-e77e-4be9-9458-b874e91bdd42_del complete
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.866 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.893 281103 DEBUG nova.virt.libvirt.host [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.894 281103 INFO nova.virt.libvirt.host [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] UEFI support detected
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.898 281103 INFO nova.compute.manager [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Took 1.07 seconds to destroy the instance on the hypervisor.
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.899 281103 DEBUG oslo.service.loopingcall [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.899 281103 DEBUG nova.compute.manager [-] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 10:18:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:27.900 281103 DEBUG nova.network.neutron [-] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 10:18:27 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:18:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-c7df46fcb1dbc419448566f52596c4c84782ca0e250fa0a138f39a78baeb8e2d-merged.mount: Deactivated successfully.
Dec 05 10:18:28 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-942dfbb0fbdef50323c3a2b532ee91f151cfd581a87a390b45b38f1261096987-userdata-shm.mount: Deactivated successfully.
Dec 05 10:18:28 np0005546420.localdomain systemd[1]: run-netns-ovnmeta\x2d05dd4ee6\x2d5f37\x2d4402\x2d88a5\x2ddb28b0b4198e.mount: Deactivated successfully.
Dec 05 10:18:28 np0005546420.localdomain podman[328045]: 2025-12-05 10:18:28.028054934 +0000 UTC m=+0.099626767 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:18:28 np0005546420.localdomain podman[328045]: 2025-12-05 10:18:28.039409834 +0000 UTC m=+0.110981627 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:18:28 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
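Lines like "Started /usr/bin/podman healthcheck run <id>" followed by a health_status event, an exec_died event, and "Deactivated successfully" make up one full healthcheck cycle: a transient systemd unit execs podman, which runs the container's configured test and records the result. The same check can be triggered by hand; a small sketch using Python's subprocess (container ID copied from the log, podman CLI assumed on PATH):

    # Sketch: run the same healthcheck the transient unit runs.
    # 'podman healthcheck run' exits 0 when the test passes.
    import subprocess

    CID = '94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110'
    rc = subprocess.run(['podman', 'healthcheck', 'run', CID]).returncode
    print('healthy' if rc == 0 else 'unhealthy')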
Dec 05 10:18:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:28.237 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:28 np0005546420.localdomain systemd[1]: tmp-crun.li6KAp.mount: Deactivated successfully.
Dec 05 10:18:28 np0005546420.localdomain dnsmasq[327672]: exiting on receipt of SIGTERM
Dec 05 10:18:28 np0005546420.localdomain podman[328080]: 2025-12-05 10:18:28.305242321 +0000 UTC m=+0.069147706 container kill dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-936d20b6-4037-4d6d-940d-7d41a03115b3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 10:18:28 np0005546420.localdomain systemd[1]: libpod-dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096.scope: Deactivated successfully.
Dec 05 10:18:28 np0005546420.localdomain podman[328094]: 2025-12-05 10:18:28.388601625 +0000 UTC m=+0.061093637 container died dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-936d20b6-4037-4d6d-940d-7d41a03115b3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 10:18:28 np0005546420.localdomain podman[328094]: 2025-12-05 10:18:28.437766682 +0000 UTC m=+0.110258634 container remove dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-936d20b6-4037-4d6d-940d-7d41a03115b3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:18:28 np0005546420.localdomain systemd[1]: libpod-conmon-dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096.scope: Deactivated successfully.
Dec 05 10:18:28 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:28.572 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:18:28 np0005546420.localdomain ceph-mon[298353]: pgmap v670: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 66 KiB/s wr, 101 op/s
Dec 05 10:18:29 np0005546420.localdomain neutron_sriov_agent[255821]: 2025-12-05 10:18:29.015 2 INFO neutron.agent.securitygroups_rpc [req-32c68d04-fa19-4a2e-82f4-af84e05aac5a req-b8f049fe-8e8e-43d6-8b20-d70007e03bd7 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Security group member updated ['811851ce-aefb-4b50-bb3d-fd5f8bc97e90']
Dec 05 10:18:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-8e51660462ff53b2b29be67e373a9d944f62adbf427ec2a0a5a66e7d30508d27-merged.mount: Deactivated successfully.
Dec 05 10:18:29 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc37dccbfc67fba74b0e4a719c3fdd3dcc72d2ca88c59a54dcae4a88f797e096-userdata-shm.mount: Deactivated successfully.
Dec 05 10:18:29 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d936d20b6\x2d4037\x2d4d6d\x2d940d\x2d7d41a03115b3.mount: Deactivated successfully.
Dec 05 10:18:29 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:29.116 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.155 281103 DEBUG nova.network.neutron [-] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.176 281103 INFO nova.compute.manager [-] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Took 1.28 seconds to deallocate network for instance.
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.212 281103 DEBUG nova.compute.manager [req-339e83e5-6da2-467c-a9d1-18416cf1ca80 req-f7432f2a-f5c1-4d12-9784-0276c8735978 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Received event network-vif-plugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.213 281103 DEBUG oslo_concurrency.lockutils [req-339e83e5-6da2-467c-a9d1-18416cf1ca80 req-f7432f2a-f5c1-4d12-9784-0276c8735978 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.214 281103 DEBUG oslo_concurrency.lockutils [req-339e83e5-6da2-467c-a9d1-18416cf1ca80 req-f7432f2a-f5c1-4d12-9784-0276c8735978 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.215 281103 DEBUG oslo_concurrency.lockutils [req-339e83e5-6da2-467c-a9d1-18416cf1ca80 req-f7432f2a-f5c1-4d12-9784-0276c8735978 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.215 281103 DEBUG nova.compute.manager [req-339e83e5-6da2-467c-a9d1-18416cf1ca80 req-f7432f2a-f5c1-4d12-9784-0276c8735978 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] No waiting events found dispatching network-vif-plugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.216 281103 WARNING nova.compute.manager [req-339e83e5-6da2-467c-a9d1-18416cf1ca80 req-f7432f2a-f5c1-4d12-9784-0276c8735978 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Received unexpected event network-vif-plugged-8ddbcf74-77c9-415e-9ff7-3416cf2f699f for instance with vm_state active and task_state deleting.
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.216 281103 DEBUG nova.compute.manager [req-339e83e5-6da2-467c-a9d1-18416cf1ca80 req-f7432f2a-f5c1-4d12-9784-0276c8735978 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Received event network-vif-deleted-8ddbcf74-77c9-415e-9ff7-3416cf2f699f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
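The acquire/pop/release triple above is the usual oslo.concurrency pattern for serializing external events per instance: one named lock ("<uuid>-events"), under which the waiting event, if any, is popped. A schematic sketch of that pattern, assuming only the oslo.concurrency library (illustrative, not nova's actual implementation):

    # Illustrative per-instance event pop guarded by a named lock,
    # matching the "<uuid>-events" lock names in the log above.
    from oslo_concurrency import lockutils

    def pop_instance_event(events, instance_uuid, event_name):
        with lockutils.lock('%s-events' % instance_uuid):
            # Returning None corresponds to the logged
            # "No waiting events found" branch.
            return events.get(instance_uuid, {}).pop(event_name, None)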
Dec 05 10:18:29 np0005546420.localdomain systemd[1]: tmp-crun.ZXrFDr.mount: Deactivated successfully.
Dec 05 10:18:29 np0005546420.localdomain dnsmasq[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/addn_hosts - 1 addresses
Dec 05 10:18:29 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/host
Dec 05 10:18:29 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/opts
Dec 05 10:18:29 np0005546420.localdomain podman[328137]: 2025-12-05 10:18:29.245087555 +0000 UTC m=+0.046756674 container kill a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.244 281103 DEBUG oslo_concurrency.lockutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.246 281103 DEBUG oslo_concurrency.lockutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.327 281103 DEBUG oslo_concurrency.processutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:18:29 np0005546420.localdomain ceph-mon[298353]: pgmap v671: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 50 KiB/s wr, 40 op/s
Dec 05 10:18:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:18:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:18:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:18:29 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:18:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:18:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2781236374' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.814 281103 DEBUG oslo_concurrency.processutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
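The nova CMD above and the client.openstack 'df' dispatches in the mon log are the same JSON "prefix" command, issued once through the ceph CLI and logged again as the monitor handles it. A minimal sketch of sending it straight from Python, assuming the python-rados bindings and the same client.openstack keyring referenced by /etc/ceph/ceph.conf:

    # Sketch: issue the {"prefix": "df"} mon command seen in the
    # audit log via python-rados instead of the ceph CLI.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        print(ret, json.loads(out))
    finally:
        cluster.shutdown()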
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.821 281103 DEBUG nova.compute.provider_tree [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.838 281103 DEBUG nova.scheduler.client.report [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
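For context, placement treats that inventory as capacity = (total - reserved) * allocation_ratio per resource class, so the dict above implies 128 schedulable VCPUs, 15226 MB of RAM, and 40 GB of disk. The arithmetic, spelled out:

    # Capacity implied by the inventory dict logged above.
    inventory = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 1,   'allocation_ratio': 1.0},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0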
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.867 281103 DEBUG oslo_concurrency.lockutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.904 281103 INFO nova.scheduler.client.report [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Deleted allocations for instance be3af3e0-e77e-4be9-9458-b874e91bdd42
Dec 05 10:18:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:29.974 281103 DEBUG oslo_concurrency.lockutils [None req-32c68d04-fa19-4a2e-82f4-af84e05aac5a 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Lock "be3af3e0-e77e-4be9-9458-b874e91bdd42" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:30 np0005546420.localdomain systemd-journald[48245]: Data hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.0 (53724 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation.
Dec 05 10:18:30 np0005546420.localdomain systemd-journald[48245]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 05 10:18:30 np0005546420.localdomain rsyslogd[756]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 10:18:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2781236374' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:18:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:18:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "format": "json"}]: dispatch
Dec 05 10:18:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:30.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e258 e258: 6 total, 6 up, 6 in
Dec 05 10:18:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:31.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:32.112 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:32 np0005546420.localdomain ceph-mon[298353]: osdmap e258: 6 total, 6 up, 6 in
Dec 05 10:18:32 np0005546420.localdomain ceph-mon[298353]: pgmap v673: 177 pgs: 177 active+clean; 232 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 108 KiB/s wr, 71 op/s
Dec 05 10:18:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e259 e259: 6 total, 6 up, 6 in
Dec 05 10:18:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:32.980 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:18:32 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:32.981 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
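The "Matched UPDATE" lines here and at the end of this excerpt come from ovsdbapp's row-event machinery: a handler is registered with an event tuple and a table name, and its run() is invoked whenever a matching row changes. A minimal sketch of such a handler, assuming only ovsdbapp (the class body is illustrative, not the agent's code):

    # Illustrative ovsdbapp row event patterned on the match logged
    # above: events=('update',), table='SB_Global', conditions=None.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.event_name = 'SbGlobalUpdateEvent'

        def run(self, event, row, old):
            # Fired on each SB_Global change; per the log, the agent
            # reacts to nb_cfg bumps and waits 5 seconds before
            # updating its Chassis record.
            print('nb_cfg is now', row.nb_cfg)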
Dec 05 10:18:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:32.982 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:33 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e260 e260: 6 total, 6 up, 6 in
Dec 05 10:18:33 np0005546420.localdomain ceph-mon[298353]: osdmap e259: 6 total, 6 up, 6 in
Dec 05 10:18:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2134779501' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:18:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:18:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 05 10:18:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:33.243 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:33 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:18:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:33.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:18:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:33.892 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:33.892 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:33.893 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:33.893 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:18:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:33.893 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: osdmap e260: 6 total, 6 up, 6 in
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/4052893673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "auth_id": "tempest-cephx-id-1284922822", "tenant_id": "713485f6825d4fbb96a3a6dfd0cac4e0", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1284922822", "format": "json"} : dispatch
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: pgmap v676: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 78 KiB/s wr, 63 op/s
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1284922822", "caps": ["mds", "allow rw path=/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/15d5ebce-8438-4ec7-a76c-c06816e0fb9d", "osd", "allow rw pool=manila_data namespace=fsvolumens_9c8dec9b-5cd8-4827-b98c-9462c244cafe", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1284922822", "caps": ["mds", "allow rw path=/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/15d5ebce-8438-4ec7-a76c-c06816e0fb9d", "osd", "allow rw pool=manila_data namespace=fsvolumens_9c8dec9b-5cd8-4827-b98c-9462c244cafe", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e261 e261: 6 total, 6 up, 6 in
Dec 05 10:18:34 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=tempest-cephx-id-1284922822,client_metadata.root=/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/15d5ebce-8438-4ec7-a76c-c06816e0fb9d],prefix=session evict} (starting...)
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1415403019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:18:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:34.478 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:18:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:34.738 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:18:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:34.740 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11463MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:18:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:34.741 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:18:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:34.741 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:18:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:34.793 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:18:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:34.793 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1786875066' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:18:34 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1786875066' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:18:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:34.809 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3073646646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "auth_id": "tempest-cephx-id-1284922822", "format": "json"}]: dispatch
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: osdmap e261: 6 total, 6 up, 6 in
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1284922822", "format": "json"} : dispatch
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1284922822"} : dispatch
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1284922822"}]': finished
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "auth_id": "tempest-cephx-id-1284922822", "format": "json"}]: dispatch
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1415403019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "format": "json"}]: dispatch
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "force": true, "format": "json"}]: dispatch
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1786875066' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1786875066' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:18:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3073646646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:18:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:35.249 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:18:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:35.256 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:18:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:35.281 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:18:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:35.307 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:18:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:35.308 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:18:36 np0005546420.localdomain ceph-mon[298353]: pgmap v678: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 5.3 KiB/s wr, 80 op/s
Dec 05 10:18:36 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:36Z|00379|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0
Dec 05 10:18:36 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:36Z|00380|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0
Dec 05 10:18:36 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:36Z|00381|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0
Dec 05 10:18:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:36.444 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:36.446 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:36.450 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:36.515 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:36 np0005546420.localdomain dnsmasq[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/addn_hosts - 0 addresses
Dec 05 10:18:36 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/host
Dec 05 10:18:36 np0005546420.localdomain dnsmasq-dhcp[326313]: read /var/lib/neutron/dhcp/05dd4ee6-5f37-4402-88a5-db28b0b4198e/opts
Dec 05 10:18:36 np0005546420.localdomain podman[328246]: 2025-12-05 10:18:36.586056032 +0000 UTC m=+0.051279483 container kill a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:18:36 np0005546420.localdomain systemd[1]: tmp-crun.e8ma9m.mount: Deactivated successfully.
Dec 05 10:18:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:18:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:18:36 np0005546420.localdomain podman[328261]: 2025-12-05 10:18:36.699244456 +0000 UTC m=+0.086239063 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:18:36 np0005546420.localdomain systemd[1]: tmp-crun.ieyiKz.mount: Deactivated successfully.
Dec 05 10:18:36 np0005546420.localdomain podman[328261]: 2025-12-05 10:18:36.738484798 +0000 UTC m=+0.125479435 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 10:18:36 np0005546420.localdomain podman[328265]: 2025-12-05 10:18:36.738534829 +0000 UTC m=+0.117730145 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:18:36 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:18:36 np0005546420.localdomain kernel: device tap1f8ec0e0-cc left promiscuous mode
Dec 05 10:18:36 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:36Z|00382|binding|INFO|Releasing lport 1f8ec0e0-ccc8-49be-bea2-9b491416bc1f from this chassis (sb_readonly=0)
Dec 05 10:18:36 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:36Z|00383|binding|INFO|Setting lport 1f8ec0e0-ccc8-49be-bea2-9b491416bc1f down in Southbound
Dec 05 10:18:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:36.795 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:36.812 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:36 np0005546420.localdomain podman[328265]: 2025-12-05 10:18:36.821580783 +0000 UTC m=+0.200776129 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:18:36 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:18:36 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:36.961 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05dd4ee6-5f37-4402-88a5-db28b0b4198e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3554a89b305c449f9fd292eca5647512', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=135cedc8-bceb-4f2f-8778-26f5bc6f81d3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=1f8ec0e0-ccc8-49be-bea2-9b491416bc1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:18:36 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:36.963 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 1f8ec0e0-ccc8-49be-bea2-9b491416bc1f in datapath 05dd4ee6-5f37-4402-88a5-db28b0b4198e unbound from our chassis
Dec 05 10:18:36 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:36.965 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05dd4ee6-5f37-4402-88a5-db28b0b4198e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:18:36 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:36.967 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a67ee8-9ef3-4550-b455-efe82711f973]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:37.115 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:37 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:18:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:18:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:18:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:18:37 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:37.983 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:18:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:38.243 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:38 np0005546420.localdomain ceph-mon[298353]: pgmap v679: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 136 KiB/s wr, 86 op/s
Dec 05 10:18:39 np0005546420.localdomain dnsmasq[326313]: exiting on receipt of SIGTERM
Dec 05 10:18:39 np0005546420.localdomain podman[328325]: 2025-12-05 10:18:39.744603991 +0000 UTC m=+0.071148167 container kill a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 10:18:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:18:39 np0005546420.localdomain systemd[1]: libpod-a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab.scope: Deactivated successfully.
Dec 05 10:18:39 np0005546420.localdomain podman[328340]: 2025-12-05 10:18:39.867261218 +0000 UTC m=+0.103801236 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:18:39 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:18:39 np0005546420.localdomain podman[328339]: 2025-12-05 10:18:39.887203363 +0000 UTC m=+0.125568858 container died a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:18:39 np0005546420.localdomain systemd[1]: tmp-crun.ENxOim.mount: Deactivated successfully.
Dec 05 10:18:39 np0005546420.localdomain podman[328339]: 2025-12-05 10:18:39.926010961 +0000 UTC m=+0.164376416 container cleanup a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 10:18:39 np0005546420.localdomain systemd[1]: libpod-conmon-a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab.scope: Deactivated successfully.
Dec 05 10:18:39 np0005546420.localdomain podman[328352]: 2025-12-05 10:18:39.971754693 +0000 UTC m=+0.184874958 container remove a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05dd4ee6-5f37-4402-88a5-db28b0b4198e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 10:18:39 np0005546420.localdomain podman[328340]: 2025-12-05 10:18:39.986710395 +0000 UTC m=+0.223250453 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec 05 10:18:40 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:18:40 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:40.016 262769 INFO neutron.agent.dhcp.agent [None req-71ac5f5b-3311-4e57-80cc-2abe1ce28476 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:18:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:40 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:40.320 262769 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:18:40 np0005546420.localdomain ceph-mon[298353]: pgmap v680: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 111 KiB/s wr, 54 op/s
Dec 05 10:18:40 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:18:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:18:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 05 10:18:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 05 10:18:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:40.631 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-795f056e8a7fd525c76b3cbb9a5b90915e4e9fffd72d3681c376119e9fd0abae-merged.mount: Deactivated successfully.
Dec 05 10:18:40 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a57eb6024c487d742a9d4f5131e098c2b61c193f4f23f42459cd5d5456ed0cab-userdata-shm.mount: Deactivated successfully.
Dec 05 10:18:40 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d05dd4ee6\x2d5f37\x2d4402\x2d88a5\x2ddb28b0b4198e.mount: Deactivated successfully.
Dec 05 10:18:41 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e262 e262: 6 total, 6 up, 6 in
Dec 05 10:18:41 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:18:41 np0005546420.localdomain ceph-mon[298353]: osdmap e262: 6 total, 6 up, 6 in
Dec 05 10:18:41 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:42.076 281103 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764929907.0748305, be3af3e0-e77e-4be9-9458-b874e91bdd42 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 10:18:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:42.077 281103 INFO nova.compute.manager [-] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] VM Stopped (Lifecycle Event)
Dec 05 10:18:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:42.104 281103 DEBUG nova.compute.manager [None req-2679b482-f409-4308-a6ee-01a64673a07a - - - - - -] [instance: be3af3e0-e77e-4be9-9458-b874e91bdd42] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 10:18:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:42.117 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:42 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "628154e8-b5ef-4050-80fd-dcb33fc276cd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:18:42 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "628154e8-b5ef-4050-80fd-dcb33fc276cd", "format": "json"}]: dispatch
Dec 05 10:18:42 np0005546420.localdomain ceph-mon[298353]: pgmap v682: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 103 KiB/s wr, 53 op/s
Dec 05 10:18:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:43.246 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:18:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:18:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:18:44 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:18:44 np0005546420.localdomain ceph-mon[298353]: pgmap v683: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 120 KiB/s wr, 48 op/s
Dec 05 10:18:45 np0005546420.localdomain sshd[328382]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:18:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:45 np0005546420.localdomain sshd[328384]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:18:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fea0968b-0e9f-458d-879e-5e4c7889c409", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:18:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:46 np0005546420.localdomain sshd[328382]: Received disconnect from 24.232.50.5 port 48566:11: Bye Bye [preauth]
Dec 05 10:18:46 np0005546420.localdomain sshd[328382]: Disconnected from authenticating user root 24.232.50.5 port 48566 [preauth]
Dec 05 10:18:46 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:46.451 262769 INFO neutron.agent.linux.ip_lib [None req-39ac43f9-ebbd-48e1-a3f2-b2787ffa4084 - - - - - -] Device tap4c32b286-fd cannot be used as it has no MAC address
Dec 05 10:18:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:46.476 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:46 np0005546420.localdomain kernel: device tap4c32b286-fd entered promiscuous mode
Dec 05 10:18:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:46.486 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:46 np0005546420.localdomain NetworkManager[5963]: <info>  [1764929926.4871] manager: (tap4c32b286-fd): new Generic device (/org/freedesktop/NetworkManager/Devices/65)
Dec 05 10:18:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:46Z|00384|binding|INFO|Claiming lport 4c32b286-fdcf-4caa-8843-0ed2f77650bd for this chassis.
Dec 05 10:18:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:46Z|00385|binding|INFO|4c32b286-fdcf-4caa-8843-0ed2f77650bd: Claiming unknown
Dec 05 10:18:46 np0005546420.localdomain systemd-udevd[328396]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:18:46 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:46.502 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-63bd7005-10d8-4e7e-8ca2-cb537d610fb2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-63bd7005-10d8-4e7e-8ca2-cb537d610fb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6e8880ab10e4b26b0074c6f9b06aca3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd1f52a5-1e4c-40b2-bb9a-c75028fc42e8, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=4c32b286-fdcf-4caa-8843-0ed2f77650bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:18:46 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:46.504 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 4c32b286-fdcf-4caa-8843-0ed2f77650bd in datapath 63bd7005-10d8-4e7e-8ca2-cb537d610fb2 bound to our chassis
Dec 05 10:18:46 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:46.505 159503 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 63bd7005-10d8-4e7e-8ca2-cb537d610fb2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 05 10:18:46 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:18:46.506 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3edc97-814a-47a2-8b7f-0a581077d42d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:18:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap4c32b286-fd: No such device
Dec 05 10:18:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:46Z|00386|binding|INFO|Setting lport 4c32b286-fdcf-4caa-8843-0ed2f77650bd ovn-installed in OVS
Dec 05 10:18:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:46Z|00387|binding|INFO|Setting lport 4c32b286-fdcf-4caa-8843-0ed2f77650bd up in Southbound
Dec 05 10:18:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:46.528 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap4c32b286-fd: No such device
Dec 05 10:18:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap4c32b286-fd: No such device
Dec 05 10:18:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap4c32b286-fd: No such device
Dec 05 10:18:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap4c32b286-fd: No such device
Dec 05 10:18:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap4c32b286-fd: No such device
Dec 05 10:18:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap4c32b286-fd: No such device
Dec 05 10:18:46 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tap4c32b286-fd: No such device
Dec 05 10:18:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:46.569 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:46.598 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fea0968b-0e9f-458d-879e-5e4c7889c409", "format": "json"}]: dispatch
Dec 05 10:18:46 np0005546420.localdomain ceph-mon[298353]: pgmap v684: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 111 KiB/s wr, 44 op/s
Dec 05 10:18:46 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:18:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:47.119 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:18:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:18:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:18:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:18:47 np0005546420.localdomain sshd[328384]: Connection reset by authenticating user root 91.202.233.33 port 27688 [preauth]
Dec 05 10:18:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:18:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18291 "" "Go-http-client/1.1"
Dec 05 10:18:47 np0005546420.localdomain sshd[328462]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:18:47 np0005546420.localdomain podman[328468]: 2025-12-05 10:18:47.546318681 +0000 UTC m=+0.129987323 container create e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63bd7005-10d8-4e7e-8ca2-cb537d610fb2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:18:47 np0005546420.localdomain podman[328468]: 2025-12-05 10:18:47.465859078 +0000 UTC m=+0.049527790 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:18:47 np0005546420.localdomain systemd[1]: Started libpod-conmon-e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322.scope.
Dec 05 10:18:47 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:18:47 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cb09b2b36be5d7d55f0b6fbba0a9e8a863a42704f90d532d63550a1f362adf8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:18:47 np0005546420.localdomain podman[328468]: 2025-12-05 10:18:47.621324807 +0000 UTC m=+0.204993489 container init e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63bd7005-10d8-4e7e-8ca2-cb537d610fb2, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:18:47 np0005546420.localdomain podman[328468]: 2025-12-05 10:18:47.632225554 +0000 UTC m=+0.215894226 container start e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63bd7005-10d8-4e7e-8ca2-cb537d610fb2, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:18:47 np0005546420.localdomain dnsmasq[328486]: started, version 2.85 cachesize 150
Dec 05 10:18:47 np0005546420.localdomain dnsmasq[328486]: DNS service limited to local subnets
Dec 05 10:18:47 np0005546420.localdomain dnsmasq[328486]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:18:47 np0005546420.localdomain dnsmasq[328486]: warning: no upstream servers configured
Dec 05 10:18:47 np0005546420.localdomain dnsmasq-dhcp[328486]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:18:47 np0005546420.localdomain dnsmasq[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/addn_hosts - 0 addresses
Dec 05 10:18:47 np0005546420.localdomain dnsmasq-dhcp[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/host
Dec 05 10:18:47 np0005546420.localdomain dnsmasq-dhcp[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/opts
Dec 05 10:18:47 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:18:47 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:18:47 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 05 10:18:47 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 05 10:18:47 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:18:47 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:47.865 262769 INFO neutron.agent.dhcp.agent [None req-245db3d9-69cd-409e-a4f0-fec6622d7381 - - - - - -] DHCP configuration for ports {'9541f7f5-286e-4477-a8fd-9841de75c67f'} is completed
Dec 05 10:18:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:48.248 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:48 np0005546420.localdomain ceph-mon[298353]: pgmap v685: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 62 KiB/s wr, 8 op/s
Dec 05 10:18:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:18:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:18:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:18:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:18:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:18:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:18:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:18:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:18:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:18:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:18:49 np0005546420.localdomain snmpd[68010]: empty variable list in _query
Dec 05 10:18:49 np0005546420.localdomain snmpd[68010]: empty variable list in _query
Dec 05 10:18:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fea0968b-0e9f-458d-879e-5e4c7889c409", "format": "json"}]: dispatch
Dec 05 10:18:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fea0968b-0e9f-458d-879e-5e4c7889c409", "force": true, "format": "json"}]: dispatch
Dec 05 10:18:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:50.001 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:18:49Z, description=, device_id=9194d865-006b-4d8e-bcaf-2896debdf8bd, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e32160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e32100>], id=dbd1260c-1a64-4f50-9a64-c40cfe893f32, ip_allocation=immediate, mac_address=fa:16:3e:77:f8:ef, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:18:44Z, description=, dns_domain=, id=63bd7005-10d8-4e7e-8ca2-cb537d610fb2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesSnapshotTestJSON-1052451854-network, port_security_enabled=True, project_id=a6e8880ab10e4b26b0074c6f9b06aca3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12951, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3603, status=ACTIVE, subnets=['dd8dda27-9274-4b0a-a9b8-5ef026efe040'], tags=[], tenant_id=a6e8880ab10e4b26b0074c6f9b06aca3, updated_at=2025-12-05T10:18:45Z, vlan_transparent=None, network_id=63bd7005-10d8-4e7e-8ca2-cb537d610fb2, port_security_enabled=False, project_id=a6e8880ab10e4b26b0074c6f9b06aca3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3620, status=DOWN, tags=[], tenant_id=a6e8880ab10e4b26b0074c6f9b06aca3, updated_at=2025-12-05T10:18:49Z on network 63bd7005-10d8-4e7e-8ca2-cb537d610fb2
Dec 05 10:18:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:50 np0005546420.localdomain dnsmasq[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/addn_hosts - 1 addresses
Dec 05 10:18:50 np0005546420.localdomain dnsmasq-dhcp[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/host
Dec 05 10:18:50 np0005546420.localdomain dnsmasq-dhcp[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/opts
Dec 05 10:18:50 np0005546420.localdomain podman[328506]: 2025-12-05 10:18:50.266640142 +0000 UTC m=+0.060957112 container kill e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63bd7005-10d8-4e7e-8ca2-cb537d610fb2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:18:50 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:50.530 262769 INFO neutron.agent.dhcp.agent [None req-1b60630c-ffc8-4461-b10c-1085fb9795c0 - - - - - -] DHCP configuration for ports {'dbd1260c-1a64-4f50-9a64-c40cfe893f32'} is completed
Dec 05 10:18:50 np0005546420.localdomain ceph-mon[298353]: pgmap v686: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 62 KiB/s wr, 8 op/s
Dec 05 10:18:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:18:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:18:50 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:18:51 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:51.377 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:18:49Z, description=, device_id=9194d865-006b-4d8e-bcaf-2896debdf8bd, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99db3fa0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99db39d0>], id=dbd1260c-1a64-4f50-9a64-c40cfe893f32, ip_allocation=immediate, mac_address=fa:16:3e:77:f8:ef, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:18:44Z, description=, dns_domain=, id=63bd7005-10d8-4e7e-8ca2-cb537d610fb2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesSnapshotTestJSON-1052451854-network, port_security_enabled=True, project_id=a6e8880ab10e4b26b0074c6f9b06aca3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12951, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3603, status=ACTIVE, subnets=['dd8dda27-9274-4b0a-a9b8-5ef026efe040'], tags=[], tenant_id=a6e8880ab10e4b26b0074c6f9b06aca3, updated_at=2025-12-05T10:18:45Z, vlan_transparent=None, network_id=63bd7005-10d8-4e7e-8ca2-cb537d610fb2, port_security_enabled=False, project_id=a6e8880ab10e4b26b0074c6f9b06aca3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3620, status=DOWN, tags=[], tenant_id=a6e8880ab10e4b26b0074c6f9b06aca3, updated_at=2025-12-05T10:18:49Z on network 63bd7005-10d8-4e7e-8ca2-cb537d610fb2
Dec 05 10:18:51 np0005546420.localdomain podman[328545]: 2025-12-05 10:18:51.62122482 +0000 UTC m=+0.065689879 container kill e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63bd7005-10d8-4e7e-8ca2-cb537d610fb2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:18:51 np0005546420.localdomain dnsmasq[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/addn_hosts - 1 addresses
Dec 05 10:18:51 np0005546420.localdomain dnsmasq-dhcp[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/host
Dec 05 10:18:51 np0005546420.localdomain dnsmasq-dhcp[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/opts
Dec 05 10:18:51 np0005546420.localdomain systemd[1]: tmp-crun.2oE74M.mount: Deactivated successfully.
Dec 05 10:18:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:18:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:18:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:18:51 np0005546420.localdomain systemd[1]: tmp-crun.T0C8FF.mount: Deactivated successfully.
Dec 05 10:18:51 np0005546420.localdomain podman[328560]: 2025-12-05 10:18:51.763131061 +0000 UTC m=+0.103936610 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:18:51 np0005546420.localdomain podman[328560]: 2025-12-05 10:18:51.805570321 +0000 UTC m=+0.146375870 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:18:51 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:18:51 np0005546420.localdomain podman[328559]: 2025-12-05 10:18:51.86413805 +0000 UTC m=+0.205080113 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:18:51 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:18:51.895 262769 INFO neutron.agent.dhcp.agent [None req-0755a226-63a9-469e-9224-18af30dd36b7 - - - - - -] DHCP configuration for ports {'dbd1260c-1a64-4f50-9a64-c40cfe893f32'} is completed
Dec 05 10:18:51 np0005546420.localdomain podman[328559]: 2025-12-05 10:18:51.906644142 +0000 UTC m=+0.247586215 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, release=1755695350)
Dec 05 10:18:51 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:18:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:52.121 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:52 np0005546420.localdomain sshd[328462]: Connection reset by authenticating user root 91.202.233.33 port 27694 [preauth]
Dec 05 10:18:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "628154e8-b5ef-4050-80fd-dcb33fc276cd", "format": "json"}]: dispatch
Dec 05 10:18:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "628154e8-b5ef-4050-80fd-dcb33fc276cd", "force": true, "format": "json"}]: dispatch
Dec 05 10:18:52 np0005546420.localdomain ceph-mon[298353]: pgmap v687: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 594 B/s rd, 61 KiB/s wr, 9 op/s
Dec 05 10:18:52 np0005546420.localdomain sshd[328610]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:18:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:52Z|00388|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0
Dec 05 10:18:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:52Z|00389|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0
Dec 05 10:18:52 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:18:52Z|00390|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0
Dec 05 10:18:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:52.920 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:52.923 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:52.925 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:52.957 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:52.976 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:53.252 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:53 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:18:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:18:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 05 10:18:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 05 10:18:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:53.888 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:54.624 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:54.758 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:54 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:18:54 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:18:54 np0005546420.localdomain ceph-mon[298353]: pgmap v688: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 101 KiB/s wr, 8 op/s
Dec 05 10:18:54 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "32c34ad9-6951-4024-bed1-03795f491ee7", "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:18:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:54.778 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:54 np0005546420.localdomain sshd[328610]: Connection reset by authenticating user root 91.202.233.33 port 63214 [preauth]
Dec 05 10:18:54 np0005546420.localdomain sshd[328614]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:18:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:18:55 np0005546420.localdomain ceph-mon[298353]: pgmap v689: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 77 KiB/s wr, 7 op/s
Dec 05 10:18:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e263 e263: 6 total, 6 up, 6 in
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: osdmap e263: 6 total, 6 up, 6 in
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
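This mon audit sequence is Manila's CephFS access-allow path end to end: client.openstack dispatches "fs subvolume authorize" for auth_id alice, and the active mgr translates it into an "auth get-or-create" for client.alice whose mds/osd caps are pinned to the subvolume path and its fsvolumens_* RADOS namespace (read-only here, access_level "r"). A small sketch for pulling the embedded command JSON back out of such audit lines; parse_audit_cmd is a helper name introduced here, and the greedy regex simply assumes the JSON is the last bracketed chunk on the line:

    import json
    import re

    def parse_audit_cmd(line):
        """Extract the JSON command from a ceph-mon audit line (dispatch or finished)."""
        m = re.search(r"cmd='?(\[.*\]|\{.*\})'?", line)
        return json.loads(m.group(1)) if m else None

    # e.g. list every subvolume operation seen in a saved log:
    # for cmd in filter(None, map(parse_audit_cmd, open("mon-audit.log"))):
    #     first = cmd[0] if isinstance(cmd, list) else cmd
    #     if first["prefix"].startswith("fs subvolume"):
    #         print(first["prefix"], first.get("sub_name"))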
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.819108) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936819160, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2772, "num_deletes": 262, "total_data_size": 3792036, "memory_usage": 3966688, "flush_reason": "Manual Compaction"}
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936837214, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2469392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34009, "largest_seqno": 36776, "table_properties": {"data_size": 2458632, "index_size": 6689, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26356, "raw_average_key_size": 22, "raw_value_size": 2435588, "raw_average_value_size": 2053, "num_data_blocks": 283, "num_entries": 1186, "num_filter_entries": 1186, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929823, "oldest_key_time": 1764929823, "file_creation_time": 1764929936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 18144 microseconds, and 7680 cpu microseconds.
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.837251) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2469392 bytes OK
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.837271) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.838984) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.839001) EVENT_LOG_v1 {"time_micros": 1764929936838996, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.839017) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3778898, prev total WAL file size 3778898, number of live WAL files 2.
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.840157) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end)
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2411KB)], [60(16MB)]
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936840240, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 20272746, "oldest_snapshot_seqno": -1}
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 14015 keys, 19000314 bytes, temperature: kUnknown
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936949528, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 19000314, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18918146, "index_size": 46035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 374229, "raw_average_key_size": 26, "raw_value_size": 18677955, "raw_average_value_size": 1332, "num_data_blocks": 1732, "num_entries": 14015, "num_filter_entries": 14015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764929936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.949797) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 19000314 bytes
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.951808) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.4 rd, 173.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 17.0 +0.0 blob) out(18.1 +0.0 blob), read-write-amplify(15.9) write-amplify(7.7) OK, records in: 14566, records dropped: 551 output_compression: NoCompression
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.951863) EVENT_LOG_v1 {"time_micros": 1764929936951842, "job": 36, "event": "compaction_finished", "compaction_time_micros": 109363, "compaction_time_cpu_micros": 56054, "output_level": 6, "num_output_files": 1, "total_output_size": 19000314, "num_input_records": 14566, "num_output_records": 14015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936952600, "job": 36, "event": "table_file_deletion", "file_number": 62}
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936954742, "job": 36, "event": "table_file_deletion", "file_number": 60}
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.839917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.954781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.954787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.954791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.954794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:18:56 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:18:56.954797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
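The rocksdb burst above is one complete manual-compaction cycle inside the mon store: JOB 35 flushes the active memtable (2772 entries, roughly 3.8 MB) to level-0 table #62 and the now-obsolete WAL 000058.log is deleted; JOB 36 then merges #62 with the existing L6 table #60 into #63 (14566 records in, 551 dropped) and removes both inputs, leaving the LSM tree with a single L6 file. The EVENT_LOG_v1 lines carry the same facts as machine-readable JSON; a sketch, assuming this journal format, that extracts them:

    import json
    import re

    def iter_rocksdb_events(lines):
        """Yield the JSON payload of each EVENT_LOG_v1 line (flushes, compactions, file deletions)."""
        for line in lines:
            m = re.search(r"EVENT_LOG_v1 (\{.*\})", line)
            if m:
                yield json.loads(m.group(1))

    # e.g. summarize compactions from a saved mon log:
    # for ev in iter_rocksdb_events(open("ceph-mon.log")):
    #     if ev.get("event") == "compaction_finished":
    #         print(ev["job"], ev["total_output_size"], ev["compaction_time_micros"])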
Dec 05 10:18:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:57.125 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:18:57 np0005546420.localdomain systemd[1]: tmp-crun.hOWzTJ.mount: Deactivated successfully.
Dec 05 10:18:57 np0005546420.localdomain podman[328616]: 2025-12-05 10:18:57.506200608 +0000 UTC m=+0.085271094 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:18:57 np0005546420.localdomain podman[328616]: 2025-12-05 10:18:57.548200495 +0000 UTC m=+0.127270981 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 05 10:18:57 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:18:57 np0005546420.localdomain ceph-mon[298353]: pgmap v691: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 6.6 KiB/s rd, 94 KiB/s wr, 17 op/s
Dec 05 10:18:57 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "32c34ad9-6951-4024-bed1-03795f491ee7", "force": true, "format": "json"}]: dispatch
Dec 05 10:18:58 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 05 10:18:58 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2243297349' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:18:58.284 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:18:58 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:18:58 np0005546420.localdomain podman[328641]: 2025-12-05 10:18:58.516818727 +0000 UTC m=+0.091685221 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:18:58 np0005546420.localdomain podman[328641]: 2025-12-05 10:18:58.533530283 +0000 UTC m=+0.108396767 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 05 10:18:58 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
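The healthcheck entries follow one fixed shape per container: systemd starts a transient unit wrapping `podman healthcheck run <id>`, podman logs a health_status event (healthy here) followed by exec_died for the check process, and the unit deactivates. The bulky config_data label is the edpm_ansible-rendered container definition, not runtime state. A sketch of querying the same health state directly, assuming podman's Docker-compatible inspect template (container ID taken from the ceilometer entry above):

    import subprocess

    cid = "94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110"
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", cid],
        capture_output=True, text=True,
    ).stdout.strip()
    print(status)  # "healthy" while the periodic checks above keep passing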
Dec 05 10:18:58 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "d70cb354-430b-4c26-91ce-f33fbada84ef", "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:18:58 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2243297349' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:18:59 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:18:59 np0005546420.localdomain ceph-mon[298353]: pgmap v692: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 6.6 KiB/s rd, 94 KiB/s wr, 17 op/s
Dec 05 10:18:59 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:18:59 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 05 10:18:59 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 05 10:19:00 np0005546420.localdomain sshd[328614]: Connection reset by authenticating user root 91.202.233.33 port 63250 [preauth]
Dec 05 10:19:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:00 np0005546420.localdomain sshd[328661]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:19:00 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:19:00 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:19:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:02.127 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:02 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "d70cb354-430b-4c26-91ce-f33fbada84ef", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:02 np0005546420.localdomain ceph-mon[298353]: pgmap v693: 177 pgs: 177 active+clean; 422 MiB data, 1.7 GiB used, 40 GiB / 42 GiB avail; 15 KiB/s rd, 21 MiB/s wr, 35 op/s
Dec 05 10:19:02 np0005546420.localdomain sshd[328661]: Connection reset by authenticating user root 91.202.233.33 port 25038 [preauth]
Dec 05 10:19:03 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:19:03 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:19:03 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:19:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:03.335 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:19:04.136 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:19:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:19:04.137 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:19:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:19:04.137 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:19:04 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:19:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1742460817' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:19:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1742460817' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:19:04 np0005546420.localdomain ceph-mon[298353]: pgmap v694: 177 pgs: 177 active+clean; 446 MiB data, 1.7 GiB used, 40 GiB / 42 GiB avail; 58 KiB/s rd, 24 MiB/s wr, 99 op/s
Dec 05 10:19:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fdc84630-5588-464e-88a7-bc4004dd5a8f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:19:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fdc84630-5588-464e-88a7-bc4004dd5a8f", "format": "json"}]: dispatch
Dec 05 10:19:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:19:06 np0005546420.localdomain ceph-mon[298353]: pgmap v695: 177 pgs: 177 active+clean; 446 MiB data, 1.7 GiB used, 40 GiB / 42 GiB avail; 58 KiB/s rd, 24 MiB/s wr, 99 op/s
Dec 05 10:19:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:07.130 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:07 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:19:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:19:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:19:07 np0005546420.localdomain podman[328663]: 2025-12-05 10:19:07.528107719 +0000 UTC m=+0.103707202 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:19:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:19:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "format": "json"}]: dispatch
Dec 05 10:19:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:19:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:19:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 05 10:19:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 05 10:19:07 np0005546420.localdomain systemd[1]: tmp-crun.1VRo71.mount: Deactivated successfully.
Dec 05 10:19:07 np0005546420.localdomain podman[328664]: 2025-12-05 10:19:07.601158384 +0000 UTC m=+0.169131492 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:19:07 np0005546420.localdomain podman[328664]: 2025-12-05 10:19:07.605769027 +0000 UTC m=+0.173742145 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:19:07 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:19:07 np0005546420.localdomain podman[328663]: 2025-12-05 10:19:07.666833212 +0000 UTC m=+0.242432725 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:19:07 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:19:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:08.338 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:08 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:19:08 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:19:08 np0005546420.localdomain ceph-mon[298353]: pgmap v696: 177 pgs: 177 active+clean; 778 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 87 KiB/s rd, 49 MiB/s wr, 155 op/s
Dec 05 10:19:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fdc84630-5588-464e-88a7-bc4004dd5a8f", "format": "json"}]: dispatch
Dec 05 10:19:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fdc84630-5588-464e-88a7-bc4004dd5a8f", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:19:10 np0005546420.localdomain podman[328702]: 2025-12-05 10:19:10.575691077 +0000 UTC m=+0.143251805 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 05 10:19:10 np0005546420.localdomain podman[328702]: 2025-12-05 10:19:10.622527624 +0000 UTC m=+0.190088372 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:19:10 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:19:10 np0005546420.localdomain ceph-mon[298353]: pgmap v697: 177 pgs: 177 active+clean; 778 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 80 KiB/s rd, 47 MiB/s wr, 141 op/s
Dec 05 10:19:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:19:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:19:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:19:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "7813f9e3-09ab-45cd-bf28-79a3478841e1", "format": "json"}]: dispatch
Dec 05 10:19:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:19:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:12.133 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:19:12 np0005546420.localdomain ceph-mon[298353]: pgmap v698: 177 pgs: 177 active+clean; 1.0 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 80 KiB/s rd, 72 MiB/s wr, 149 op/s
Dec 05 10:19:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe", "format": "json"}]: dispatch
Dec 05 10:19:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:19:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:13.344 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:13 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:19:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:19:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 05 10:19:13 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 05 10:19:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:19:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:19:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "1938e467-a994-44d6-8f8e-b2434d6c8af6", "format": "json"}]: dispatch
Dec 05 10:19:14 np0005546420.localdomain ceph-mon[298353]: pgmap v699: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 110 KiB/s rd, 60 MiB/s wr, 196 op/s
Dec 05 10:19:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe", "format": "json"}]: dispatch
Dec 05 10:19:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:15 np0005546420.localdomain ceph-mon[298353]: pgmap v700: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 74 KiB/s rd, 58 MiB/s wr, 140 op/s
Dec 05 10:19:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:17.169 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:19:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:19:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:19:17 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:19:17 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:19:17 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:19:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:19:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:19:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e264 e264: 6 total, 6 up, 6 in
Dec 05 10:19:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:19:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18783 "" "Go-http-client/1.1"
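
The two podman[240363] GETs above are the podman_exporter scraping the libpod REST API; its config further down in this log points CONTAINER_HOST at unix:///run/podman/podman.sock. A self-contained sketch of the same container-list query from Python, assuming local (root) access to that socket; UnixHTTPConnection is a hypothetical helper, not a stdlib class:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that dials a unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
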
Dec 05 10:19:17 np0005546420.localdomain sudo[328721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:19:17 np0005546420.localdomain sudo[328721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:19:17 np0005546420.localdomain sudo[328721]: pam_unix(sudo:session): session closed for user root
Dec 05 10:19:18 np0005546420.localdomain sudo[328739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:19:18 np0005546420.localdomain sudo[328739]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:19:18 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "1938e467-a994-44d6-8f8e-b2434d6c8af6_b24687ce-b711-4104-a308-ba8dc02856cd", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:18 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "1938e467-a994-44d6-8f8e-b2434d6c8af6", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:18 np0005546420.localdomain ceph-mon[298353]: osdmap e264: 6 total, 6 up, 6 in
Dec 05 10:19:18 np0005546420.localdomain ceph-mon[298353]: pgmap v702: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 46 MiB/s wr, 180 op/s
Dec 05 10:19:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:18.384 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:18 np0005546420.localdomain sudo[328739]: pam_unix(sudo:session): session closed for user root
Dec 05 10:19:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:19:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:19:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:19:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:19:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:19:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:19:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:19:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:19:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:19:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:19:19 np0005546420.localdomain sudo[328788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:19:19 np0005546420.localdomain sudo[328788]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:19:19 np0005546420.localdomain sudo[328788]: pam_unix(sudo:session): session closed for user root
Dec 05 10:19:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:19:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:19:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:19:19 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:19:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e265 e265: 6 total, 6 up, 6 in
Dec 05 10:19:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:19.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:19 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
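
The mds asok_command above is the tail end of the deauthorize/evict pair: the mgr asks the MDS to drop any client session whose cephx name and mount root match the share being revoked. A hedged equivalent issued via "ceph tell"; the auth_name= filter spelling is an assumption read off the asok line, so verify it against your Ceph release before relying on it:

    import subprocess

    # Evict sessions for the cephx user seen above; the tell target is the
    # daemon name from the log line ("mds.np0005546420.eqhasr").
    subprocess.run(
        ["ceph", "tell", "mds.mds.np0005546420.eqhasr",
         "session", "evict", "auth_name=alice bob"],
        check=True)
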
Dec 05 10:19:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e266 e266: 6 total, 6 up, 6 in
Dec 05 10:19:20 np0005546420.localdomain ceph-mon[298353]: osdmap e265: 6 total, 6 up, 6 in
Dec 05 10:19:20 np0005546420.localdomain ceph-mon[298353]: pgmap v704: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 21 MiB/s wr, 213 op/s
Dec 05 10:19:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:19:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 05 10:19:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 05 10:19:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:20.896 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:20.896 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:19:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:19:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:19:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "f992b110-5225-429b-a9ae-546723768646", "format": "json"}]: dispatch
Dec 05 10:19:21 np0005546420.localdomain ceph-mon[298353]: osdmap e266: 6 total, 6 up, 6 in
Dec 05 10:19:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:19:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e267 e267: 6 total, 6 up, 6 in
Dec 05 10:19:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:22.172 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:19:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:19:22 np0005546420.localdomain podman[328807]: 2025-12-05 10:19:22.518330777 +0000 UTC m=+0.090957180 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container)
Dec 05 10:19:22 np0005546420.localdomain podman[328807]: 2025-12-05 10:19:22.538270583 +0000 UTC m=+0.110896996 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.expose-services=)
Dec 05 10:19:22 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:19:22 np0005546420.localdomain ceph-mon[298353]: pgmap v706: 177 pgs: 177 active+clean; 212 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 15 MiB/s wr, 169 op/s
Dec 05 10:19:22 np0005546420.localdomain ceph-mon[298353]: osdmap e267: 6 total, 6 up, 6 in
Dec 05 10:19:22 np0005546420.localdomain systemd[1]: tmp-crun.JEkAIA.mount: Deactivated successfully.
Dec 05 10:19:22 np0005546420.localdomain podman[328808]: 2025-12-05 10:19:22.638156918 +0000 UTC m=+0.209900034 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:19:22 np0005546420.localdomain podman[328808]: 2025-12-05 10:19:22.650717946 +0000 UTC m=+0.222461152 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:19:22 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:19:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:23.386 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:19:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:19:23 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:19:24 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:19:24 np0005546420.localdomain ceph-mon[298353]: pgmap v708: 177 pgs: 177 active+clean; 212 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 60 KiB/s wr, 39 op/s
Dec 05 10:19:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:24.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:24.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:19:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:24.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:19:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:24.926 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
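
The steady stream of "Running periodic task ComputeManager._*" lines comes from oslo.service: each task is a decorated method on a PeriodicTasks subclass, and run_periodic_tasks() fires whichever ones are due. A minimal sketch of that API, assuming oslo.service and oslo.config are installed; the class, method body, and 60-second spacing are illustrative, not nova code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # Runs at most once per spacing interval each time the caller ticks
        # run_periodic_tasks(); nova registers its cache healer this way.
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            print("rebuilding the list of instances to heal")

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)
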
Dec 05 10:19:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e268 e268: 6 total, 6 up, 6 in
Dec 05 10:19:25 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "f992b110-5225-429b-a9ae-546723768646_06c08141-b346-463f-8e34-5d1400cfd9b2", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:25 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "f992b110-5225-429b-a9ae-546723768646", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:25 np0005546420.localdomain ceph-mon[298353]: osdmap e268: 6 total, 6 up, 6 in
Dec 05 10:19:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:25.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e269 e269: 6 total, 6 up, 6 in
Dec 05 10:19:26 np0005546420.localdomain ceph-mon[298353]: pgmap v710: 177 pgs: 177 active+clean; 212 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 60 KiB/s wr, 39 op/s
Dec 05 10:19:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2483581321' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:19:26 np0005546420.localdomain ceph-mon[298353]: osdmap e269: 6 total, 6 up, 6 in
Dec 05 10:19:26 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:19:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:19:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 05 10:19:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 05 10:19:26 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:19:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:26.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:27.174 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:19:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "format": "json"}]: dispatch
Dec 05 10:19:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:19:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:19:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/4057956785' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:19:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3496270687' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:19:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3496270687' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:19:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:27.866 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:28.411 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:19:28 np0005546420.localdomain podman[328852]: 2025-12-05 10:19:28.527725028 +0000 UTC m=+0.091038543 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller)
Dec 05 10:19:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:19:28 np0005546420.localdomain podman[328852]: 2025-12-05 10:19:28.602668323 +0000 UTC m=+0.165981838 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec 05 10:19:28 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:19:28 np0005546420.localdomain ceph-mon[298353]: pgmap v712: 177 pgs: 177 active+clean; 212 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 92 KiB/s wr, 77 op/s
Dec 05 10:19:28 np0005546420.localdomain podman[328877]: 2025-12-05 10:19:28.693672863 +0000 UTC m=+0.092100345 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:19:28 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e270 e270: 6 total, 6 up, 6 in
Dec 05 10:19:28 np0005546420.localdomain podman[328877]: 2025-12-05 10:19:28.709467052 +0000 UTC m=+0.107894534 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute)
Dec 05 10:19:28 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:19:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e271 e271: 6 total, 6 up, 6 in
Dec 05 10:19:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "d750bd56-72a3-4675-bc8c-db3ac4a6da80", "format": "json"}]: dispatch
Dec 05 10:19:29 np0005546420.localdomain ceph-mon[298353]: osdmap e270: 6 total, 6 up, 6 in
Dec 05 10:19:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:30 np0005546420.localdomain ceph-mon[298353]: pgmap v714: 177 pgs: 177 active+clean; 212 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 76 KiB/s wr, 50 op/s
Dec 05 10:19:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "snap_name": "306fbfec-6fb1-478f-9901-cfcf09cbfd8b", "format": "json"}]: dispatch
Dec 05 10:19:30 np0005546420.localdomain ceph-mon[298353]: osdmap e271: 6 total, 6 up, 6 in
Dec 05 10:19:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:19:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:19:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:19:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:19:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/164294145' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:19:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/164294145' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:19:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:30.866 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:30.887 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:30.888 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:19:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:30.908 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 10:19:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e272 e272: 6 total, 6 up, 6 in
Dec 05 10:19:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:31.892 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:32.177 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:32 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "d750bd56-72a3-4675-bc8c-db3ac4a6da80_2a061474-d04d-4e89-8efe-a6fde2e36128", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:32 np0005546420.localdomain ceph-mon[298353]: osdmap e272: 6 total, 6 up, 6 in
Dec 05 10:19:32 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "d750bd56-72a3-4675-bc8c-db3ac4a6da80", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:32 np0005546420.localdomain ceph-mon[298353]: pgmap v717: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 226 KiB/s wr, 122 op/s
Dec 05 10:19:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e273 e273: 6 total, 6 up, 6 in
Dec 05 10:19:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:32.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:33 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:19:33Z|00391|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 05 10:19:33 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:19:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2883523459' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:19:33 np0005546420.localdomain ceph-mon[298353]: osdmap e273: 6 total, 6 up, 6 in
Dec 05 10:19:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:19:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 05 10:19:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 05 10:19:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/635138071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:19:33 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e274 e274: 6 total, 6 up, 6 in
Dec 05 10:19:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:33.435 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:33.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:33.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:33.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:19:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:19:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:19:34 np0005546420.localdomain ceph-mon[298353]: osdmap e274: 6 total, 6 up, 6 in
Dec 05 10:19:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "snap_name": "306fbfec-6fb1-478f-9901-cfcf09cbfd8b_fe3defdc-ce29-4b96-bb35-17832d73b057", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "snap_name": "306fbfec-6fb1-478f-9901-cfcf09cbfd8b", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:34 np0005546420.localdomain ceph-mon[298353]: pgmap v720: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 188 KiB/s wr, 152 op/s
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.064 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.085 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.086 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.086 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
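
The acquire/release pair above is oslo.concurrency's named-lock helper serializing access to the resource tracker's "compute_resources" state; the DEBUG lines are emitted by the decorator itself. A minimal sketch of the same primitive, assuming oslo.concurrency is installed (function name and body are illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # The decorator holds an in-process lock for the duration of the
        # call, producing acquired/released log lines like those above.
        print("holding the compute_resources lock")

    clean_compute_node_cache()
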
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.087 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.087 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:19:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:35 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "84a64188-bba6-4439-8dec-7b4368042935", "format": "json"}]: dispatch
Dec 05 10:19:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1417923523' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:19:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1417923523' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:19:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:19:35 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2849704650' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.514 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
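
Before reporting resources, the audit shells out to ceph df (the exact command is in the line above) to size the RBD backend; the free_disk figure two lines below comes from this JSON. A sketch of the same call and parse; the stats key names are an assumption about the ceph df JSON schema, so check them against your release:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(raw)["stats"]                   # assumed key
    free_gb = stats["total_avail_bytes"] / 1024 ** 3   # assumed key
    print(f"free: {free_gb:.2f} GiB")
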
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.760 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.762 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11425MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.763 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.763 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.984 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:19:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:35.985 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:19:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:36.080 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:19:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e275 e275: 6 total, 6 up, 6 in
Dec 05 10:19:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2849704650' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:19:36 np0005546420.localdomain ceph-mon[298353]: pgmap v721: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 129 KiB/s wr, 105 op/s
Dec 05 10:19:36 np0005546420.localdomain ceph-mon[298353]: osdmap e275: 6 total, 6 up, 6 in
Dec 05 10:19:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:19:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:19:36 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1922865955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:19:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:36.547 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:19:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:36.554 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:19:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:36.570 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
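
The inventory dict above is what placement actually schedules against: per resource class, capacity is (total - reserved) * allocation_ratio, so this host advertises 128 VCPU (8 x 16.0), 15226 MB of RAM (15738 - 512), and 40 GB of disk (41 - 1). The same arithmetic as a worked snippet, values copied from the log line:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 41, "reserved": 1, "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0
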
Dec 05 10:19:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:36.572 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:19:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:36.572 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1209624616' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1209624616' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:19:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:37.179 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e276 e276: 6 total, 6 up, 6 in
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
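The `fs subvolume authorize` dispatch above is a ceph-mgr "volumes" command: the mgr itself then issues the `auth get-or-create` for client.alice with the mds/osd/mon caps shown in the next two entries. A sketch of the equivalent CLI call (manila additionally passes tenant_id through the mgr interface, omitted here; assumes an admin-capable ceph CLI on this host):

```python
# Sketch: CLI equivalent of the mgr command dispatched above; prints the
# generated secret key for client.alice on success.
import subprocess

key = subprocess.check_output(
    ['ceph', 'fs', 'subvolume', 'authorize', 'cephfs',
     '66fc337c-1267-4a35-81f5-115366d33363', 'alice',
     '--access_level=r'],
    text=True)
print(key.strip())
```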
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1922865955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "format": "json"}]: dispatch
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1209624616' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1209624616' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:19:37 np0005546420.localdomain ceph-mon[298353]: osdmap e276: 6 total, 6 up, 6 in
Dec 05 10:19:38 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:19:38 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1546384509' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:19:38 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:19:38 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1546384509' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:19:38 np0005546420.localdomain ceph-mon[298353]: pgmap v724: 177 pgs: 1 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 170 active+clean; 213 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 100 KiB/s wr, 110 op/s
Dec 05 10:19:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1546384509' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:19:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1546384509' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:19:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:19:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:19:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:38.467 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:38 np0005546420.localdomain systemd[1]: tmp-crun.XgThh2.mount: Deactivated successfully.
Dec 05 10:19:38 np0005546420.localdomain podman[328941]: 2025-12-05 10:19:38.555650284 +0000 UTC m=+0.126358343 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:19:38 np0005546420.localdomain podman[328941]: 2025-12-05 10:19:38.591040057 +0000 UTC m=+0.161748056 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:19:38 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:19:38 np0005546420.localdomain podman[328940]: 2025-12-05 10:19:38.644925552 +0000 UTC m=+0.218602503 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:19:38 np0005546420.localdomain podman[328940]: 2025-12-05 10:19:38.678754247 +0000 UTC m=+0.252431188 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:19:38 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
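The healthcheck pattern above repeats throughout this log: systemd starts a transient `/usr/bin/podman healthcheck run <id>` unit, podman records a health_status event and then exec_died, and the unit deactivates. A sketch of the same probe by container name (ovn_metadata_agent is taken from the log; requires podman on the host):

```python
# Sketch: what each transient healthcheck unit effectively runs.
# `podman healthcheck run` exits 0 when the container's check passes.
import subprocess

result = subprocess.run(
    ['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
    capture_output=True, text=True)
print('healthy' if result.returncode == 0
      else result.stdout or result.stderr)
```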
Dec 05 10:19:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:19:39.280 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:19:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:39.281 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:19:39.282 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:19:39 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:19:39Z|00392|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0
Dec 05 10:19:39 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:19:39Z|00393|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0
Dec 05 10:19:39 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:19:39Z|00394|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0
Dec 05 10:19:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:39.353 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:39.369 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "84a64188-bba6-4439-8dec-7b4368042935_04f749f5-630f-40c9-aae2-87b186aa8f80", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:39 np0005546420.localdomain dnsmasq[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/addn_hosts - 0 addresses
Dec 05 10:19:39 np0005546420.localdomain dnsmasq-dhcp[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/host
Dec 05 10:19:39 np0005546420.localdomain podman[328999]: 2025-12-05 10:19:39.523308053 +0000 UTC m=+0.062874542 container kill e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63bd7005-10d8-4e7e-8ca2-cb537d610fb2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:19:39 np0005546420.localdomain dnsmasq-dhcp[328486]: read /var/lib/neutron/dhcp/63bd7005-10d8-4e7e-8ca2-cb537d610fb2/opts
Dec 05 10:19:39 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:19:39 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:19:39Z|00395|binding|INFO|Releasing lport 4c32b286-fdcf-4caa-8843-0ed2f77650bd from this chassis (sb_readonly=0)
Dec 05 10:19:39 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:19:39Z|00396|binding|INFO|Setting lport 4c32b286-fdcf-4caa-8843-0ed2f77650bd down in Southbound
Dec 05 10:19:39 np0005546420.localdomain kernel: device tap4c32b286-fd left promiscuous mode
Dec 05 10:19:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:39.794 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:19:39.802 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-63bd7005-10d8-4e7e-8ca2-cb537d610fb2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-63bd7005-10d8-4e7e-8ca2-cb537d610fb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6e8880ab10e4b26b0074c6f9b06aca3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd1f52a5-1e4c-40b2-bb9a-c75028fc42e8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=4c32b286-fdcf-4caa-8843-0ed2f77650bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:19:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:19:39.804 159503 INFO neutron.agent.ovn.metadata.agent [-] Port 4c32b286-fdcf-4caa-8843-0ed2f77650bd in datapath 63bd7005-10d8-4e7e-8ca2-cb537d610fb2 unbound from our chassis
Dec 05 10:19:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:19:39.807 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 63bd7005-10d8-4e7e-8ca2-cb537d610fb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
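The unbind decision in the two agent lines above reduces to comparing the old and new chassis columns of the Port_Binding row: the old row had our chassis set, the update cleared it. A plain-Python restatement of that predicate (not the agent's actual event class):

```python
# Sketch: "unbound from our chassis" as seen in the Port_Binding update above,
# where old.chassis was populated and the new row's chassis list is empty.
def port_unbound(new_chassis, old_chassis):
    return bool(old_chassis) and not new_chassis

print(port_unbound(new_chassis=[],
                   old_chassis=['np0005546420.localdomain']))  # True
```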
Dec 05 10:19:39 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:19:39.809 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[ac373efe-256b-4470-90d3-eedaf9950db7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:19:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:39.815 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:40 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "84a64188-bba6-4439-8dec-7b4368042935", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:40 np0005546420.localdomain ceph-mon[298353]: pgmap v725: 177 pgs: 1 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 170 active+clean; 213 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 84 KiB/s wr, 93 op/s
Dec 05 10:19:40 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 05 10:19:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 05 10:19:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 05 10:19:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 05 10:19:40 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
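The revoke sequence above runs in a fixed order: deauthorize removes the client.alice entity (the mgr's `auth rm` in the preceding entries), then evict drops any MDS sessions still using the share (the earlier ceph-mds asok_command line shows the matching eviction filter). A sketch of the two CLI equivalents, assuming admin credentials:

```python
# Sketch: deauthorize first, then evict live sessions, mirroring the order
# of the two mgr dispatches logged above.
import subprocess

sub = '66fc337c-1267-4a35-81f5-115366d33363'
for verb in ('deauthorize', 'evict'):
    subprocess.check_call(
        ['ceph', 'fs', 'subvolume', verb, 'cephfs', sub, 'alice'])
```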
Dec 05 10:19:41 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e277 e277: 6 total, 6 up, 6 in
Dec 05 10:19:41 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:19:41.284 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:19:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:19:41 np0005546420.localdomain dnsmasq[328486]: exiting on receipt of SIGTERM
Dec 05 10:19:41 np0005546420.localdomain podman[329037]: 2025-12-05 10:19:41.502105626 +0000 UTC m=+0.065804854 container kill e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63bd7005-10d8-4e7e-8ca2-cb537d610fb2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:19:41 np0005546420.localdomain systemd[1]: libpod-e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322.scope: Deactivated successfully.
Dec 05 10:19:41 np0005546420.localdomain podman[329035]: 2025-12-05 10:19:41.567380082 +0000 UTC m=+0.138538121 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:19:41 np0005546420.localdomain podman[329062]: 2025-12-05 10:19:41.580680323 +0000 UTC m=+0.065071802 container died e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63bd7005-10d8-4e7e-8ca2-cb537d610fb2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 10:19:41 np0005546420.localdomain podman[329062]: 2025-12-05 10:19:41.614697833 +0000 UTC m=+0.099089262 container cleanup e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63bd7005-10d8-4e7e-8ca2-cb537d610fb2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:19:41 np0005546420.localdomain systemd[1]: libpod-conmon-e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322.scope: Deactivated successfully.
Dec 05 10:19:41 np0005546420.localdomain podman[329035]: 2025-12-05 10:19:41.628734817 +0000 UTC m=+0.199892856 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:19:41 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:19:41 np0005546420.localdomain podman[329064]: 2025-12-05 10:19:41.713145384 +0000 UTC m=+0.188330638 container remove e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-63bd7005-10d8-4e7e-8ca2-cb537d610fb2, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:19:41 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:19:41.737 262769 INFO neutron.agent.dhcp.agent [None req-dad88655-7f8e-43d5-99a4-295d8b75732f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:19:41 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:19:41.738 262769 INFO neutron.agent.dhcp.agent [None req-dad88655-7f8e-43d5-99a4-295d8b75732f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:19:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:41.822 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:42.211 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:42 np0005546420.localdomain ceph-mon[298353]: osdmap e277: 6 total, 6 up, 6 in
Dec 05 10:19:42 np0005546420.localdomain ceph-mon[298353]: pgmap v727: 177 pgs: 177 active+clean; 214 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 202 KiB/s wr, 139 op/s
Dec 05 10:19:42 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e278 e278: 6 total, 6 up, 6 in
Dec 05 10:19:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-4cb09b2b36be5d7d55f0b6fbba0a9e8a863a42704f90d532d63550a1f362adf8-merged.mount: Deactivated successfully.
Dec 05 10:19:42 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e312cede6064c84ea3d17342a19e4f35fd7ca47a074499e05988346acd2db322-userdata-shm.mount: Deactivated successfully.
Dec 05 10:19:42 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d63bd7005\x2d10d8\x2d4e7e\x2d8ca2\x2dcb537d610fb2.mount: Deactivated successfully.
Dec 05 10:19:43 np0005546420.localdomain ceph-mon[298353]: osdmap e278: 6 total, 6 up, 6 in
Dec 05 10:19:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:19:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:19:43 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:19:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:43.491 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:44 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "47f251f3-cf59-49da-9646-8d4921af4c7f", "format": "json"}]: dispatch
Dec 05 10:19:44 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:19:44 np0005546420.localdomain ceph-mon[298353]: pgmap v729: 177 pgs: 177 active+clean; 214 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 111 KiB/s wr, 62 op/s
Dec 05 10:19:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:46 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e279 e279: 6 total, 6 up, 6 in
Dec 05 10:19:46 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:19:46 np0005546420.localdomain ceph-mon[298353]: pgmap v730: 177 pgs: 177 active+clean; 214 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 87 KiB/s wr, 48 op/s
Dec 05 10:19:46 np0005546420.localdomain ceph-mon[298353]: osdmap e279: 6 total, 6 up, 6 in
Dec 05 10:19:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:19:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 05 10:19:46 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 05 10:19:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:46.896 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:19:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:19:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:19:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:19:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:19:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:47.259 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:19:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18292 "" "Go-http-client/1.1"
Dec 05 10:19:47 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:19:47 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:19:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:48.527 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:48 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "47f251f3-cf59-49da-9646-8d4921af4c7f_8cad2d4c-611e-441b-8900-64acdbf02566", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:48 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "47f251f3-cf59-49da-9646-8d4921af4c7f", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:48 np0005546420.localdomain ceph-mon[298353]: pgmap v732: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 209 KiB/s wr, 68 op/s
Dec 05 10:19:48 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:19:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:19:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:19:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:19:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:19:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:19:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:19:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:19:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:19:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:19:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
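These exporter errors are expected on a compute node: ovn-northd runs only on controllers, so its control socket never exists here, and no userspace datapath means the pmd-perf/pmd-rxq calls have nothing to query. A sketch of the kind of lookup that fails, using the conventional socket paths (an assumption; the exporter's exact search logic is not shown in the log):

```python
# Sketch: probe for the ovs/ovn control sockets the exporter complains about.
# Paths are the conventional ones mounted into the exporter container.
import glob

for pattern in ('/run/openvswitch/ovsdb-server.*.ctl',
                '/run/ovn/ovn-northd.*.ctl'):
    matches = glob.glob(pattern)
    print(pattern, '->', matches or 'no control socket found')
```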
Dec 05 10:19:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3018c34f-51ef-49c2-9a51-3d014bf9195b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:19:49 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3018c34f-51ef-49c2-9a51-3d014bf9195b", "format": "json"}]: dispatch
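The pair of dispatches above is a share create: a 1 GiB (1073741824-byte) subvolume in an isolated RADOS namespace, followed by getpath for the export location. CLI equivalents of the two mgr commands, assuming admin ceph CLI access:

```python
# Sketch: create the subvolume logged above, then resolve its export path
# (getpath returns something like /volumes/_nogroup/<sub_name>/<uuid>).
import subprocess

sub = '3018c34f-51ef-49c2-9a51-3d014bf9195b'
subprocess.check_call(
    ['ceph', 'fs', 'subvolume', 'create', 'cephfs', sub,
     '--size', str(1024 ** 3), '--namespace-isolated', '--mode', '0755'])
path = subprocess.check_output(
    ['ceph', 'fs', 'subvolume', 'getpath', 'cephfs', sub],
    text=True).strip()
print(path)
```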
Dec 05 10:19:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:19:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:19:49 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:19:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:50 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:19:50 np0005546420.localdomain ceph-mon[298353]: pgmap v733: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 78 KiB/s wr, 22 op/s
Dec 05 10:19:51 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e280 e280: 6 total, 6 up, 6 in
Dec 05 10:19:52 np0005546420.localdomain ceph-mon[298353]: osdmap e280: 6 total, 6 up, 6 in
Dec 05 10:19:52 np0005546420.localdomain ceph-mon[298353]: pgmap v735: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 155 KiB/s wr, 12 op/s
Dec 05 10:19:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:52.302 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:52 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e281 e281: 6 total, 6 up, 6 in
Dec 05 10:19:52 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:19:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "7813f9e3-09ab-45cd-bf28-79a3478841e1_4cfd1cec-b884-48b2-9066-449bf106ef78", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "7813f9e3-09ab-45cd-bf28-79a3478841e1", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:53 np0005546420.localdomain ceph-mon[298353]: osdmap e281: 6 total, 6 up, 6 in
Dec 05 10:19:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3018c34f-51ef-49c2-9a51-3d014bf9195b", "format": "json"}]: dispatch
Dec 05 10:19:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3018c34f-51ef-49c2-9a51-3d014bf9195b", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 05 10:19:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 05 10:19:53 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 05 10:19:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:19:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:19:53 np0005546420.localdomain systemd[1]: Starting dnf makecache...
Dec 05 10:19:53 np0005546420.localdomain podman[329100]: 2025-12-05 10:19:53.520113964 +0000 UTC m=+0.091768646 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:19:53 np0005546420.localdomain podman[329100]: 2025-12-05 10:19:53.555368603 +0000 UTC m=+0.127023295 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:19:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:53.553 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:53 np0005546420.localdomain podman[329099]: 2025-12-05 10:19:53.572072589 +0000 UTC m=+0.145781604 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.component=ubi9-minimal-container, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec 05 10:19:53 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:19:53 np0005546420.localdomain podman[329099]: 2025-12-05 10:19:53.589369603 +0000 UTC m=+0.163078648 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:19:53 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:19:53 np0005546420.localdomain dnf[329101]: Updating Subscription Management repositories.
Dec 05 10:19:53 np0005546420.localdomain dnf[329101]: Unable to read consumer identity
Dec 05 10:19:53 np0005546420.localdomain dnf[329101]: This system is not registered with an entitlement server. You can use subscription-manager to register.
Dec 05 10:19:53 np0005546420.localdomain dnf[329101]: delorean-openstack-barbican-42b4c41831408a8e323  49 kB/s | 3.0 kB     00:00
Dec 05 10:19:53 np0005546420.localdomain dnf[329101]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7  69 kB/s | 3.0 kB     00:00
Dec 05 10:19:53 np0005546420.localdomain dnf[329101]: delorean-openstack-cinder-1c00d6490d88e436f26ef  76 kB/s | 3.0 kB     00:00
Dec 05 10:19:53 np0005546420.localdomain dnf[329101]: delorean-python-stevedore-c4acc5639fd2329372142  74 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-python-cloudkitty-tests-tempest-2c80f8  70 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-os-net-config-d0cedbdb788d43e5c7551df5  72 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  68 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-python-designate-tests-tempest-347fdbc  72 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-openstack-glance-1fd12c29b339f30fe823e  67 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  75 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-openstack-manila-3c01b7181572c95dac462  72 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-python-whitebox-neutron-tests-tempest-  71 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:19:54 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 05 10:19:54 np0005546420.localdomain ceph-mon[298353]: pgmap v737: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 562 B/s rd, 170 KiB/s wr, 13 op/s
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-openstack-octavia-ba397f07a7331190208c  68 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-openstack-watcher-c014f81a8647287f6dcc  38 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-ansible-config_template-5ccaa22121a7ff  53 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158  56 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-openstack-swift-dc98a8463506ac520c469a  60 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-python-tempestconf-8515371b7cceebd4282  77 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: delorean-openstack-heat-ui-013accbfd179753bc3f0  74 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: dlrn-antelope-testing                            76 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: dlrn-antelope-build-deps                         79 kB/s | 3.0 kB     00:00
Dec 05 10:19:54 np0005546420.localdomain dnf[329101]: centos9-rabbitmq                                 27 kB/s | 3.0 kB     00:00
Dec 05 10:19:55 np0005546420.localdomain dnf[329101]: centos9-storage                                  23 kB/s | 3.0 kB     00:00
Dec 05 10:19:55 np0005546420.localdomain dnf[329101]: centos9-opstools                                 24 kB/s | 3.0 kB     00:00
Dec 05 10:19:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:19:55 np0005546420.localdomain dnf[329101]: NFV SIG OpenvSwitch                              24 kB/s | 3.0 kB     00:00
Dec 05 10:19:55 np0005546420.localdomain dnf[329101]: repo-setup-centos-appstream                      50 kB/s | 4.4 kB     00:00
Dec 05 10:19:55 np0005546420.localdomain dnf[329101]: repo-setup-centos-baseos                         65 kB/s | 3.9 kB     00:00
Dec 05 10:19:55 np0005546420.localdomain dnf[329101]: repo-setup-centos-highavailability               18 kB/s | 3.9 kB     00:00
Dec 05 10:19:56 np0005546420.localdomain dnf[329101]: repo-setup-centos-powertools                     14 kB/s | 4.3 kB     00:00
Dec 05 10:19:56 np0005546420.localdomain dnf[329101]: Extra Packages for Enterprise Linux 9 - x86_64  162 kB/s |  30 kB     00:00
Dec 05 10:19:56 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "format": "json"}]: dispatch
Dec 05 10:19:56 np0005546420.localdomain ceph-mon[298353]: pgmap v738: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 77 KiB/s wr, 6 op/s
Dec 05 10:19:56 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "force": true, "format": "json"}]: dispatch
Dec 05 10:19:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:19:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:19:56 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:19:56 np0005546420.localdomain dnf[329101]: Metadata cache created.
Dec 05 10:19:56 np0005546420.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 05 10:19:56 np0005546420.localdomain systemd[1]: Finished dnf makecache.
Dec 05 10:19:56 np0005546420.localdomain systemd[1]: dnf-makecache.service: Consumed 2.214s CPU time.
Dec 05 10:19:57 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e282 e282: 6 total, 6 up, 6 in
Dec 05 10:19:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:57.349 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:57 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:19:57 np0005546420.localdomain ceph-mon[298353]: osdmap e282: 6 total, 6 up, 6 in
Dec 05 10:19:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:19:58.598 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:19:58 np0005546420.localdomain ceph-mon[298353]: pgmap v740: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 982 B/s rd, 236 KiB/s wr, 17 op/s
Dec 05 10:19:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:19:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:19:59 np0005546420.localdomain podman[329181]: 2025-12-05 10:19:59.523672335 +0000 UTC m=+0.096862244 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:19:59 np0005546420.localdomain podman[329181]: 2025-12-05 10:19:59.568430987 +0000 UTC m=+0.141620906 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 05 10:19:59 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:19:59 np0005546420.localdomain podman[329182]: 2025-12-05 10:19:59.573739531 +0000 UTC m=+0.141338687 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true)
Dec 05 10:19:59 np0005546420.localdomain podman[329182]: 2025-12-05 10:19:59.655034242 +0000 UTC m=+0.222633378 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 05 10:19:59 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:19:59 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:20:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:00 np0005546420.localdomain ceph-mon[298353]: pgmap v741: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 108 KiB/s wr, 8 op/s
Dec 05 10:20:00 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:20:00 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 05 10:20:00 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 05 10:20:00 np0005546420.localdomain ceph-mon[298353]: overall HEALTH_OK
Dec 05 10:20:01 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e283 e283: 6 total, 6 up, 6 in
Dec 05 10:20:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:20:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:20:01 np0005546420.localdomain ceph-mon[298353]: osdmap e283: 6 total, 6 up, 6 in
Dec 05 10:20:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:02.395 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:02 np0005546420.localdomain ceph-mon[298353]: pgmap v743: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 151 KiB/s wr, 11 op/s
Dec 05 10:20:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:03.641 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:03 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:20:03 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:20:03 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:20:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1460072176' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:20:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1460072176' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:20:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:20:04.137 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:20:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:20:04.138 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:20:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:20:04.138 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:20:04 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 05 10:20:04 np0005546420.localdomain ceph-mon[298353]: pgmap v744: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 151 KiB/s wr, 12 op/s
Dec 05 10:20:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:05 np0005546420.localdomain ceph-mon[298353]: pgmap v745: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 372 B/s rd, 43 KiB/s wr, 4 op/s
Dec 05 10:20:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 e284: 6 total, 6 up, 6 in
Dec 05 10:20:06 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:20:07 np0005546420.localdomain ceph-mon[298353]: osdmap e284: 6 total, 6 up, 6 in
Dec 05 10:20:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:20:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 05 10:20:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 05 10:20:07 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 05 10:20:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 05 10:20:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:07.431 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:08 np0005546420.localdomain ceph-mon[298353]: pgmap v747: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 94 KiB/s wr, 7 op/s
Dec 05 10:20:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:08.664 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:08.731 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:20:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:20:09 np0005546420.localdomain systemd[1]: tmp-crun.gwhg7W.mount: Deactivated successfully.
Dec 05 10:20:09 np0005546420.localdomain podman[329224]: 2025-12-05 10:20:09.5172884 +0000 UTC m=+0.094005365 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:20:09 np0005546420.localdomain podman[329224]: 2025-12-05 10:20:09.531307623 +0000 UTC m=+0.108024618 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:20:09 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:20:09 np0005546420.localdomain podman[329225]: 2025-12-05 10:20:09.61862514 +0000 UTC m=+0.189689689 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 05 10:20:09 np0005546420.localdomain podman[329225]: 2025-12-05 10:20:09.623362777 +0000 UTC m=+0.194427346 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 10:20:09 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:20:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:10 np0005546420.localdomain ceph-mon[298353]: pgmap v748: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 495 B/s rd, 91 KiB/s wr, 7 op/s
Dec 05 10:20:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 05 10:20:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 05 10:20:10 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 05 10:20:11 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:20:11Z|00397|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 05 10:20:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:20:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:20:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:12.477 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:12 np0005546420.localdomain systemd[1]: tmp-crun.AeWRNb.mount: Deactivated successfully.
Dec 05 10:20:12 np0005546420.localdomain podman[329264]: 2025-12-05 10:20:12.570293343 +0000 UTC m=+0.143068070 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:20:12 np0005546420.localdomain podman[329264]: 2025-12-05 10:20:12.586541865 +0000 UTC m=+0.159316622 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:20:12 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:20:12 np0005546420.localdomain ceph-mon[298353]: pgmap v749: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 93 KiB/s wr, 6 op/s
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:20:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:20:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:13.695 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:13 np0005546420.localdomain ceph-mon[298353]: pgmap v750: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 93 KiB/s wr, 6 op/s
Dec 05 10:20:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:20:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "format": "json"}]: dispatch
Dec 05 10:20:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:20:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:15 np0005546420.localdomain ceph-mon[298353]: pgmap v751: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 93 KiB/s wr, 6 op/s
Dec 05 10:20:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:20:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:20:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:20:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:20:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:20:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18295 "" "Go-http-client/1.1"
Dec 05 10:20:17 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 05 10:20:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:17.520 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:18 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "auth_id": "bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 05 10:20:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea,allow rw path=/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/53340ae9-2988-4b83-be04-2373cdf8bc7b", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363,allow rw pool=manila_data namespace=fsvolumens_6fece2d2-49a0-4615-93c1-5c0350069189"]} : dispatch
Dec 05 10:20:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea,allow rw path=/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/53340ae9-2988-4b83-be04-2373cdf8bc7b", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363,allow rw pool=manila_data namespace=fsvolumens_6fece2d2-49a0-4615-93c1-5c0350069189"]}]': finished
Dec 05 10:20:18 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 05 10:20:18 np0005546420.localdomain ceph-mon[298353]: pgmap v752: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 181 B/s rd, 93 KiB/s wr, 7 op/s
Dec 05 10:20:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:18.698 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:20:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:20:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:20:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:20:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:20:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:20:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:20:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:20:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:20:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:20:19 np0005546420.localdomain sudo[329284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:20:19 np0005546420.localdomain sudo[329284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:20:19 np0005546420.localdomain sudo[329284]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:19 np0005546420.localdomain sudo[329302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:20:19 np0005546420.localdomain sudo[329302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:20:20 np0005546420.localdomain sudo[329302]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:20 np0005546420.localdomain sudo[329352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:20:20 np0005546420.localdomain sudo[329352]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:20:20 np0005546420.localdomain sudo[329352]: pam_unix(sudo:session): session closed for user root
Dec 05 10:20:20 np0005546420.localdomain ceph-mon[298353]: pgmap v753: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 54 KiB/s wr, 4 op/s
Dec 05 10:20:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:20:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:20:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:20:20 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:20:20 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/53340ae9-2988-4b83-be04-2373cdf8bc7b],prefix=session evict} (starting...)
Dec 05 10:20:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:20:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "auth_id": "bob", "format": "json"}]: dispatch
Dec 05 10:20:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 05 10:20:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363"]} : dispatch
Dec 05 10:20:21 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363"]}]': finished
Dec 05 10:20:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:22.561 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:22 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "auth_id": "bob", "format": "json"}]: dispatch
Dec 05 10:20:22 np0005546420.localdomain ceph-mon[298353]: pgmap v754: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 101 KiB/s wr, 7 op/s
Dec 05 10:20:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:22.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:22.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:20:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:23.727 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:24 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea],prefix=session evict} (starting...)
Dec 05 10:20:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:20:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:20:24 np0005546420.localdomain systemd[1]: tmp-crun.iETppM.mount: Deactivated successfully.
Dec 05 10:20:24 np0005546420.localdomain podman[329372]: 2025-12-05 10:20:24.50186176 +0000 UTC m=+0.069522078 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:20:24 np0005546420.localdomain podman[329372]: 2025-12-05 10:20:24.514256123 +0000 UTC m=+0.081916451 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:20:24 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:20:24 np0005546420.localdomain podman[329371]: 2025-12-05 10:20:24.559648486 +0000 UTC m=+0.130127291 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Dec 05 10:20:24 np0005546420.localdomain podman[329371]: 2025-12-05 10:20:24.578441345 +0000 UTC m=+0.148920200 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, vcs-type=git, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.)
Dec 05 10:20:24 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:20:24 np0005546420.localdomain ceph-mon[298353]: pgmap v755: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 58 KiB/s wr, 4 op/s
Dec 05 10:20:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 05 10:20:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 05 10:20:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Dec 05 10:20:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:25 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "bob", "format": "json"}]: dispatch
Dec 05 10:20:25 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "bob", "format": "json"}]: dispatch
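The two mon commands above revoke and then evict access for auth_id "bob" on a CephFS subvolume, matching the ceph-mds "session evict" at 10:20:24; the entity client.openstack and the surrounding subvolume create/clone/rm traffic suggest an OpenStack Manila-style share workflow. The same operations can be reproduced from the ceph CLI (a sketch, assuming an admin keyring is available):

    # CLI forms of the two JSON commands dispatched above: drop bob's
    # caps on the subvolume, then kick any live client sessions.
    ceph fs subvolume deauthorize cephfs 66fc337c-1267-4a35-81f5-115366d33363 bob
    ceph fs subvolume evict cephfs 66fc337c-1267-4a35-81f5-115366d33363 bob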
Dec 05 10:20:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:25.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:26 np0005546420.localdomain ceph-mon[298353]: pgmap v756: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 58 KiB/s wr, 4 op/s
Dec 05 10:20:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:26.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:26.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:20:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:26.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:20:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:26.893 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:20:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:27.595 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3489022920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:20:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:27.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:27.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:28.760 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:28 np0005546420.localdomain ceph-mon[298353]: pgmap v757: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 87 KiB/s wr, 6 op/s
Dec 05 10:20:28 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2965429103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:20:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "format": "json"}]: dispatch
Dec 05 10:20:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "force": true, "format": "json"}]: dispatch
Dec 05 10:20:29 np0005546420.localdomain sshd[329414]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:20:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:20:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:20:30 np0005546420.localdomain podman[329416]: 2025-12-05 10:20:30.52703061 +0000 UTC m=+0.098249495 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:20:30 np0005546420.localdomain podman[329417]: 2025-12-05 10:20:30.580088659 +0000 UTC m=+0.145601168 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 10:20:30 np0005546420.localdomain podman[329416]: 2025-12-05 10:20:30.591322566 +0000 UTC m=+0.162541441 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:20:30 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:20:30 np0005546420.localdomain podman[329417]: 2025-12-05 10:20:30.623781638 +0000 UTC m=+0.189294107 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:20:30 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:20:30 np0005546420.localdomain ceph-mon[298353]: pgmap v758: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 77 KiB/s wr, 4 op/s
Dec 05 10:20:31 np0005546420.localdomain sshd[329414]: Received disconnect from 24.232.50.5 port 53470:11: Bye Bye [preauth]
Dec 05 10:20:31 np0005546420.localdomain sshd[329414]: Disconnected from authenticating user root 24.232.50.5 port 53470 [preauth]
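The "ssh-rsa algorithm is disabled" notice reflects the RHEL 9 DEFAULT crypto policy, which rejects SHA-1 ssh-rsa signatures; the pre-auth disconnect of user root from 24.232.50.5 that follows looks like an opportunistic login attempt that never completed authentication. If a legacy client genuinely required ssh-rsa, it could be re-enabled with a drop-in such as the sketch below (the filename is hypothetical, and this deliberately weakens the host's crypto policy):

    # /etc/ssh/sshd_config.d/99-legacy-rsa.conf
    # Accept SHA-1 ssh-rsa signatures again (OpenSSH 8.5+ option names).
    PubkeyAcceptedAlgorithms +ssh-rsa
    HostKeyAlgorithms +ssh-rsa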
Dec 05 10:20:32 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "66fc337c-1267-4a35-81f5-115366d33363", "format": "json"}]: dispatch
Dec 05 10:20:32 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "force": true, "format": "json"}]: dispatch
Dec 05 10:20:32 np0005546420.localdomain ceph-mon[298353]: pgmap v759: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 109 KiB/s wr, 6 op/s
Dec 05 10:20:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:32.623 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:32.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:33.764 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:33.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:33.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:34 np0005546420.localdomain ceph-mon[298353]: pgmap v760: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 62 KiB/s wr, 3 op/s
Dec 05 10:20:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3937463032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:20:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:35.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:20:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:35.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:20:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:35.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:20:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:35.890 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:20:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:35.890 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:20:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:35.891 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:20:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:20:36 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1755188529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:20:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:36.367 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
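As the "Running cmd (subprocess)" line shows, the resource tracker's disk audit shells out to the ceph CLI; the 0.476 s round trip above is that subprocess. A sketch of the same probe (assumes /etc/ceph/ceph.conf and the client.openstack keyring are readable on the host):

    import json
    import subprocess

    # Same command nova logged above; parse the JSON it returns.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])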
Dec 05 10:20:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:36.628 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:20:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:36.630 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11439MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:20:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:36.631 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:20:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:36.631 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:20:36 np0005546420.localdomain ceph-mon[298353]: pgmap v761: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 62 KiB/s wr, 3 op/s
Dec 05 10:20:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1991407580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:20:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1755188529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.080 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.081 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.105 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.132 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.133 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.151 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.177 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.200 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.650 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:37 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:20:37 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3897957016' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.673 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.681 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.705 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.708 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:20:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:37.709 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
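The inventory nova pushes to Placement above is also what the scheduler consumes: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked check against the logged values (hypothetical helper, not nova code):

    def capacity(total, reserved, allocation_ratio):
        # Placement's schedulable-capacity formula.
        return (total - reserved) * allocation_ratio

    print(capacity(8, 0, 16.0))       # VCPU      -> 128.0
    print(capacity(15738, 512, 1.0))  # MEMORY_MB -> 15226.0
    print(capacity(41, 1, 1.0))       # DISK_GB   -> 40.0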
Dec 05 10:20:38 np0005546420.localdomain ceph-mon[298353]: pgmap v762: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 69 KiB/s wr, 4 op/s
Dec 05 10:20:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3897957016' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:20:38 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:20:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:38.799 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f12607d2-3489-40f0-b5a2-c89b74945c90", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:20:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f12607d2-3489-40f0-b5a2-c89b74945c90", "format": "json"}]: dispatch
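This create/getpath pair is the provisioning half of the subvolume lifecycle repeated throughout this window (create, clone status, rm). CLI equivalents of the two dispatched commands (a sketch, assuming admin access):

    ceph fs subvolume create cephfs f12607d2-3489-40f0-b5a2-c89b74945c90 \
        --size 1073741824 --namespace-isolated --mode 0755
    ceph fs subvolume getpath cephfs f12607d2-3489-40f0-b5a2-c89b74945c90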
Dec 05 10:20:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:20:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:20:39 np0005546420.localdomain podman[329506]: 2025-12-05 10:20:39.863101396 +0000 UTC m=+0.102042132 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 05 10:20:39 np0005546420.localdomain podman[329506]: 2025-12-05 10:20:39.897606073 +0000 UTC m=+0.136546799 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:20:39 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:20:39 np0005546420.localdomain podman[329505]: 2025-12-05 10:20:39.918007043 +0000 UTC m=+0.156477155 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:20:39 np0005546420.localdomain podman[329505]: 2025-12-05 10:20:39.930422196 +0000 UTC m=+0.168892308 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:20:39 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
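Each "Started /usr/bin/podman healthcheck run <id>" above is a transient systemd service fired by the timer podman creates for containers defined with a healthcheck, and the matching "<id>.service: Deactivated successfully" marks the check completing. The same check can be run by hand (container ID copied from the log):

    podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a
    echo $?    # 0 = healthy, 1 = unhealthy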
Dec 05 10:20:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:40 np0005546420.localdomain ceph-mon[298353]: pgmap v763: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 40 KiB/s wr, 2 op/s
Dec 05 10:20:42 np0005546420.localdomain ceph-mon[298353]: pgmap v764: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 53 KiB/s wr, 3 op/s
Dec 05 10:20:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:42.699 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:43 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f12607d2-3489-40f0-b5a2-c89b74945c90", "format": "json"}]: dispatch
Dec 05 10:20:43 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f12607d2-3489-40f0-b5a2-c89b74945c90", "force": true, "format": "json"}]: dispatch
Dec 05 10:20:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:20:43 np0005546420.localdomain systemd[1]: tmp-crun.dsuViQ.mount: Deactivated successfully.
Dec 05 10:20:43 np0005546420.localdomain podman[329546]: 2025-12-05 10:20:43.513315466 +0000 UTC m=+0.089363000 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd)
Dec 05 10:20:43 np0005546420.localdomain podman[329546]: 2025-12-05 10:20:43.555533061 +0000 UTC m=+0.131580615 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible)
Dec 05 10:20:43 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:20:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:43.832 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:44 np0005546420.localdomain ceph-mon[298353]: pgmap v765: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 21 KiB/s wr, 2 op/s
Dec 05 10:20:45 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:20:45.105 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:20:45 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:20:45.106 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:20:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:45.142 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:45 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:20:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6bd85ed5-06af-4874-ad2a-45b58df56273", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:20:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6bd85ed5-06af-4874-ad2a-45b58df56273", "format": "json"}]: dispatch
Dec 05 10:20:46 np0005546420.localdomain ceph-mon[298353]: pgmap v766: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 21 KiB/s wr, 2 op/s
Dec 05 10:20:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:20:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:20:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:20:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:20:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:20:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18299 "" "Go-http-client/1.1"
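The two GETs above are libpod REST calls arriving over the podman API socket; given the CONTAINER_HOST=unix:///run/podman/podman.sock setting in the podman_exporter config earlier, the client is most likely that exporter collecting container metrics. The same queries by hand (endpoint paths copied from the log):

    curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true'
    curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/stats?all=false&stream=false'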
Dec 05 10:20:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:47.737 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:48 np0005546420.localdomain ceph-mon[298353]: pgmap v767: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 42 KiB/s wr, 3 op/s
Dec 05 10:20:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:48.855 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:20:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:20:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:20:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:20:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:20:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:20:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:20:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:20:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:20:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
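These exporter errors all stem from missing control sockets: ovn-northd does not run on a compute node, and the ovsdb-server and datapath probes fail when no *.ctl files are visible at the paths mounted into the container (/var/run/openvswitch and /var/lib/openvswitch/ovn, per its volume list above). A quick host-side check (a sketch; the patterns assume the default <daemon>.<pid>.ctl socket naming):

    import glob

    # List the control sockets in the run dirs the exporter mounts.
    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")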
Dec 05 10:20:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:50 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6bd85ed5-06af-4874-ad2a-45b58df56273", "format": "json"}]: dispatch
Dec 05 10:20:50 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6bd85ed5-06af-4874-ad2a-45b58df56273", "force": true, "format": "json"}]: dispatch
Dec 05 10:20:50 np0005546420.localdomain ceph-mon[298353]: pgmap v768: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 34 KiB/s wr, 2 op/s
Dec 05 10:20:52 np0005546420.localdomain ceph-mon[298353]: pgmap v769: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 58 KiB/s wr, 3 op/s
Dec 05 10:20:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:52.780 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:53.888 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:54 np0005546420.localdomain ceph-mon[298353]: pgmap v770: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Dec 05 10:20:55 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:20:55.109 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:20:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:20:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:20:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:20:55 np0005546420.localdomain podman[329567]: 2025-12-05 10:20:55.529617591 +0000 UTC m=+0.101172306 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:20:55 np0005546420.localdomain podman[329566]: 2025-12-05 10:20:55.577059646 +0000 UTC m=+0.153413560 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:20:55 np0005546420.localdomain podman[329567]: 2025-12-05 10:20:55.594430913 +0000 UTC m=+0.165985688 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:20:55 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:20:55 np0005546420.localdomain podman[329566]: 2025-12-05 10:20:55.647188202 +0000 UTC m=+0.223542126 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Dec 05 10:20:55 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
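Each healthcheck cycle above is a transient systemd unit wrapping "podman healthcheck run <container-id>": the health_status event records the probe result, exec_died records the probe process exiting, and "Deactivated successfully" means the unit finished cleanly. The same probe can be run by hand; a sketch, assuming podman is on PATH (container ID taken from the records above):

    import subprocess

    cid = "db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9"
    # podman healthcheck run exits 0 when the container's healthcheck passes
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else f"unhealthy (exit {rc})")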
Dec 05 10:20:56 np0005546420.localdomain ceph-mon[298353]: pgmap v771: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Dec 05 10:20:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:57.808 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:20:58 np0005546420.localdomain ceph-mon[298353]: pgmap v772: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 51 KiB/s wr, 3 op/s
Dec 05 10:20:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:20:58.922 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:00 np0005546420.localdomain ceph-mon[298353]: pgmap v773: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 30 KiB/s wr, 1 op/s
Dec 05 10:21:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:21:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:21:01 np0005546420.localdomain podman[329609]: 2025-12-05 10:21:01.523913075 +0000 UTC m=+0.089965361 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 10:21:01 np0005546420.localdomain podman[329608]: 2025-12-05 10:21:01.598680444 +0000 UTC m=+0.167964949 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:21:01 np0005546420.localdomain podman[329608]: 2025-12-05 10:21:01.609630602 +0000 UTC m=+0.178915147 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 10:21:01 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:21:01 np0005546420.localdomain podman[329609]: 2025-12-05 10:21:01.625700929 +0000 UTC m=+0.191753195 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:21:01 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:21:02 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af7aba7d-7155-4dea-94b8-6a42535f8b87", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:21:02 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af7aba7d-7155-4dea-94b8-6a42535f8b87", "format": "json"}]: dispatch
Dec 05 10:21:02 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:21:02 np0005546420.localdomain ceph-mon[298353]: pgmap v774: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 36 KiB/s wr, 2 op/s
Dec 05 10:21:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:02.857 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:21:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2199858619' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:21:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:21:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2199858619' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:21:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2199858619' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:21:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2199858619' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:21:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:03.953 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:21:04.138 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:21:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:21:04.139 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:21:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:21:04.139 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:21:04 np0005546420.localdomain ceph-mon[298353]: pgmap v775: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 12 KiB/s wr, 1 op/s
Dec 05 10:21:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:21:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8bdb3abd-9cce-44c0-b84e-bfddf34d9553", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:21:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8bdb3abd-9cce-44c0-b84e-bfddf34d9553", "format": "json"}]: dispatch
Dec 05 10:21:06 np0005546420.localdomain ceph-mon[298353]: pgmap v776: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 12 KiB/s wr, 1 op/s
Dec 05 10:21:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:07.901 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:08 np0005546420.localdomain ceph-mon[298353]: pgmap v777: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 33 KiB/s wr, 2 op/s
Dec 05 10:21:08 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:21:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:08.989 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f5a657ad-5944-498a-8a0f-804036d0f99b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:21:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f5a657ad-5944-498a-8a0f-804036d0f99b", "format": "json"}]: dispatch
Dec 05 10:21:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:21:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:21:10 np0005546420.localdomain podman[329653]: 2025-12-05 10:21:10.500552058 +0000 UTC m=+0.081474127 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:21:10 np0005546420.localdomain podman[329653]: 2025-12-05 10:21:10.511584229 +0000 UTC m=+0.092506288 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:21:10 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:21:10 np0005546420.localdomain podman[329654]: 2025-12-05 10:21:10.556981681 +0000 UTC m=+0.134322390 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 10:21:10 np0005546420.localdomain podman[329654]: 2025-12-05 10:21:10.567557318 +0000 UTC m=+0.144898057 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 05 10:21:10 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:21:10 np0005546420.localdomain ceph-mon[298353]: pgmap v778: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 28 KiB/s wr, 1 op/s
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.367650) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071367720, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2684, "num_deletes": 263, "total_data_size": 3231448, "memory_usage": 3289824, "flush_reason": "Manual Compaction"}
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071382478, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 2096414, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36781, "largest_seqno": 39460, "table_properties": {"data_size": 2086346, "index_size": 6193, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 24560, "raw_average_key_size": 21, "raw_value_size": 2064792, "raw_average_value_size": 1846, "num_data_blocks": 269, "num_entries": 1118, "num_filter_entries": 1118, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929937, "oldest_key_time": 1764929937, "file_creation_time": 1764930071, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 14885 microseconds, and 6676 cpu microseconds.
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.382542) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 2096414 bytes OK
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.382565) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.388255) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.388280) EVENT_LOG_v1 {"time_micros": 1764930071388273, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.388304) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3218913, prev total WAL file size 3218913, number of live WAL files 2.
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.389316) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end)
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(2047KB)], [63(18MB)]
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071389378, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 21096728, "oldest_snapshot_seqno": -1}
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 14592 keys, 19714052 bytes, temperature: kUnknown
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071512313, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 19714052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19626891, "index_size": 49620, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36485, "raw_key_size": 387913, "raw_average_key_size": 26, "raw_value_size": 19375405, "raw_average_value_size": 1327, "num_data_blocks": 1874, "num_entries": 14592, "num_filter_entries": 14592, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764930071, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.512689) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 19714052 bytes
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.514621) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.5 rd, 160.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 18.1 +0.0 blob) out(18.8 +0.0 blob), read-write-amplify(19.5) write-amplify(9.4) OK, records in: 15133, records dropped: 541 output_compression: NoCompression
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.514655) EVENT_LOG_v1 {"time_micros": 1764930071514639, "job": 38, "event": "compaction_finished", "compaction_time_micros": 123041, "compaction_time_cpu_micros": 63497, "output_level": 6, "num_output_files": 1, "total_output_size": 19714052, "num_input_records": 15133, "num_output_records": 14592, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071515508, "job": 38, "event": "table_file_deletion", "file_number": 65}
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071520736, "job": 38, "event": "table_file_deletion", "file_number": 63}
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.389227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.520859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.520865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.520868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.520871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:11 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:11.520874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
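The rocksdb block above is one manual compaction of the mon store: job 37 flushes a ~2 MB memtable to L0 table #65, job 38 merges it with the ~18 MB L6 table #63 into table #66, and both inputs are then deleted. The EVENT_LOG_v1 payloads are plain JSON after a fixed prefix, so the figures can be pulled straight out of the journal; a sketch (the input filename is hypothetical):

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        # yield each EVENT_LOG_v1 JSON object embedded in a journal line
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    for ev in rocksdb_events(open("mon-journal.log")):
        if ev["event"] == "compaction_finished":
            # bytes per microsecond equals decimal MB/s
            mb_s = ev["total_output_size"] / ev["compaction_time_micros"]
            # for job 38 above: 19714052 / 123041 -> ~160.2 MB/s, matching the summary line
            print(f"job {ev['job']}: {mb_s:.1f} MB/s write")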
Dec 05 10:21:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:21:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "format": "json"}]: dispatch
Dec 05 10:21:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:21:12 np0005546420.localdomain ceph-mon[298353]: pgmap v779: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 54 KiB/s wr, 3 op/s
Dec 05 10:21:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:21:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:12.937 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:13 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "53bd909c-da2c-4390-a803-9471f6d51aba", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:21:13 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "53bd909c-da2c-4390-a803-9471f6d51aba", "format": "json"}]: dispatch
Dec 05 10:21:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:14.028 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:21:14 np0005546420.localdomain ceph-mon[298353]: pgmap v780: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 48 KiB/s wr, 2 op/s
Dec 05 10:21:14 np0005546420.localdomain podman[329696]: 2025-12-05 10:21:14.505558427 +0000 UTC m=+0.082917313 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 10:21:14 np0005546420.localdomain podman[329696]: 2025-12-05 10:21:14.541368232 +0000 UTC m=+0.118727108 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:21:14 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:21:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "snap_name": "7a976cff-8d23-40b2-8d65-cb0f4826218e", "format": "json"}]: dispatch
Dec 05 10:21:16 np0005546420.localdomain ceph-mon[298353]: pgmap v781: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 48 KiB/s wr, 2 op/s
Dec 05 10:21:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:21:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:21:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:21:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:21:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:21:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18289 "" "Go-http-client/1.1"
Dec 05 10:21:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "53bd909c-da2c-4390-a803-9471f6d51aba", "format": "json"}]: dispatch
Dec 05 10:21:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "53bd909c-da2c-4390-a803-9471f6d51aba", "force": true, "format": "json"}]: dispatch
Dec 05 10:21:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:17.975 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:18 np0005546420.localdomain ceph-mon[298353]: pgmap v782: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s wr, 4 op/s
Dec 05 10:21:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:21:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:21:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:21:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:21:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:21:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:21:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:21:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:21:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:21:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:21:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:19.068 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:20 np0005546420.localdomain sudo[329716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:21:20 np0005546420.localdomain sudo[329716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:21:20 np0005546420.localdomain sudo[329716]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:20 np0005546420.localdomain sudo[329734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:21:20 np0005546420.localdomain sudo[329734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:21:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "snap_name": "7a976cff-8d23-40b2-8d65-cb0f4826218e_16c0bfc8-b2b3-41ef-8ae1-a3a466e5482d", "force": true, "format": "json"}]: dispatch
Dec 05 10:21:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "snap_name": "7a976cff-8d23-40b2-8d65-cb0f4826218e", "force": true, "format": "json"}]: dispatch
Dec 05 10:21:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f5a657ad-5944-498a-8a0f-804036d0f99b", "format": "json"}]: dispatch
Dec 05 10:21:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f5a657ad-5944-498a-8a0f-804036d0f99b", "force": true, "format": "json"}]: dispatch
Dec 05 10:21:20 np0005546420.localdomain ceph-mon[298353]: pgmap v783: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s wr, 2 op/s
Dec 05 10:21:21 np0005546420.localdomain sudo[329734]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:21 np0005546420.localdomain sudo[329783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:21:21 np0005546420.localdomain sudo[329783]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:21:21 np0005546420.localdomain sudo[329783]: pam_unix(sudo:session): session closed for user root
Dec 05 10:21:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:21:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:21:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:21:22 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:21:22 np0005546420.localdomain ceph-mon[298353]: pgmap v784: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 93 KiB/s wr, 4 op/s
Dec 05 10:21:22 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e285 e285: 6 total, 6 up, 6 in
Dec 05 10:21:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:23.011 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:23 np0005546420.localdomain ceph-mon[298353]: osdmap e285: 6 total, 6 up, 6 in
Dec 05 10:21:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:24.114 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:24 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8bdb3abd-9cce-44c0-b84e-bfddf34d9553", "format": "json"}]: dispatch
Dec 05 10:21:24 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8bdb3abd-9cce-44c0-b84e-bfddf34d9553", "force": true, "format": "json"}]: dispatch
Dec 05 10:21:24 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "format": "json"}]: dispatch
Dec 05 10:21:24 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "force": true, "format": "json"}]: dispatch
Dec 05 10:21:24 np0005546420.localdomain ceph-mon[298353]: pgmap v786: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 80 KiB/s wr, 4 op/s
Dec 05 10:21:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
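[editor's note] _set_new_cache_sizes is the monitor's memory autotuner splitting its cache target between the incremental osdmap cache, the full osdmap cache, and the RocksDB KV cache. The logged figures are self-consistent, as this quick arithmetic check shows; the observation that each allocation is a whole multiple of 4 MiB is read off the numbers themselves, not from any log line:

    cache_size = 1020054731   # bytes, from the log line above
    inc_alloc  = 343932928
    full_alloc = 348127232
    kv_alloc   = 318767104

    total = inc_alloc + full_alloc + kv_alloc
    print(total, total <= cache_size)   # 1010827264 True: the three pools fit the target
    print(all(x % (4 << 20) == 0 for x in (inc_alloc, full_alloc, kv_alloc)))  # True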
Dec 05 10:21:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:25.710 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:25.711 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:21:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:25.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
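[editor's note] _reclaim_queued_deletes is a no-op here because reclaim_instance_interval is at its default of 0. Soft delete only engages when the interval is positive, at which point deleted instances sit in SOFT_DELETED until this periodic task reclaims them. A minimal nova.conf fragment; the 3600 is illustrative, not taken from this deployment:

    [DEFAULT]
    # Seconds between _reclaim_queued_deletes runs; > 0 enables soft delete.
    reclaim_instance_interval = 3600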
Dec 05 10:21:26 np0005546420.localdomain sshd[329801]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:21:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:21:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:21:26 np0005546420.localdomain podman[329804]: 2025-12-05 10:21:26.522703305 +0000 UTC m=+0.088965388 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:21:26 np0005546420.localdomain podman[329804]: 2025-12-05 10:21:26.536389388 +0000 UTC m=+0.102651481 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:21:26 np0005546420.localdomain systemd[1]: tmp-crun.6Aiv0V.mount: Deactivated successfully.
Dec 05 10:21:26 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:21:26 np0005546420.localdomain podman[329803]: 2025-12-05 10:21:26.593328446 +0000 UTC m=+0.163609034 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-type=git, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, name=ubi9-minimal, io.openshift.expose-services=)
Dec 05 10:21:26 np0005546420.localdomain ceph-mon[298353]: pgmap v787: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 80 KiB/s wr, 4 op/s
Dec 05 10:21:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:21:26 np0005546420.localdomain podman[329803]: 2025-12-05 10:21:26.678463526 +0000 UTC m=+0.248744094 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 10:21:26 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
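[editor's note] Each "Started /usr/bin/podman healthcheck run <id>" line above is a systemd timer firing a transient unit that executes the container's configured healthcheck; the exec_died event and "Deactivated successfully" mark its exit. The same check can be run by hand, sketched below; the container name is copied from the log, and the exit-code convention (0 healthy, nonzero unhealthy) is podman's:

    import subprocess

    # Executes the healthcheck command defined for the container, in place.
    rc = subprocess.run(["podman", "healthcheck", "run", "podman_exporter"]).returncode
    print("healthy" if rc == 0 else "unhealthy")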
Dec 05 10:21:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:26.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:26.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:21:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:26.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:21:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:27.414 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:21:27 np0005546420.localdomain sshd[329801]: Received disconnect from 178.217.173.50 port 33660:11: Bye Bye [preauth]
Dec 05 10:21:27 np0005546420.localdomain sshd[329801]: Disconnected from authenticating user root 178.217.173.50 port 33660 [preauth]
Dec 05 10:21:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af7aba7d-7155-4dea-94b8-6a42535f8b87", "format": "json"}]: dispatch
Dec 05 10:21:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af7aba7d-7155-4dea-94b8-6a42535f8b87", "force": true, "format": "json"}]: dispatch
Dec 05 10:21:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:28.050 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:28 np0005546420.localdomain ceph-mon[298353]: pgmap v788: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 80 KiB/s wr, 5 op/s
Dec 05 10:21:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:29.156 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/4107184619' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:21:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:29.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:29.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:30 np0005546420.localdomain ceph-mon[298353]: pgmap v789: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 80 KiB/s wr, 5 op/s
Dec 05 10:21:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e286 e286: 6 total, 6 up, 6 in
Dec 05 10:21:32 np0005546420.localdomain ceph-mon[298353]: osdmap e286: 6 total, 6 up, 6 in
Dec 05 10:21:32 np0005546420.localdomain ceph-mon[298353]: pgmap v791: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 892 B/s rd, 65 KiB/s wr, 5 op/s
Dec 05 10:21:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:21:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:21:32 np0005546420.localdomain systemd[1]: tmp-crun.ob6Tsa.mount: Deactivated successfully.
Dec 05 10:21:32 np0005546420.localdomain podman[329845]: 2025-12-05 10:21:32.527740971 +0000 UTC m=+0.099611448 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 10:21:32 np0005546420.localdomain podman[329845]: 2025-12-05 10:21:32.537380468 +0000 UTC m=+0.109250905 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 05 10:21:32 np0005546420.localdomain podman[329846]: 2025-12-05 10:21:32.577248381 +0000 UTC m=+0.148182589 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:21:32 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:21:32 np0005546420.localdomain podman[329846]: 2025-12-05 10:21:32.622439756 +0000 UTC m=+0.193373904 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:21:32 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:21:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:33.092 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:33.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:34.200 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:34 np0005546420.localdomain ceph-mon[298353]: pgmap v792: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 60 KiB/s wr, 4 op/s
Dec 05 10:21:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:34.868 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:36 np0005546420.localdomain ceph-mon[298353]: pgmap v793: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 60 KiB/s wr, 4 op/s
Dec 05 10:21:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:37.669 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:37.670 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:37.670 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:21:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:37.729 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:21:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:37.730 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:21:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:37.731 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:21:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:37.731 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:21:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:37.731 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:21:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:38.127 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:38 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:21:38 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3478444652' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:21:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:38.181 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
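[editor's note] The resource audit shells out to the ceph CLI exactly as logged ("ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf") and reads usage from the JSON it returns. A sketch of that call with a conservative parse; the key names ("stats", "total_bytes", "pools") match ceph df's JSON on this release, but treat them as an assumption elsewhere:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(out)

    stats = df.get("stats", {})                      # cluster-wide totals
    print(stats.get("total_bytes"), stats.get("total_avail_bytes"))
    for pool in df.get("pools", []):                 # per-pool usage
        print(pool.get("name"), pool.get("stats", {}).get("bytes_used"))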
Dec 05 10:21:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:38.372 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:21:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:38.373 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11433MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:21:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:38.374 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:21:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:38.374 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:21:38 np0005546420.localdomain ceph-mon[298353]: pgmap v794: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 21 KiB/s wr, 1 op/s
Dec 05 10:21:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/609056301' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:21:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3478444652' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:21:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3997336507' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:21:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:38.871 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:21:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:38.873 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:21:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:38.896 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:21:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:39.258 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:39 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:21:39 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1675687501' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:21:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:39.394 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:21:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:39.400 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:21:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:39.531 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:21:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:39.534 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:21:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:39.534 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
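[editor's note] The inventory above is what Placement admits workloads against: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked check with the logged figures, nothing new added:

    inventory = {  # copied from the set_inventory_for_provider line above
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0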
Dec 05 10:21:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1675687501' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:21:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1391270209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:21:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:40 np0005546420.localdomain ceph-mon[298353]: pgmap v795: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 21 KiB/s wr, 1 op/s
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.350015) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101350085, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 675, "num_deletes": 258, "total_data_size": 627302, "memory_usage": 641048, "flush_reason": "Manual Compaction"}
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101356986, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 410462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39466, "largest_seqno": 40135, "table_properties": {"data_size": 407205, "index_size": 1112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8076, "raw_average_key_size": 19, "raw_value_size": 400310, "raw_average_value_size": 959, "num_data_blocks": 49, "num_entries": 417, "num_filter_entries": 417, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930071, "oldest_key_time": 1764930071, "file_creation_time": 1764930101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7009 microseconds, and 2443 cpu microseconds.
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.357033) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 410462 bytes OK
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.357055) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.359235) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.359257) EVENT_LOG_v1 {"time_micros": 1764930101359251, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.359280) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 623507, prev total WAL file size 623831, number of live WAL files 2.
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.360122) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353137' seq:72057594037927935, type:22 .. '6C6F676D0034373730' seq:0, type:0; will stop at (end)
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(400KB)], [66(18MB)]
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101360194, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 20124514, "oldest_snapshot_seqno": -1}
Dec 05 10:21:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:21:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 14472 keys, 19990668 bytes, temperature: kUnknown
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101493947, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 19990668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19903229, "index_size": 50154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36229, "raw_key_size": 386525, "raw_average_key_size": 26, "raw_value_size": 19652704, "raw_average_value_size": 1357, "num_data_blocks": 1894, "num_entries": 14472, "num_filter_entries": 14472, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764930101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.494431) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 19990668 bytes
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.496277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.3 rd, 149.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 18.8 +0.0 blob) out(19.1 +0.0 blob), read-write-amplify(97.7) write-amplify(48.7) OK, records in: 15009, records dropped: 537 output_compression: NoCompression
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.496307) EVENT_LOG_v1 {"time_micros": 1764930101496294, "job": 40, "event": "compaction_finished", "compaction_time_micros": 133930, "compaction_time_cpu_micros": 60472, "output_level": 6, "num_output_files": 1, "total_output_size": 19990668, "num_input_records": 15009, "num_output_records": 14472, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101497209, "job": 40, "event": "table_file_deletion", "file_number": 68}
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101500747, "job": 40, "event": "table_file_deletion", "file_number": 66}
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.359935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.500796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.500803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.500807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.500811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:41 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:41.500815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
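[editor's note] JOB 40's amplification figures follow directly from the byte counts logged above: the trigger was a 410462-byte L0 flush (file 68), the compaction read 20124514 bytes in total (that L0 file plus the 18 MB L6 file 66), and it wrote a single 19990668-byte L6 table (file 69). Reproducing RocksDB's reported ratios, arithmetic only:

    l0_in    = 410462     # bytes, file 68 (the flushed memtable)
    total_in = 20124514   # bytes, input_data_size from compaction_started
    out      = 19990668   # bytes, file 69 from table_file_creation

    print(round(out / l0_in, 1))                # 48.7 -> write-amplify
    print(round((total_in + out) / l0_in, 1))   # 97.7 -> read-write-amplify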
Dec 05 10:21:41 np0005546420.localdomain podman[329934]: 2025-12-05 10:21:41.517571023 +0000 UTC m=+0.093480238 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:21:41 np0005546420.localdomain podman[329934]: 2025-12-05 10:21:41.557002131 +0000 UTC m=+0.132911376 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:21:41 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:21:41 np0005546420.localdomain podman[329935]: 2025-12-05 10:21:41.568261079 +0000 UTC m=+0.139247533 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:21:41 np0005546420.localdomain podman[329935]: 2025-12-05 10:21:41.65245619 +0000 UTC m=+0.223442644 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:21:41 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:21:42 np0005546420.localdomain ceph-mon[298353]: pgmap v796: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 199 B/s rd, 23 KiB/s wr, 1 op/s
Dec 05 10:21:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:43.130 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:44.278 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:44 np0005546420.localdomain ceph-mon[298353]: pgmap v797: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.6 KiB/s wr, 0 op/s
Dec 05 10:21:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:21:45 np0005546420.localdomain podman[329974]: 2025-12-05 10:21:45.510524819 +0000 UTC m=+0.086377370 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd)
Dec 05 10:21:45 np0005546420.localdomain podman[329974]: 2025-12-05 10:21:45.527480032 +0000 UTC m=+0.103332583 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:21:45 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:21:46 np0005546420.localdomain ceph-mon[298353]: pgmap v798: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.5 KiB/s wr, 0 op/s
Dec 05 10:21:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:21:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:21:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:21:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:21:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:21:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18299 "" "Go-http-client/1.1"
Dec 05 10:21:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:48.159 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:48 np0005546420.localdomain ceph-mon[298353]: pgmap v799: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.5 KiB/s wr, 0 op/s
Dec 05 10:21:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:21:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:21:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:21:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:21:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:21:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:21:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:21:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:21:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:21:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:21:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:49.317 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:50 np0005546420.localdomain ceph-mon[298353]: pgmap v800: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s wr, 0 op/s
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.413485) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111413541, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 357, "num_deletes": 250, "total_data_size": 162095, "memory_usage": 168752, "flush_reason": "Manual Compaction"}
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111417382, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 105176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40140, "largest_seqno": 40492, "table_properties": {"data_size": 103059, "index_size": 292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5909, "raw_average_key_size": 20, "raw_value_size": 98827, "raw_average_value_size": 339, "num_data_blocks": 13, "num_entries": 291, "num_filter_entries": 291, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930101, "oldest_key_time": 1764930101, "file_creation_time": 1764930111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 3941 microseconds, and 1213 cpu microseconds.
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.417428) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 105176 bytes OK
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.417449) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.419242) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.419260) EVENT_LOG_v1 {"time_micros": 1764930111419254, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.419280) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 159670, prev total WAL file size 159670, number of live WAL files 2.
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.419777) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323537' seq:72057594037927935, type:22 .. '6D6772737461740034353038' seq:0, type:0; will stop at (end)
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(102KB)], [69(19MB)]
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111419853, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 20095844, "oldest_snapshot_seqno": -1}
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 14250 keys, 17989570 bytes, temperature: kUnknown
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111519824, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 17989570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17908399, "index_size": 44462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35653, "raw_key_size": 382102, "raw_average_key_size": 26, "raw_value_size": 17666520, "raw_average_value_size": 1239, "num_data_blocks": 1656, "num_entries": 14250, "num_filter_entries": 14250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764930111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.520275) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 17989570 bytes
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.522110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.7 rd, 179.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 19.1 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(362.1) write-amplify(171.0) OK, records in: 14763, records dropped: 513 output_compression: NoCompression
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.522141) EVENT_LOG_v1 {"time_micros": 1764930111522127, "job": 42, "event": "compaction_finished", "compaction_time_micros": 100133, "compaction_time_cpu_micros": 54259, "output_level": 6, "num_output_files": 1, "total_output_size": 17989570, "num_input_records": 14763, "num_output_records": 14250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111522339, "job": 42, "event": "table_file_deletion", "file_number": 71}
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111525787, "job": 42, "event": "table_file_deletion", "file_number": 69}
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.419675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.525860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.525869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.525872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.525875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:21:51.525878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:21:52 np0005546420.localdomain ceph-mon[298353]: pgmap v801: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s wr, 0 op/s
Dec 05 10:21:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:21:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:53.210 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:21:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "format": "json"}]: dispatch
Dec 05 10:21:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:54.362 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:54 np0005546420.localdomain ceph-mon[298353]: pgmap v802: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:21:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:21:56 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "snap_name": "ae208c69-a56a-463a-9bcc-3888ef448123", "format": "json"}]: dispatch
Dec 05 10:21:56 np0005546420.localdomain ceph-mon[298353]: pgmap v803: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:21:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:21:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:21:57 np0005546420.localdomain podman[329993]: 2025-12-05 10:21:57.519605277 +0000 UTC m=+0.093270400 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec 05 10:21:57 np0005546420.localdomain podman[329993]: 2025-12-05 10:21:57.559442617 +0000 UTC m=+0.133107700 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, release=1755695350, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6)
Dec 05 10:21:57 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:21:57 np0005546420.localdomain podman[329994]: 2025-12-05 10:21:57.579660263 +0000 UTC m=+0.147480806 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:21:57 np0005546420.localdomain podman[329994]: 2025-12-05 10:21:57.593415057 +0000 UTC m=+0.161235600 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:21:57 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:21:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:58.247 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:58 np0005546420.localdomain ceph-mon[298353]: pgmap v804: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s
Dec 05 10:21:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:21:59.402 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:21:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "snap_name": "ae208c69-a56a-463a-9bcc-3888ef448123_5efb51d0-b619-4eae-8c1e-aa8a23f8d68c", "force": true, "format": "json"}]: dispatch
Dec 05 10:21:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "snap_name": "ae208c69-a56a-463a-9bcc-3888ef448123", "force": true, "format": "json"}]: dispatch
Dec 05 10:22:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:00 np0005546420.localdomain ceph-mon[298353]: pgmap v805: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s
Dec 05 10:22:02 np0005546420.localdomain ceph-mon[298353]: pgmap v806: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 27 KiB/s wr, 2 op/s
Dec 05 10:22:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:03.276 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:22:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:22:03 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "format": "json"}]: dispatch
Dec 05 10:22:03 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "force": true, "format": "json"}]: dispatch
Dec 05 10:22:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1945601657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:22:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/1945601657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:22:03 np0005546420.localdomain podman[330038]: 2025-12-05 10:22:03.520385832 +0000 UTC m=+0.096098649 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 10:22:03 np0005546420.localdomain podman[330039]: 2025-12-05 10:22:03.572434391 +0000 UTC m=+0.142008548 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:22:03 np0005546420.localdomain podman[330038]: 2025-12-05 10:22:03.585330018 +0000 UTC m=+0.161042835 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 05 10:22:03 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:22:03 np0005546420.localdomain podman[330039]: 2025-12-05 10:22:03.620047271 +0000 UTC m=+0.189621398 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 10:22:03 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:22:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:22:04.139 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:22:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:22:04.140 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:22:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:22:04.140 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:22:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:04.436 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:04 np0005546420.localdomain ceph-mon[298353]: pgmap v807: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 27 KiB/s wr, 2 op/s
Dec 05 10:22:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:05 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:22:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:22:06 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.1 total, 600.0 interval
                                                          Cumulative writes: 22K writes, 86K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                                          Cumulative WAL: 22K writes, 7660 syncs, 2.92 writes per sync, written: 0.06 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 27.33 MB, 0.05 MB/s
                                                          Interval WAL: 10K writes, 4262 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:22:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "634bc3f3-1d76-4f32-ac6a-ef8f376d641b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:22:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "634bc3f3-1d76-4f32-ac6a-ef8f376d641b", "format": "json"}]: dispatch
Dec 05 10:22:06 np0005546420.localdomain ceph-mon[298353]: pgmap v808: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 27 KiB/s wr, 2 op/s
Dec 05 10:22:07 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e287 e287: 6 total, 6 up, 6 in
Dec 05 10:22:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:08.306 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:08 np0005546420.localdomain ceph-mon[298353]: osdmap e287: 6 total, 6 up, 6 in
Dec 05 10:22:08 np0005546420.localdomain ceph-mon[298353]: pgmap v810: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 61 KiB/s wr, 4 op/s
Dec 05 10:22:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "634bc3f3-1d76-4f32-ac6a-ef8f376d641b", "format": "json"}]: dispatch
Dec 05 10:22:09 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "634bc3f3-1d76-4f32-ac6a-ef8f376d641b", "force": true, "format": "json"}]: dispatch
Dec 05 10:22:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:09.471 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:10 np0005546420.localdomain ceph-mon[298353]: pgmap v811: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 61 KiB/s wr, 4 op/s
Dec 05 10:22:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:22:10 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.2 total, 600.0 interval
                                                          Cumulative writes: 21K writes, 84K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.01 MB/s
                                                          Cumulative WAL: 21K writes, 7932 syncs, 2.77 writes per sync, written: 0.08 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 41.02 MB, 0.07 MB/s
                                                          Interval WAL: 11K writes, 4859 syncs, 2.36 writes per sync, written: 0.04 GB, 0.07 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:22:11 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 05 10:22:12 np0005546420.localdomain sshd[330083]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:22:12 np0005546420.localdomain ceph-mon[298353]: pgmap v812: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 67 KiB/s wr, 4 op/s
Dec 05 10:22:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:22:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:22:12 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:22:12 np0005546420.localdomain podman[330085]: 2025-12-05 10:22:12.507243462 +0000 UTC m=+0.075692609 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:22:12 np0005546420.localdomain podman[330086]: 2025-12-05 10:22:12.580126493 +0000 UTC m=+0.140717207 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:22:12 np0005546420.localdomain podman[330085]: 2025-12-05 10:22:12.598696697 +0000 UTC m=+0.167145794 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:22:12 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:22:12 np0005546420.localdomain podman[330086]: 2025-12-05 10:22:12.615461925 +0000 UTC m=+0.176052629 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 10:22:12 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.959 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:22:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:22:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
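[annotation] The burst of "Skip pollster" lines above is a single poll_and_notify cycle: with no instances running on this hypervisor, every per-instance meter discovers an empty resource list and is skipped at DEBUG level. A minimal sketch of that guard, using simplified stand-in names (Pollster, publish) rather than ceilometer's actual classes:

    from dataclasses import dataclass

    @dataclass
    class Pollster:
        name: str

    def publish(sample):
        pass  # stand-in for the real notifier

    def poll_and_notify(pollsters, discovered):
        for p in pollsters:
            resources = discovered.get(p.name, [])
            if not resources:
                # the branch that produced the DEBUG lines above
                print(f"Skip pollster {p.name}, no resources found this cycle")
                continue
            # a real pollster would yield samples from the resources here

    poll_and_notify([Pollster("memory.usage"), Pollster("cpu")], {})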
Dec 05 10:22:13 np0005546420.localdomain sshd[330083]: Received disconnect from 24.232.50.5 port 36098:11: Bye Bye [preauth]
Dec 05 10:22:13 np0005546420.localdomain sshd[330083]: Disconnected from authenticating user root 24.232.50.5 port 36098 [preauth]
Dec 05 10:22:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:13.344 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:13 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:22:13 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "format": "json"}]: dispatch
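[annotation] The two mon commands dispatched by client.openstack correspond to a CephFS-backed share being provisioned: create a 1 GiB namespace-isolated subvolume, then resolve its export path. The same calls can be issued by hand with the ceph CLI; a sketch assuming an available keyring and the volume/subvolume names from the log:

    import subprocess

    VOL = "cephfs"
    SUB = "8f516b01-442f-47dd-a7aa-85e4b5387877"  # subvolume name from the log

    # Equivalent of the "fs subvolume create" dispatch (1 GiB = 1073741824 bytes).
    subprocess.run(
        ["ceph", "fs", "subvolume", "create", VOL, SUB,
         "--size", "1073741824", "--namespace-isolated", "--mode", "0755"],
        check=True,
    )

    # Equivalent of the "fs subvolume getpath" dispatch; prints the CephFS path.
    path = subprocess.run(
        ["ceph", "fs", "subvolume", "getpath", VOL, SUB],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(path)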
Dec 05 10:22:14 np0005546420.localdomain ceph-mon[298353]: pgmap v813: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Dec 05 10:22:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:14.498 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e288 e288: 6 total, 6 up, 6 in
Dec 05 10:22:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:22:16 np0005546420.localdomain podman[330125]: 2025-12-05 10:22:16.52261624 +0000 UTC m=+0.095263284 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:22:16 np0005546420.localdomain podman[330125]: 2025-12-05 10:22:16.541925386 +0000 UTC m=+0.114572460 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:22:16 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
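[annotation] The multipathd sequence above is one complete podman healthcheck cycle as driven by systemd: a transient unit starts `podman healthcheck run <container-id>`, the probe execs the configured test (`/openstack/healthcheck`) inside the container and reports health_status=healthy, the exec session dies, and the unit deactivates. A sketch of running the same probe by hand, assuming the container ID from the log:

    import subprocess

    CID = "128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931"

    # Same command systemd's transient unit runs; exit code 0 means healthy.
    result = subprocess.run(["podman", "healthcheck", "run", CID])
    print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")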
Dec 05 10:22:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "snap_name": "350bb740-9d29-4da1-bcf8-f780530c6ed0", "format": "json"}]: dispatch
Dec 05 10:22:16 np0005546420.localdomain ceph-mon[298353]: pgmap v814: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Dec 05 10:22:16 np0005546420.localdomain ceph-mon[298353]: osdmap e288: 6 total, 6 up, 6 in
Dec 05 10:22:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:22:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:22:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:22:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:22:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:22:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18298 "" "Go-http-client/1.1"
Dec 05 10:22:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:18.381 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:18 np0005546420.localdomain ceph-mon[298353]: pgmap v816: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 56 KiB/s wr, 3 op/s
Dec 05 10:22:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:22:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:22:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:22:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:22:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:22:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:22:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:22:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:22:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:22:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:22:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:19.540 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "snap_name": "350bb740-9d29-4da1-bcf8-f780530c6ed0", "target_sub_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "format": "json"}]: dispatch
Dec 05 10:22:19 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "format": "json"}]: dispatch
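[annotation] Together with the snapshot created at 10:22:16, these dispatches form the create-from-snapshot flow: snapshot the source subvolume, clone the snapshot into a new subvolume, then poll clone status until it completes (clones are asynchronous, which is why the driver keeps re-issuing "fs clone status"). A sketch with the names from the log:

    import json
    import subprocess
    import time

    VOL = "cephfs"
    SRC = "8f516b01-442f-47dd-a7aa-85e4b5387877"
    SNAP = "350bb740-9d29-4da1-bcf8-f780530c6ed0"
    CLONE = "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e"

    subprocess.run(["ceph", "fs", "subvolume", "snapshot", "create",
                    VOL, SRC, SNAP], check=True)
    subprocess.run(["ceph", "fs", "subvolume", "snapshot", "clone",
                    VOL, SRC, SNAP, CLONE], check=True)

    # Poll until the background clone finishes, as the driver does above.
    while True:
        out = subprocess.run(
            ["ceph", "fs", "clone", "status", VOL, CLONE, "--format", "json"],
            check=True, capture_output=True, text=True).stdout
        state = json.loads(out)["status"]["state"]
        if state in ("complete", "failed"):
            break
        time.sleep(2)
    print(state)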
Dec 05 10:22:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:20 np0005546420.localdomain ceph-mon[298353]: pgmap v817: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 56 KiB/s wr, 3 op/s
Dec 05 10:22:20 np0005546420.localdomain ceph-mon[298353]: mgrmap e53: np0005546419.zhsnqq(active, since 20m), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 05 10:22:21 np0005546420.localdomain sudo[330145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:22:21 np0005546420.localdomain sudo[330145]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:22:21 np0005546420.localdomain sudo[330145]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:21 np0005546420.localdomain sudo[330163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 05 10:22:21 np0005546420.localdomain sudo[330163]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:22:22 np0005546420.localdomain ceph-mon[298353]: pgmap v818: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 56 KiB/s wr, 4 op/s
Dec 05 10:22:22 np0005546420.localdomain systemd[1]: tmp-crun.B5fV1O.mount: Deactivated successfully.
Dec 05 10:22:22 np0005546420.localdomain podman[330249]: 2025-12-05 10:22:22.806407746 +0000 UTC m=+0.100574847 container exec 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec 05 10:22:22 np0005546420.localdomain podman[330249]: 2025-12-05 10:22:22.96193923 +0000 UTC m=+0.256106331 container exec_died 909d634fc41f72cae9ceedfb293b180a9391380d32d6561424f3690c6db50d18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546420, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, version=7, ceph=True, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 10:22:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:23.424 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:23 np0005546420.localdomain sudo[330163]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:23 np0005546420.localdomain sudo[330372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:22:23 np0005546420.localdomain sudo[330372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:22:23 np0005546420.localdomain sudo[330372]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:23 np0005546420.localdomain sudo[330390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:22:23 np0005546420.localdomain sudo[330390]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:22:24 np0005546420.localdomain sudo[330390]: pam_unix(sudo:session): session closed for user root
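[annotation] The sudo pairs above are cephadm's periodic host check-in from the active mgr: it logs in as ceph-admin, locates python3, then runs the deployed cephadm binary as root with `ls` (inventory the daemons on this host) and `gather-facts` (collect host facts). Both subcommands emit JSON; a sketch of invoking `ls` the same way, assuming passwordless sudo as the log shows:

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/"
               "cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3")

    # "ls" prints a JSON array describing each ceph daemon deployed on this host.
    daemons = json.loads(subprocess.run(
        ["sudo", "python3", CEPHADM, "ls"],
        check=True, capture_output=True, text=True).stdout)
    print([d["name"] for d in daemons])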
Dec 05 10:22:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:22:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:22:24 np0005546420.localdomain ceph-mon[298353]: pgmap v819: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 56 KiB/s wr, 4 op/s
Dec 05 10:22:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:22:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:22:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:22:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:22:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 05 10:22:24 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 05 10:22:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:24.566 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:24 np0005546420.localdomain sudo[330440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:22:24 np0005546420.localdomain sudo[330440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:22:24 np0005546420.localdomain sudo[330440]: pam_unix(sudo:session): session closed for user root
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
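[annotation] The failure here is pure arithmetic: the autotuner computed 877246668 bytes (877246668 / 2^20 ≈ 836.6 MiB, the "836.6M" in the adjustment lines), but osd_memory_target has a hard floor of 939524096 bytes (exactly 896 MiB), so the set is rejected and the mgr instead removes the per-OSD overrides (the "config rm" dispatches above). A quick check of the numbers:

    requested = 877246668        # bytes the autotuner wanted per OSD
    minimum   = 939524096        # osd_memory_target floor (896 MiB)

    print(round(requested / 2**20, 1))   # 836.6 MiB, matching the log
    print(minimum / 2**20)               # 896.0 MiB
    print(requested < minimum)           # True -> the "below minimum" error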
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:22:25 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:22:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:25.736 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:25.737 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
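[annotation] _reclaim_queued_deletes is the periodic task that purges SOFT_DELETED instances; it no-ops whenever reclaim_instance_interval is left at its default of 0, which is exactly the guard logged above. A minimal sketch of that guard (names shortened from nova's actual code):

    # Sketch of the guard behind "CONF.reclaim_instance_interval <= 0, skipping..."
    class Conf:
        reclaim_instance_interval = 0   # nova default: soft-delete reclaim disabled

    CONF = Conf()

    def _reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # with a positive interval, instances SOFT_DELETED for longer than the
        # interval would be reclaimed (deleted) here

    _reclaim_queued_deletes()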
Dec 05 10:22:26 np0005546420.localdomain ceph-mon[298353]: pgmap v820: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 56 KiB/s wr, 4 op/s
Dec 05 10:22:26 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:22:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:26.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:22:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:22:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:28.470 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:28 np0005546420.localdomain podman[330458]: 2025-12-05 10:22:28.563808893 +0000 UTC m=+0.091786336 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public)
Dec 05 10:22:28 np0005546420.localdomain systemd[1]: tmp-crun.SPcGm9.mount: Deactivated successfully.
Dec 05 10:22:28 np0005546420.localdomain podman[330459]: 2025-12-05 10:22:28.614259991 +0000 UTC m=+0.143967908 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:22:28 np0005546420.localdomain podman[330459]: 2025-12-05 10:22:28.626406556 +0000 UTC m=+0.156114473 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:22:28 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:22:28 np0005546420.localdomain podman[330458]: 2025-12-05 10:22:28.69873481 +0000 UTC m=+0.226712233 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 10:22:28 np0005546420.localdomain ceph-mon[298353]: pgmap v821: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 547 B/s rd, 50 KiB/s wr, 4 op/s
Dec 05 10:22:28 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:22:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:28.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:28.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:22:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:28.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:22:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:28.987 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:22:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:29.609 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1029736738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:22:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:30 np0005546420.localdomain ceph-mon[298353]: pgmap v822: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 47 KiB/s wr, 4 op/s
Dec 05 10:22:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/567841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:22:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:30.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:31.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:32 np0005546420.localdomain ceph-mon[298353]: pgmap v823: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 49 KiB/s wr, 5 op/s
Dec 05 10:22:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:33.508 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:33.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:22:34 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:22:34 np0005546420.localdomain podman[330504]: 2025-12-05 10:22:34.487790584 +0000 UTC m=+0.065447792 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:22:34 np0005546420.localdomain podman[330504]: 2025-12-05 10:22:34.503867761 +0000 UTC m=+0.081525009 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 05 10:22:34 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:22:34 np0005546420.localdomain podman[330505]: 2025-12-05 10:22:34.558067096 +0000 UTC m=+0.129697458 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 05 10:22:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:34.632 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:34 np0005546420.localdomain podman[330505]: 2025-12-05 10:22:34.647512298 +0000 UTC m=+0.219142680 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 10:22:34 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:22:34 np0005546420.localdomain ceph-mon[298353]: pgmap v824: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 33 KiB/s wr, 3 op/s
Dec 05 10:22:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:34.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:35.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:35.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:22:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:35.897 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:22:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:35.898 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:22:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:35.898 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:22:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:35.899 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:22:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:35.900 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:22:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:22:36 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1342300014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:22:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:36.374 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
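[annotation] The resource tracker sizes its RBD-backed storage by shelling out to `ceph df`, which is why each audit pass also appears on the mon as a client.openstack "df" dispatch. A sketch of the same call and the cluster-level fields it returns, assuming a reachable cluster and the same keyring; which exact fields nova consumes is not shown in the log:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)["stats"]
    # Cluster totals; the per-pool entries live under the "pools" key.
    print(stats["total_bytes"], stats["total_avail_bytes"])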
Dec 05 10:22:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:36.586 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:22:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:36.588 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11413MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:22:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:36.588 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:22:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:36.588 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:22:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:36.638 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:22:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:36.639 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:22:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:36.655 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:22:36 np0005546420.localdomain ceph-mon[298353]: pgmap v825: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 33 KiB/s wr, 2 op/s
Dec 05 10:22:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/765261355' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:22:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1342300014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:22:37 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:22:37 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4208800554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:22:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:37.120 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:22:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:37.128 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:22:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:37.346 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
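[annotation] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio: 8 VCPUs at ratio 16.0 gives 128 schedulable vCPUs, 15738 MB RAM with 512 reserved gives 15226 MB, and 41 GB disk with 1 reserved gives 40 GB. A quick recomputation from the logged data:

    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0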
Dec 05 10:22:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:37.350 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:22:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:37.351 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:22:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2919384564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:22:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/4208800554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:22:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:38.547 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:38 np0005546420.localdomain ceph-mon[298353]: pgmap v826: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 46 KiB/s wr, 3 op/s
Dec 05 10:22:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:39.668 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:40 np0005546420.localdomain ceph-mon[298353]: pgmap v827: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 16 KiB/s wr, 1 op/s
Dec 05 10:22:42 np0005546420.localdomain ceph-mon[298353]: pgmap v828: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 16 KiB/s wr, 1 op/s
Dec 05 10:22:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:22:43 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:22:43 np0005546420.localdomain podman[330593]: 2025-12-05 10:22:43.525855356 +0000 UTC m=+0.095201332 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:22:43 np0005546420.localdomain podman[330593]: 2025-12-05 10:22:43.556372569 +0000 UTC m=+0.125718585 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 10:22:43 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:22:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:43.604 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:43 np0005546420.localdomain podman[330592]: 2025-12-05 10:22:43.61083009 +0000 UTC m=+0.181819877 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:22:43 np0005546420.localdomain podman[330592]: 2025-12-05 10:22:43.625330318 +0000 UTC m=+0.196320105 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:22:43 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:22:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:44.713 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:44 np0005546420.localdomain ceph-mon[298353]: pgmap v829: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 13 KiB/s wr, 1 op/s
Dec 05 10:22:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:46 np0005546420.localdomain ceph-mon[298353]: pgmap v830: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 13 KiB/s wr, 1 op/s
Dec 05 10:22:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:22:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:22:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:22:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:22:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:22:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18302 "" "Go-http-client/1.1"
Dec 05 10:22:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:22:47 np0005546420.localdomain systemd[1]: tmp-crun.eWpBrt.mount: Deactivated successfully.
Dec 05 10:22:47 np0005546420.localdomain podman[330636]: 2025-12-05 10:22:47.518563823 +0000 UTC m=+0.093747856 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 10:22:47 np0005546420.localdomain podman[330636]: 2025-12-05 10:22:47.534323171 +0000 UTC m=+0.109507184 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:22:47 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:22:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:48.635 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:48 np0005546420.localdomain ceph-mon[298353]: pgmap v831: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s
Dec 05 10:22:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:22:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:22:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:22:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:22:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:22:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:22:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:22:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:22:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:22:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:22:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:49.751 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:50 np0005546420.localdomain ceph-mon[298353]: pgmap v832: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.8 KiB/s wr, 0 op/s
Dec 05 10:22:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "format": "json"}]: dispatch
Dec 05 10:22:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:22:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "format": "json"}]: dispatch
Dec 05 10:22:52 np0005546420.localdomain ceph-mon[298353]: pgmap v833: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.8 KiB/s wr, 0 op/s
Dec 05 10:22:53 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:53.675 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "format": "json"}]: dispatch
Dec 05 10:22:53 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "force": true, "format": "json"}]: dispatch
Dec 05 10:22:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:54.810 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:54 np0005546420.localdomain ceph-mon[298353]: pgmap v834: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.7 KiB/s wr, 0 op/s
Dec 05 10:22:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:22:55 np0005546420.localdomain ceph-mon[298353]: pgmap v835: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.7 KiB/s wr, 0 op/s
Dec 05 10:22:58 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "snap_name": "350bb740-9d29-4da1-bcf8-f780530c6ed0_1766b14d-8ab9-4c3d-b694-3340927e09ec", "force": true, "format": "json"}]: dispatch
Dec 05 10:22:58 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "snap_name": "350bb740-9d29-4da1-bcf8-f780530c6ed0", "force": true, "format": "json"}]: dispatch
Dec 05 10:22:58 np0005546420.localdomain ceph-mon[298353]: pgmap v836: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 19 KiB/s wr, 1 op/s
Dec 05 10:22:58 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:58.704 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:22:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:22:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:22:59 np0005546420.localdomain podman[330656]: 2025-12-05 10:22:59.516597573 +0000 UTC m=+0.083002945 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, io.buildah.version=1.33.7, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible)
Dec 05 10:22:59 np0005546420.localdomain podman[330657]: 2025-12-05 10:22:59.569085885 +0000 UTC m=+0.132171385 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:22:59 np0005546420.localdomain podman[330657]: 2025-12-05 10:22:59.580750715 +0000 UTC m=+0.143836165 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:22:59 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:22:59 np0005546420.localdomain podman[330656]: 2025-12-05 10:22:59.636049932 +0000 UTC m=+0.202455294 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, vendor=Red Hat, Inc.)
Dec 05 10:22:59 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:22:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:22:59.866 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:00 np0005546420.localdomain ceph-mon[298353]: pgmap v837: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 16 KiB/s wr, 0 op/s
Dec 05 10:23:01 np0005546420.localdomain sshd[330697]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:23:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "format": "json"}]: dispatch
Dec 05 10:23:01 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "force": true, "format": "json"}]: dispatch
Dec 05 10:23:02 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e289 e289: 6 total, 6 up, 6 in
Dec 05 10:23:02 np0005546420.localdomain ceph-mon[298353]: pgmap v838: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s
Dec 05 10:23:02 np0005546420.localdomain ceph-mon[298353]: osdmap e289: 6 total, 6 up, 6 in
Dec 05 10:23:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:23:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2869923890' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:23:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:23:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2869923890' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:23:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:03.750 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2869923890' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:23:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2869923890' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:23:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:23:04.140 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:23:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:23:04.141 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:23:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:23:04.141 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:23:04 np0005546420.localdomain ceph-mon[298353]: pgmap v840: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 52 KiB/s wr, 3 op/s
Dec 05 10:23:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:04.908 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:23:05.194 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:23:05 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:23:05.195 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:23:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:05.196 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:23:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:23:05 np0005546420.localdomain podman[330699]: 2025-12-05 10:23:05.519487013 +0000 UTC m=+0.091879999 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 05 10:23:05 np0005546420.localdomain podman[330699]: 2025-12-05 10:23:05.532133513 +0000 UTC m=+0.104526489 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:23:05 np0005546420.localdomain podman[330700]: 2025-12-05 10:23:05.568669852 +0000 UTC m=+0.137066905 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 10:23:05 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:23:05 np0005546420.localdomain podman[330700]: 2025-12-05 10:23:05.647554228 +0000 UTC m=+0.215951261 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:23:05 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:23:05 np0005546420.localdomain ceph-mon[298353]: pgmap v841: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 52 KiB/s wr, 3 op/s
Dec 05 10:23:06 np0005546420.localdomain sshd[330742]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:23:08 np0005546420.localdomain ceph-mon[298353]: pgmap v842: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 47 KiB/s wr, 3 op/s
Dec 05 10:23:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:08.790 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:09.943 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:10 np0005546420.localdomain ceph-mon[298353]: pgmap v843: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 47 KiB/s wr, 3 op/s
Dec 05 10:23:11 np0005546420.localdomain sshd[330742]: Received disconnect from 41.94.88.49 port 39400:11: Bye Bye [preauth]
Dec 05 10:23:11 np0005546420.localdomain sshd[330742]: Disconnected from authenticating user root 41.94.88.49 port 39400 [preauth]
Dec 05 10:23:11 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 e290: 6 total, 6 up, 6 in
Dec 05 10:23:12 np0005546420.localdomain ceph-mon[298353]: osdmap e290: 6 total, 6 up, 6 in
Dec 05 10:23:12 np0005546420.localdomain ceph-mon[298353]: pgmap v845: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 223 B/s rd, 26 KiB/s wr, 2 op/s
Dec 05 10:23:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:23:13 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:23:13.197 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:23:13 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:23:13 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "format": "json"}]: dispatch
Dec 05 10:23:13 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:13.816 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:23:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:23:14 np0005546420.localdomain ceph-mon[298353]: pgmap v846: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 24 KiB/s wr, 1 op/s
Dec 05 10:23:14 np0005546420.localdomain podman[330744]: 2025-12-05 10:23:14.514396922 +0000 UTC m=+0.089652480 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:23:14 np0005546420.localdomain podman[330744]: 2025-12-05 10:23:14.524313888 +0000 UTC m=+0.099569426 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:23:14 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:23:14 np0005546420.localdomain podman[330745]: 2025-12-05 10:23:14.617207617 +0000 UTC m=+0.188286967 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:23:14 np0005546420.localdomain podman[330745]: 2025-12-05 10:23:14.62734695 +0000 UTC m=+0.198426310 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 10:23:14 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:23:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:15.002 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:16 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "snap_name": "cbd95e9a-4741-4e3c-931e-7bcd695d1401", "format": "json"}]: dispatch
Dec 05 10:23:16 np0005546420.localdomain ceph-mon[298353]: pgmap v847: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 24 KiB/s wr, 1 op/s
Dec 05 10:23:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:23:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:23:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:23:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:23:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:23:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18304 "" "Go-http-client/1.1"
Dec 05 10:23:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:23:18 np0005546420.localdomain podman[330785]: 2025-12-05 10:23:18.514166518 +0000 UTC m=+0.084948235 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:23:18 np0005546420.localdomain podman[330785]: 2025-12-05 10:23:18.524544089 +0000 UTC m=+0.095325836 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:23:18 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
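[annotation] The two podman events above plus the systemd deactivation are the tail of one container health-check cycle: a transient systemd unit execs `podman healthcheck run` for the container, podman records health_status and exec_died, and the unit deactivates. A minimal sketch of the same probe, assuming the podman CLI is on PATH and using the container name from the log:

    import subprocess

    # `podman healthcheck run` exits 0 when the container's configured
    # test (/openstack/healthcheck, per the config_data above) passes.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0 else "unhealthy")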
Dec 05 10:23:18 np0005546420.localdomain ceph-mon[298353]: pgmap v848: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 29 KiB/s wr, 1 op/s
Dec 05 10:23:18 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:18.818 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:23:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:23:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:23:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:23:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:23:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:23:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:23:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:23:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:23:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
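[annotation] The openstack_network_exporter errors above mean the exporter cannot find the ovsdb-server and ovn-northd control sockets on this host, and the dpif-netdev appctl calls fail because no userspace (DPDK) datapath exists here. A minimal sketch of the same socket probe, assuming the conventional default socket directories (which may differ on this deployment):

    import glob

    # Control sockets that ovs-appctl/ovn-appctl would target; the
    # /run/openvswitch and /run/ovn paths are conventional defaults,
    # not confirmed by the log.
    for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        if not matches:
            print(f"no control socket files found for {pattern}")
        else:
            print(f"found: {matches}")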
Dec 05 10:23:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:20.040 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "snap_name": "cbd95e9a-4741-4e3c-931e-7bcd695d1401", "target_sub_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "format": "json"}]: dispatch
Dec 05 10:23:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "format": "json"}]: dispatch
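[annotation] The two audit lines above record the asynchronous CephFS clone workflow driven by client.openstack: a subvolume snapshot clone is dispatched, then the clone is polled with "fs clone status". A sketch of the equivalent ceph CLI calls, with the volume, subvolume, snapshot, and clone names copied from the logged payloads:

    import subprocess

    # Clone a subvolume snapshot, then poll the clone's status,
    # mirroring the two mon commands dispatched in the log.
    subprocess.run(
        ["ceph", "fs", "subvolume", "snapshot", "clone", "cephfs",
         "8a959872-096f-4524-beb3-16ecf762162b",
         "cbd95e9a-4741-4e3c-931e-7bcd695d1401",
         "57c122f3-1783-406d-a501-cfd05f2e9a11"],
        check=True)
    status = subprocess.check_output(
        ["ceph", "fs", "clone", "status", "cephfs",
         "57c122f3-1783-406d-a501-cfd05f2e9a11", "--format", "json"])
    print(status.decode())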
Dec 05 10:23:20 np0005546420.localdomain ceph-mon[298353]: pgmap v849: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 29 KiB/s wr, 1 op/s
Dec 05 10:23:22 np0005546420.localdomain ceph-mon[298353]: pgmap v850: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s wr, 2 op/s
Dec 05 10:23:23 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:23.861 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:24 np0005546420.localdomain ceph-mon[298353]: pgmap v851: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s wr, 1 op/s
Dec 05 10:23:24 np0005546420.localdomain sudo[330804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:23:24 np0005546420.localdomain sudo[330804]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:23:24 np0005546420.localdomain sudo[330804]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:25 np0005546420.localdomain sudo[330822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:23:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:25.084 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:25 np0005546420.localdomain sudo[330822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:23:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:25 np0005546420.localdomain sudo[330822]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:26 np0005546420.localdomain ceph-mon[298353]: pgmap v852: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s wr, 1 op/s
Dec 05 10:23:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:27.350 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:27.351 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
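[annotation] The two nova-compute lines above show the periodic-task guard: _reclaim_queued_deletes returns immediately because CONF.reclaim_instance_interval is not positive, so soft-deleted instances are never reclaimed on this node. A minimal sketch of that guard pattern; the Conf class and task body are placeholders, not nova's actual objects:

    # Hedged sketch of the guard seen in the log.
    class Conf:
        reclaim_instance_interval = 0  # <= 0 disables reclaim

    def reclaim_queued_deletes(conf):
        if conf.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...reclaim soft-deleted instances here...

    reclaim_queued_deletes(Conf())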
Dec 05 10:23:28 np0005546420.localdomain sudo[330873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:23:28 np0005546420.localdomain sudo[330873]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:23:28 np0005546420.localdomain sudo[330873]: pam_unix(sudo:session): session closed for user root
Dec 05 10:23:28 np0005546420.localdomain ceph-mon[298353]: pgmap v853: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 340 B/s rd, 52 KiB/s wr, 3 op/s
Dec 05 10:23:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:23:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:23:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:23:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:23:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:23:28 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:23:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:28.898 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:28.899 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:29.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:29.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:23:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:29.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:23:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:30.117 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:30.243 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:23:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:23:30 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:23:30 np0005546420.localdomain podman[330891]: 2025-12-05 10:23:30.521362609 +0000 UTC m=+0.093313033 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.expose-services=, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:23:30 np0005546420.localdomain systemd[1]: tmp-crun.WkpuRP.mount: Deactivated successfully.
Dec 05 10:23:30 np0005546420.localdomain podman[330892]: 2025-12-05 10:23:30.575702308 +0000 UTC m=+0.147663432 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:23:30 np0005546420.localdomain podman[330891]: 2025-12-05 10:23:30.588027799 +0000 UTC m=+0.159978253 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 10:23:30 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:23:30 np0005546420.localdomain podman[330892]: 2025-12-05 10:23:30.609867554 +0000 UTC m=+0.181828658 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:23:30 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:23:30 np0005546420.localdomain ceph-mon[298353]: pgmap v854: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 340 B/s rd, 35 KiB/s wr, 2 op/s
Dec 05 10:23:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2154872146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:23:30 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:23:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1122658377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:23:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:32.239 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:32 np0005546420.localdomain ceph-mon[298353]: pgmap v855: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 53 KiB/s wr, 4 op/s
Dec 05 10:23:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:32.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:33.960 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:34 np0005546420.localdomain ceph-mon[298353]: pgmap v856: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 43 KiB/s wr, 3 op/s
Dec 05 10:23:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:34.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:35.156 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:35 np0005546420.localdomain sshd[330935]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:23:35 np0005546420.localdomain sshd[330936]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:23:35 np0005546420.localdomain sshd[330936]: error: kex_exchange_identification: read: Connection reset by peer
Dec 05 10:23:35 np0005546420.localdomain sshd[330936]: Connection reset by 45.140.17.97 port 17858
Dec 05 10:23:35 np0005546420.localdomain ceph-mon[298353]: pgmap v857: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 43 KiB/s wr, 3 op/s
Dec 05 10:23:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:35.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:35.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:35.909 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:23:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:35.910 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:23:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:35.910 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
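[annotation] The "compute_resources" acquire/held/released lines above are emitted by oslo.concurrency's lockutils wrapper (the `inner` function named in the log). A minimal sketch of the same pattern through the public lockutils API, assuming oslo.concurrency is installed; the lock name matches the log, the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs while holding the in-process "compute_resources" lock,
        # mirroring the ResourceTracker methods in the log.
        pass

    clean_compute_node_cache()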
Dec 05 10:23:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:35.911 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:23:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:35.911 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:23:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:23:36 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4080366141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:23:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:23:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:23:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:36.422 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:23:36 np0005546420.localdomain podman[330960]: 2025-12-05 10:23:36.527067156 +0000 UTC m=+0.100078142 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 10:23:36 np0005546420.localdomain podman[330959]: 2025-12-05 10:23:36.564793902 +0000 UTC m=+0.142190024 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:23:36 np0005546420.localdomain podman[330960]: 2025-12-05 10:23:36.579440944 +0000 UTC m=+0.152451980 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:23:36 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:23:36 np0005546420.localdomain podman[330959]: 2025-12-05 10:23:36.630249023 +0000 UTC m=+0.207645175 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute)
Dec 05 10:23:36 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:23:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:36.664 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:23:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:36.667 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11399MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:23:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:36.668 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:23:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:36.668 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:23:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:36.720 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:23:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:36.720 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:23:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:36.738 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:23:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2229880570' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:23:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/4080366141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:23:37 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:23:37 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1692855588' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:23:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:37.224 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
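[annotation] For the RBD storage backend, nova's resource tracker shells out to `ceph df --format=json` via oslo_concurrency.processutils, which is what produces the mon "df" dispatch lines interleaved above. A standalone sketch of the same call, assuming the ceph CLI and the client.openstack keyring are available; the --id and --conf values are taken verbatim from the log:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)
    # Key layout per recent Ceph releases; adjust if your release differs.
    print(stats["stats"]["total_avail_bytes"])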
Dec 05 10:23:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:37.231 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:23:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:37.252 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:23:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:37.256 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:23:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:37.256 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:23:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3960805442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:23:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1692855588' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:23:37 np0005546420.localdomain ceph-mon[298353]: pgmap v858: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 53 KiB/s wr, 4 op/s
Dec 05 10:23:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:38.257 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:38.992 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:39.869 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:23:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:40.175 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:40 np0005546420.localdomain ceph-mon[298353]: pgmap v859: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 28 KiB/s wr, 2 op/s
Dec 05 10:23:42 np0005546420.localdomain ceph-mon[298353]: pgmap v860: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 36 KiB/s wr, 3 op/s
Dec 05 10:23:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:44.043 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:44 np0005546420.localdomain ceph-mon[298353]: pgmap v861: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 18 KiB/s wr, 1 op/s
Dec 05 10:23:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:45.216 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:23:45 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:23:45 np0005546420.localdomain podman[331025]: 2025-12-05 10:23:45.526599139 +0000 UTC m=+0.094141739 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:23:45 np0005546420.localdomain podman[331025]: 2025-12-05 10:23:45.536381231 +0000 UTC m=+0.103924351 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:23:45 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:23:45 np0005546420.localdomain podman[331026]: 2025-12-05 10:23:45.624176583 +0000 UTC m=+0.188166784 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:23:45 np0005546420.localdomain podman[331026]: 2025-12-05 10:23:45.660439253 +0000 UTC m=+0.224429504 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:23:45 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:23:46 np0005546420.localdomain ceph-mon[298353]: pgmap v862: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 18 KiB/s wr, 1 op/s
Dec 05 10:23:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:23:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:23:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:23:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:23:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:23:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18304 "" "Go-http-client/1.1"
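[annotation] The podman[240363] lines above are the Podman REST service logging libpod API requests; per the podman_exporter config logged earlier, the client reaches it over /run/podman/podman.sock. A sketch of querying the same containers/json endpoint from Python's standard library over that unix socket (the shim class is illustrative, not a Podman SDK):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Minimal HTTP-over-unix-socket shim for the Podman API socket.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")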
Dec 05 10:23:48 np0005546420.localdomain ceph-mon[298353]: pgmap v863: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 18 KiB/s wr, 1 op/s
Dec 05 10:23:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:23:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:23:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:23:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:23:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:23:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:23:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:23:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:23:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:23:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:23:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:49.045 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:23:49 np0005546420.localdomain podman[331066]: 2025-12-05 10:23:49.513153768 +0000 UTC m=+0.090320341 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 10:23:49 np0005546420.localdomain podman[331066]: 2025-12-05 10:23:49.529340928 +0000 UTC m=+0.106507541 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 10:23:49 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:23:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:50.261 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:50 np0005546420.localdomain ceph-mon[298353]: pgmap v864: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 7.7 KiB/s wr, 0 op/s
Dec 05 10:23:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "format": "json"}]: dispatch
Dec 05 10:23:51 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:23:52 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "format": "json"}]: dispatch
Dec 05 10:23:52 np0005546420.localdomain ceph-mon[298353]: pgmap v865: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 7.7 KiB/s wr, 0 op/s
Dec 05 10:23:52 np0005546420.localdomain sshd[331085]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:23:54 np0005546420.localdomain sshd[331085]: Received disconnect from 24.232.50.5 port 50910:11: Bye Bye [preauth]
Dec 05 10:23:54 np0005546420.localdomain sshd[331085]: Disconnected from authenticating user root 24.232.50.5 port 50910 [preauth]
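The `ssh-rsa algorithm is disabled` notice is consistent with the system-wide crypto policy rejecting SHA-1 RSA signatures on this host; the `[preauth]` disconnects that follow it are failed password-guessing attempts against root. A small sketch for tallying such attempts from an exported journal (the input file path is an assumption):

```python
import re
from collections import Counter

# Matches lines like:
#   "Disconnected from authenticating user root 24.232.50.5 port 50910 [preauth]"
PAT = re.compile(r"Disconnected from authenticating user (\S+) (\d+\.\d+\.\d+\.\d+)")

counts = Counter()
with open("journal.log") as fh:  # assumption: saved `journalctl` output
    for line in fh:
        m = PAT.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1

for (user, ip), n in counts.most_common():
    print(f"{n:4d}  {user}@{ip}")
```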
Dec 05 10:23:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:54.079 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:54 np0005546420.localdomain ceph-mon[298353]: pgmap v866: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s wr, 0 op/s
Dec 05 10:23:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:23:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:55.293 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:55 np0005546420.localdomain ceph-mon[298353]: pgmap v867: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s wr, 0 op/s
Dec 05 10:23:58 np0005546420.localdomain ceph-mon[298353]: pgmap v868: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec 05 10:23:58 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:23:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:23:59.083 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:23:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a85e7b5b-d7ad-447d-9406-7e6c08cbb29a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:23:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a85e7b5b-d7ad-447d-9406-7e6c08cbb29a", "format": "json"}]: dispatch
Dec 05 10:23:59 np0005546420.localdomain sshd[331087]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:00.319 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: pgmap v869: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.9 KiB/s wr, 0 op/s
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.778183) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240778295, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1955, "num_deletes": 253, "total_data_size": 2898814, "memory_usage": 2941640, "flush_reason": "Manual Compaction"}
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240792918, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 1879954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40497, "largest_seqno": 42447, "table_properties": {"data_size": 1872568, "index_size": 4279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17200, "raw_average_key_size": 21, "raw_value_size": 1857133, "raw_average_value_size": 2304, "num_data_blocks": 184, "num_entries": 806, "num_filter_entries": 806, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930111, "oldest_key_time": 1764930111, "file_creation_time": 1764930240, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 14839 microseconds, and 6302 cpu microseconds.
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.793034) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 1879954 bytes OK
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.793064) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.794761) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.794782) EVENT_LOG_v1 {"time_micros": 1764930240794776, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.794813) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2889772, prev total WAL file size 2889772, number of live WAL files 2.
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.795827) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. '7061786F73003133353535' seq:0, type:0; will stop at (end)
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(1835KB)], [72(17MB)]
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240795894, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 19869524, "oldest_snapshot_seqno": -1}
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 14521 keys, 18558515 bytes, temperature: kUnknown
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240895109, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 18558515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18475250, "index_size": 45852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 388551, "raw_average_key_size": 26, "raw_value_size": 18228519, "raw_average_value_size": 1255, "num_data_blocks": 1710, "num_entries": 14521, "num_filter_entries": 14521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764930240, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.895465) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 18558515 bytes
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.897088) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.0 rd, 186.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 17.2 +0.0 blob) out(17.7 +0.0 blob), read-write-amplify(20.4) write-amplify(9.9) OK, records in: 15056, records dropped: 535 output_compression: NoCompression
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.897117) EVENT_LOG_v1 {"time_micros": 1764930240897103, "job": 44, "event": "compaction_finished", "compaction_time_micros": 99338, "compaction_time_cpu_micros": 53042, "output_level": 6, "num_output_files": 1, "total_output_size": 18558515, "num_input_records": 15056, "num_output_records": 14521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240897525, "job": 44, "event": "table_file_deletion", "file_number": 74}
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240900690, "job": 44, "event": "table_file_deletion", "file_number": 72}
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.795666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.900783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.900789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.900792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.900795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:24:00 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:24:00.900798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
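The amplification figures in the JOB 44 compaction summary can be re-derived from the exact byte counts in the surrounding EVENT_LOG lines; a quick check:

```python
# Byte counts taken from the table_file_creation / compaction_started events:
l0_in = 1_879_954        # flushed L0 table #74 (the new input)
total_in = 19_869_524    # input_data_size: L0 #74 plus L6 #72
out = 18_558_515         # compacted L6 output table #75

write_amplify = out / l0_in                    # bytes written per new L0 byte
read_write_amplify = (total_in + out) / l0_in  # bytes read+written per L0 byte

print(f"write-amplify      {write_amplify:.1f}")       # 9.9, as reported
print(f"read-write-amplify {read_write_amplify:.1f}")  # 20.4, as reported
```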
Dec 05 10:24:01 np0005546420.localdomain sshd[331087]: Received disconnect from 163.44.99.31 port 37430:11: Bye Bye [preauth]
Dec 05 10:24:01 np0005546420.localdomain sshd[331087]: Disconnected from authenticating user root 163.44.99.31 port 37430 [preauth]
Dec 05 10:24:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:24:01 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:24:01 np0005546420.localdomain podman[331089]: 2025-12-05 10:24:01.334454219 +0000 UTC m=+0.092626632 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, vcs-type=git, version=9.6)
Dec 05 10:24:01 np0005546420.localdomain podman[331090]: 2025-12-05 10:24:01.389635073 +0000 UTC m=+0.143638617 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:24:01 np0005546420.localdomain podman[331090]: 2025-12-05 10:24:01.400357105 +0000 UTC m=+0.154360659 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:24:01 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:24:01 np0005546420.localdomain podman[331089]: 2025-12-05 10:24:01.472306867 +0000 UTC m=+0.230479290 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 10:24:01 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:24:02 np0005546420.localdomain ceph-mon[298353]: pgmap v870: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Dec 05 10:24:02 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "a85e7b5b-d7ad-447d-9406-7e6c08cbb29a", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 05 10:24:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4162796722' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:24:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4162796722' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
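The `df` / `osd pool get-quota` pair above is a capacity poll against the `volumes` pool. Sketch of the client side, under the same assumptions as the earlier ceph example:

```python
import json
import subprocess

def mon_cmd(*args: str):
    """Issue a mon command as client.openstack and parse the JSON reply."""
    return json.loads(subprocess.check_output(
        ["ceph", "--name", "client.openstack", *args, "--format", "json"],
        text=True))

df = mon_cmd("df")
quota = mon_cmd("osd", "pool", "get-quota", "volumes")
print(df["stats"]["total_avail_bytes"], quota)
```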
Dec 05 10:24:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:04.108 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:24:04.141 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:24:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:24:04.141 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:24:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:24:04.142 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
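The acquiring/acquired/released trio above is oslo.concurrency's named-lock logging around the agent's process monitor. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the body is a placeholder, not neutron's actual monitor code:

```python
from oslo_concurrency import lockutils

def check_child_processes():
    # The named lock serializes concurrent monitor runs within one process,
    # producing the Acquiring / acquired / released DEBUG lines seen above.
    with lockutils.lock("_check_child_processes"):
        pass  # inspect child processes here

check_child_processes()
```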
Dec 05 10:24:04 np0005546420.localdomain ceph-mon[298353]: pgmap v871: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Dec 05 10:24:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:05.349 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a85e7b5b-d7ad-447d-9406-7e6c08cbb29a", "format": "json"}]: dispatch
Dec 05 10:24:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a85e7b5b-d7ad-447d-9406-7e6c08cbb29a", "force": true, "format": "json"}]: dispatch
Dec 05 10:24:06 np0005546420.localdomain ceph-mon[298353]: pgmap v872: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Dec 05 10:24:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:24:07 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:24:07 np0005546420.localdomain podman[331133]: 2025-12-05 10:24:07.505431129 +0000 UTC m=+0.079052572 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:24:07 np0005546420.localdomain podman[331132]: 2025-12-05 10:24:07.584167352 +0000 UTC m=+0.159652912 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:24:07 np0005546420.localdomain podman[331133]: 2025-12-05 10:24:07.595368238 +0000 UTC m=+0.168989731 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 10:24:07 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:24:07 np0005546420.localdomain podman[331132]: 2025-12-05 10:24:07.650859961 +0000 UTC m=+0.226345551 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 10:24:07 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:24:08 np0005546420.localdomain ceph-mon[298353]: pgmap v873: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 34 KiB/s wr, 1 op/s
Dec 05 10:24:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:09.136 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:10.378 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:10 np0005546420.localdomain ceph-mon[298353]: pgmap v874: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 29 KiB/s wr, 0 op/s
Dec 05 10:24:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:24:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:24:12 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e", "format": "json"}]: dispatch
Dec 05 10:24:12 np0005546420.localdomain ceph-mon[298353]: pgmap v875: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 51 KiB/s wr, 2 op/s
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.967 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.968 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:24:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:24:12.969 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
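Each `Skip pollster` line above means that pollster had no discovered resources this polling cycle (the doubled space in the message likely comes from an empty discovery-context placeholder in the format string). An illustrative sketch of the decision, not the actual ceilometer internals:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("ceilometer.polling.manager")

def poll_and_notify(pollsters, discovered_resources):
    for name in pollsters:
        resources = discovered_resources.get(name, [])
        if not resources:
            # Mirrors "Skip pollster <name>, no resources found this cycle"
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            continue
        # ...otherwise collect one sample per resource and notify.

poll_and_notify(["cpu", "memory.usage"], {})  # no instances -> both skipped
```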
Dec 05 10:24:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:14.161 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:14 np0005546420.localdomain ceph-mon[298353]: pgmap v876: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 42 KiB/s wr, 2 op/s
Dec 05 10:24:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:15.409 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:15 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Dec 05 10:24:15 np0005546420.localdomain sshd[331173]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:24:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:24:16 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:24:16 np0005546420.localdomain podman[331175]: 2025-12-05 10:24:16.519356395 +0000 UTC m=+0.092478217 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:24:16 np0005546420.localdomain podman[331176]: 2025-12-05 10:24:16.599332356 +0000 UTC m=+0.168608790 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 10:24:16 np0005546420.localdomain podman[331176]: 2025-12-05 10:24:16.608372575 +0000 UTC m=+0.177648999 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:24:16 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:24:16 np0005546420.localdomain podman[331175]: 2025-12-05 10:24:16.625818804 +0000 UTC m=+0.198940636 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:24:16 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:24:16 np0005546420.localdomain ceph-mon[298353]: pgmap v877: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 42 KiB/s wr, 2 op/s
Dec 05 10:24:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:24:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:24:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:24:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:24:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:24:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18311 "" "Go-http-client/1.1"
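The two `GET /v4.9.3/libpod/...` records are prometheus-podman-exporter scraping the podman API service. The same call can be replayed by hand, assuming curl is available and the socket path mounted into the exporter above:

```python
import subprocess

SOCK = "/run/podman/podman.sock"  # as mounted into podman_exporter above
URL = "http://d/v4.9.3/libpod/containers/json?all=true"

# curl speaks HTTP over the unix socket; the hostname is a placeholder.
out = subprocess.check_output(
    ["curl", "-s", "--unix-socket", SOCK, URL], text=True)
print(out[:200])  # JSON array of containers, the 200-byte head of the reply
```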
Dec 05 10:24:17 np0005546420.localdomain sshd[331173]: Received disconnect from 197.248.8.33 port 56214:11: Bye Bye [preauth]
Dec 05 10:24:17 np0005546420.localdomain sshd[331173]: Disconnected from authenticating user root 197.248.8.33 port 56214 [preauth]
Dec 05 10:24:17 np0005546420.localdomain ceph-mon[298353]: pgmap v878: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 55 KiB/s wr, 2 op/s
Dec 05 10:24:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e", "format": "json"}]: dispatch
Dec 05 10:24:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e", "force": true, "format": "json"}]: dispatch
Dec 05 10:24:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:24:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:24:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:24:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:24:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:24:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:24:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:24:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:24:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:24:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
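openstack_network_exporter is probing OVS/OVN control sockets that do not exist on this compute node: no local ovsdb-server or ovn-northd, and a kernel datapath, so the `dpif-netdev/*` commands have no userspace datapath to answer about. A hedged sketch of the probe; the socket glob below is an assumption about where ovs-vswitchd publishes its control socket:

```python
import glob
import subprocess

ctl = glob.glob("/var/run/openvswitch/ovs-vswitchd.*.ctl")
if not ctl:
    # Same failure mode as the "no control socket files found" errors above.
    print("no control socket files found")
else:
    # pmd-perf-show only answers when a userspace (netdev) datapath exists,
    # hence "please specify an existing datapath" on this host.
    subprocess.run(["ovs-appctl", "-t", ctl[0], "dpif-netdev/pmd-perf-show"])
```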
Dec 05 10:24:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:19.201 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:20.445 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:24:20 np0005546420.localdomain podman[331216]: 2025-12-05 10:24:20.546076365 +0000 UTC m=+0.087805344 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:24:20 np0005546420.localdomain podman[331216]: 2025-12-05 10:24:20.564451252 +0000 UTC m=+0.106180231 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 10:24:20 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:24:20 np0005546420.localdomain ceph-mon[298353]: pgmap v879: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 35 KiB/s wr, 2 op/s
Dec 05 10:24:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:24:22 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dc0bcf67-bebb-4a88-beaa-15102c0e4d74", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:24:22 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dc0bcf67-bebb-4a88-beaa-15102c0e4d74", "format": "json"}]: dispatch
Dec 05 10:24:22 np0005546420.localdomain ceph-mon[298353]: pgmap v880: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 71 KiB/s wr, 4 op/s
Dec 05 10:24:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:22.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:24.233 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:24 np0005546420.localdomain ceph-mon[298353]: pgmap v881: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 48 KiB/s wr, 2 op/s
Dec 05 10:24:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:25.470 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:26 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dc0bcf67-bebb-4a88-beaa-15102c0e4d74", "format": "json"}]: dispatch
Dec 05 10:24:26 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dc0bcf67-bebb-4a88-beaa-15102c0e4d74", "force": true, "format": "json"}]: dispatch
Dec 05 10:24:26 np0005546420.localdomain ceph-mon[298353]: pgmap v882: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 48 KiB/s wr, 2 op/s
Dec 05 10:24:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:26.886 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:26.886 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:24:28 np0005546420.localdomain sudo[331236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:24:28 np0005546420.localdomain ceph-mon[298353]: pgmap v883: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 63 KiB/s wr, 2 op/s
Dec 05 10:24:28 np0005546420.localdomain sudo[331236]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:24:28 np0005546420.localdomain sudo[331236]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:28 np0005546420.localdomain sudo[331254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:24:28 np0005546420.localdomain sudo[331254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:24:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:29.282 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:29 np0005546420.localdomain sudo[331254]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:29 np0005546420.localdomain sudo[331304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:24:29 np0005546420.localdomain sudo[331304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:24:29 np0005546420.localdomain sudo[331304]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "format": "json"}]: dispatch
Dec 05 10:24:29 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "force": true, "format": "json"}]: dispatch
Dec 05 10:24:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:29.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:29 np0005546420.localdomain sudo[331322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b -- inventory --format=json-pretty --filter-for-batch
Dec 05 10:24:29 np0005546420.localdomain sudo[331322]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:24:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:30.515 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:30 np0005546420.localdomain podman[331380]: 2025-12-05 10:24:30.566594811 +0000 UTC m=+0.086291675 container create b148554d792d0bc993603918ff55f9172b8c634d75112e5d0ac333ea9a8f515d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_mirzakhani, release=1763362218, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.41.4, io.openshift.expose-services=, ceph=True, architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.tags=rhceph ceph)
Dec 05 10:24:30 np0005546420.localdomain systemd[1]: Started libpod-conmon-b148554d792d0bc993603918ff55f9172b8c634d75112e5d0ac333ea9a8f515d.scope.
Dec 05 10:24:30 np0005546420.localdomain podman[331380]: 2025-12-05 10:24:30.530675533 +0000 UTC m=+0.050372427 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 10:24:30 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:24:30 np0005546420.localdomain podman[331380]: 2025-12-05 10:24:30.648054738 +0000 UTC m=+0.167751612 container init b148554d792d0bc993603918ff55f9172b8c634d75112e5d0ac333ea9a8f515d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_mirzakhani, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, version=7, ceph=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218)
Dec 05 10:24:30 np0005546420.localdomain systemd[1]: tmp-crun.De0pGd.mount: Deactivated successfully.
Dec 05 10:24:30 np0005546420.localdomain podman[331380]: 2025-12-05 10:24:30.66335836 +0000 UTC m=+0.183055224 container start b148554d792d0bc993603918ff55f9172b8c634d75112e5d0ac333ea9a8f515d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_mirzakhani, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_CLEAN=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 10:24:30 np0005546420.localdomain podman[331380]: 2025-12-05 10:24:30.663784793 +0000 UTC m=+0.183481657 container attach b148554d792d0bc993603918ff55f9172b8c634d75112e5d0ac333ea9a8f515d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_mirzakhani, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, version=7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, ceph=True)
Dec 05 10:24:30 np0005546420.localdomain nifty_mirzakhani[331395]: 167 167
Dec 05 10:24:30 np0005546420.localdomain systemd[1]: libpod-b148554d792d0bc993603918ff55f9172b8c634d75112e5d0ac333ea9a8f515d.scope: Deactivated successfully.
Dec 05 10:24:30 np0005546420.localdomain podman[331380]: 2025-12-05 10:24:30.670549093 +0000 UTC m=+0.190245957 container died b148554d792d0bc993603918ff55f9172b8c634d75112e5d0ac333ea9a8f515d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_mirzakhani, release=1763362218, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., RELEASE=main)
Dec 05 10:24:30 np0005546420.localdomain podman[331401]: 2025-12-05 10:24:30.78668681 +0000 UTC m=+0.102334452 container remove b148554d792d0bc993603918ff55f9172b8c634d75112e5d0ac333ea9a8f515d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_mirzakhani, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, distribution-scope=public, release=1763362218, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 05 10:24:30 np0005546420.localdomain systemd[1]: libpod-conmon-b148554d792d0bc993603918ff55f9172b8c634d75112e5d0ac333ea9a8f515d.scope: Deactivated successfully.
Dec 05 10:24:30 np0005546420.localdomain ceph-mon[298353]: pgmap v884: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 51 KiB/s wr, 2 op/s
Dec 05 10:24:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:30.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:30.874 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:24:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:30.875 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:24:31 np0005546420.localdomain podman[331424]: 2025-12-05 10:24:31.03758153 +0000 UTC m=+0.084779200 container create f5db270c3970b96d18beaca7222f2aa74bbe391e9c7acd9980fd7396d4095602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, name=rhceph, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218)
Dec 05 10:24:31 np0005546420.localdomain systemd[1]: Started libpod-conmon-f5db270c3970b96d18beaca7222f2aa74bbe391e9c7acd9980fd7396d4095602.scope.
Dec 05 10:24:31 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:24:31 np0005546420.localdomain podman[331424]: 2025-12-05 10:24:31.002800655 +0000 UTC m=+0.049998365 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 05 10:24:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa2e99159a53414e1061f2819ecf6da3ceadd279b40ceffdb97f18cdb18c514/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa2e99159a53414e1061f2819ecf6da3ceadd279b40ceffdb97f18cdb18c514/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa2e99159a53414e1061f2819ecf6da3ceadd279b40ceffdb97f18cdb18c514/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:31 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa2e99159a53414e1061f2819ecf6da3ceadd279b40ceffdb97f18cdb18c514/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 10:24:31 np0005546420.localdomain podman[331424]: 2025-12-05 10:24:31.110802542 +0000 UTC m=+0.158000212 container init f5db270c3970b96d18beaca7222f2aa74bbe391e9c7acd9980fd7396d4095602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, version=7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 05 10:24:31 np0005546420.localdomain podman[331424]: 2025-12-05 10:24:31.121038397 +0000 UTC m=+0.168236077 container start f5db270c3970b96d18beaca7222f2aa74bbe391e9c7acd9980fd7396d4095602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, release=1763362218, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=)
Dec 05 10:24:31 np0005546420.localdomain podman[331424]: 2025-12-05 10:24:31.121300116 +0000 UTC m=+0.168497816 container attach f5db270c3970b96d18beaca7222f2aa74bbe391e9c7acd9980fd7396d4095602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, vcs-type=git, release=1763362218, GIT_BRANCH=main)
Dec 05 10:24:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:31.147 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:24:31 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-e6f3e5d379b4d028e5b32ce5a9be21cf5a7ab147ef19b50445bb4a210b611282-merged.mount: Deactivated successfully.
Dec 05 10:24:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:24:31 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:24:31 np0005546420.localdomain podman[331460]: 2025-12-05 10:24:31.704634824 +0000 UTC m=+0.092990083 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:24:31 np0005546420.localdomain podman[331460]: 2025-12-05 10:24:31.74240174 +0000 UTC m=+0.130757009 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:24:31 np0005546420.localdomain systemd[1]: tmp-crun.Sf1xrQ.mount: Deactivated successfully.
Dec 05 10:24:31 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:24:31 np0005546420.localdomain podman[331459]: 2025-12-05 10:24:31.766427603 +0000 UTC m=+0.153954157 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, version=9.6, release=1755695350, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container)
Dec 05 10:24:31 np0005546420.localdomain podman[331459]: 2025-12-05 10:24:31.785383458 +0000 UTC m=+0.172909982 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350)
Dec 05 10:24:31 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]: [
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:     {
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:         "available": false,
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:         "ceph_device": false,
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:         "lsm_data": {},
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:         "lvs": [],
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:         "path": "/dev/sr0",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:         "rejected_reasons": [
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "Has a FileSystem",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "Insufficient space (<5GB)"
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:         ],
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:         "sys_api": {
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "actuators": null,
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "device_nodes": "sr0",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "human_readable_size": "482.00 KB",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "id_bus": "ata",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "model": "QEMU DVD-ROM",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "nr_requests": "2",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "partitions": {},
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "path": "/dev/sr0",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "removable": "1",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "rev": "2.5+",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "ro": "0",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "rotational": "1",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "sas_address": "",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "sas_device_handle": "",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "scheduler_mode": "mq-deadline",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "sectors": 0,
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "sectorsize": "2048",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "size": 493568.0,
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "support_discard": "0",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "type": "disk",
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:             "vendor": "QEMU"
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:         }
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]:     }
Dec 05 10:24:32 np0005546420.localdomain admiring_dirac[331440]: ]
Dec 05 10:24:32 np0005546420.localdomain systemd[1]: libpod-f5db270c3970b96d18beaca7222f2aa74bbe391e9c7acd9980fd7396d4095602.scope: Deactivated successfully.
Dec 05 10:24:32 np0005546420.localdomain systemd[1]: libpod-f5db270c3970b96d18beaca7222f2aa74bbe391e9c7acd9980fd7396d4095602.scope: Consumed 1.145s CPU time.
Dec 05 10:24:32 np0005546420.localdomain podman[331424]: 2025-12-05 10:24:32.250374921 +0000 UTC m=+1.297572591 container died f5db270c3970b96d18beaca7222f2aa74bbe391e9c7acd9980fd7396d4095602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, name=rhceph, io.buildah.version=1.41.4, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_CLEAN=True)
Dec 05 10:24:32 np0005546420.localdomain podman[333461]: 2025-12-05 10:24:32.340536546 +0000 UTC m=+0.079466946 container remove f5db270c3970b96d18beaca7222f2aa74bbe391e9c7acd9980fd7396d4095602 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_dirac, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7)
Dec 05 10:24:32 np0005546420.localdomain systemd[1]: libpod-conmon-f5db270c3970b96d18beaca7222f2aa74bbe391e9c7acd9980fd7396d4095602.scope: Deactivated successfully.
Dec 05 10:24:32 np0005546420.localdomain sudo[331322]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:32 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "snap_name": "cbd95e9a-4741-4e3c-931e-7bcd695d1401_ea25bb09-4f0c-4041-9779-4529a3bb81e3", "force": true, "format": "json"}]: dispatch
Dec 05 10:24:32 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "snap_name": "cbd95e9a-4741-4e3c-931e-7bcd695d1401", "force": true, "format": "json"}]: dispatch
Dec 05 10:24:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/398949017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:24:32 np0005546420.localdomain ceph-mon[298353]: pgmap v885: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 80 KiB/s wr, 4 op/s
Dec 05 10:24:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:24:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:24:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:24:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:24:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:24:32 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:24:32 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-daa2e99159a53414e1061f2819ecf6da3ceadd279b40ceffdb97f18cdb18c514-merged.mount: Deactivated successfully.
Dec 05 10:24:32 np0005546420.localdomain sudo[333476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:24:32 np0005546420.localdomain sudo[333476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:24:32 np0005546420.localdomain sudo[333476]: pam_unix(sudo:session): session closed for user root
Dec 05 10:24:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:24:33 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:24:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/634200232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:24:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:33.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:33.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:34.328 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:34 np0005546420.localdomain ceph-mon[298353]: pgmap v886: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 45 KiB/s wr, 2 op/s
Dec 05 10:24:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:35.566 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:35.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:35.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:36 np0005546420.localdomain ceph-mon[298353]: pgmap v887: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 45 KiB/s wr, 2 op/s
Dec 05 10:24:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:24:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:24:38 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:24:38 np0005546420.localdomain systemd[1]: tmp-crun.y5MAyJ.mount: Deactivated successfully.
Dec 05 10:24:38 np0005546420.localdomain podman[333495]: 2025-12-05 10:24:38.53011104 +0000 UTC m=+0.100707911 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 10:24:38 np0005546420.localdomain podman[333494]: 2025-12-05 10:24:38.618752819 +0000 UTC m=+0.192016073 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:24:38 np0005546420.localdomain podman[333494]: 2025-12-05 10:24:38.632824953 +0000 UTC m=+0.206088217 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:24:38 np0005546420.localdomain podman[333495]: 2025-12-05 10:24:38.640835171 +0000 UTC m=+0.211432042 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 10:24:38 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:24:38 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:24:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:38.742 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:24:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:38.742 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:24:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:38.743 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:24:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:38.743 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:24:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:38.744 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:24:38 np0005546420.localdomain ceph-mon[298353]: pgmap v888: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 58 KiB/s wr, 4 op/s
Dec 05 10:24:39 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:24:39 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3353058958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:24:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:39.206 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:24:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:39.365 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:39.431 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:24:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:39.432 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11424MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:24:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:39.432 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:24:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:39.433 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:24:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8a959872-096f-4524-beb3-16ecf762162b", "format": "json"}]: dispatch
Dec 05 10:24:39 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "force": true, "format": "json"}]: dispatch
Dec 05 10:24:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/4080813943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:24:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3353058958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:24:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:40.568 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:40.739 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:24:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:40.740 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:24:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:40.765 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:24:40 np0005546420.localdomain ceph-mon[298353]: pgmap v889: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 43 KiB/s wr, 3 op/s
Dec 05 10:24:41 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:24:41 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4252168332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:24:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:41.230 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:24:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:41.237 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:24:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:41.363 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:24:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:41.366 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:24:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:41.367 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:24:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:41.368 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:41.368 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 10:24:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:41.383 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 10:24:41 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/40468034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:24:41 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/4252168332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:24:41 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e291 e291: 6 total, 6 up, 6 in
Dec 05 10:24:42 np0005546420.localdomain ceph-mon[298353]: pgmap v890: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 76 KiB/s wr, 4 op/s
Dec 05 10:24:42 np0005546420.localdomain ceph-mon[298353]: osdmap e291: 6 total, 6 up, 6 in
Dec 05 10:24:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:43.234 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:43 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:24:43.233 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:24:43 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:24:43.235 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:24:43 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:24:43.236 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:24:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:43.383 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:43 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:43.384 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:43 np0005546420.localdomain ceph-mon[298353]: pgmap v892: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 55 KiB/s wr, 3 op/s
Dec 05 10:24:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:44.403 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:45.602 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:46 np0005546420.localdomain ceph-mon[298353]: pgmap v893: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 55 KiB/s wr, 3 op/s
Dec 05 10:24:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:46.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:24:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:46.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 10:24:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:24:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:24:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:24:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:24:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:24:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18304 "" "Go-http-client/1.1"
Dec 05 10:24:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:24:47 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:24:47 np0005546420.localdomain podman[333580]: 2025-12-05 10:24:47.514141572 +0000 UTC m=+0.087081981 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 05 10:24:47 np0005546420.localdomain podman[333579]: 2025-12-05 10:24:47.572615328 +0000 UTC m=+0.147260300 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:24:47 np0005546420.localdomain podman[333579]: 2025-12-05 10:24:47.586211618 +0000 UTC m=+0.160856650 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:24:47 np0005546420.localdomain podman[333580]: 2025-12-05 10:24:47.598765315 +0000 UTC m=+0.171705744 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:24:47 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:24:47 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:24:48 np0005546420.localdomain ceph-mon[298353]: pgmap v894: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 40 KiB/s wr, 2 op/s
Dec 05 10:24:48 np0005546420.localdomain ceph-mon[298353]: mgrmap e54: np0005546419.zhsnqq(active, since 23m), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 05 10:24:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:24:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:24:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:24:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:24:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:24:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:24:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:24:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:24:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:24:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:24:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:49.429 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:50.621 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:50 np0005546420.localdomain ceph-mon[298353]: pgmap v895: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 40 KiB/s wr, 2 op/s
Dec 05 10:24:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:24:51 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 e292: 6 total, 6 up, 6 in
Dec 05 10:24:51 np0005546420.localdomain podman[333620]: 2025-12-05 10:24:51.519444299 +0000 UTC m=+0.093069215 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:24:51 np0005546420.localdomain podman[333620]: 2025-12-05 10:24:51.534392081 +0000 UTC m=+0.108016997 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec 05 10:24:51 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:24:52 np0005546420.localdomain ceph-mon[298353]: osdmap e292: 6 total, 6 up, 6 in
Dec 05 10:24:52 np0005546420.localdomain ceph-mon[298353]: pgmap v897: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s wr, 1 op/s
Dec 05 10:24:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:54.463 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:54 np0005546420.localdomain ceph-mon[298353]: pgmap v898: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s wr, 1 op/s
Dec 05 10:24:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:24:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:55.669 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:56 np0005546420.localdomain ceph-mon[298353]: pgmap v899: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s wr, 1 op/s
Dec 05 10:24:58 np0005546420.localdomain ceph-mon[298353]: pgmap v900: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s
Dec 05 10:24:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:24:59.510 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:24:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:24:59 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:25:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:00.672 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:00 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "format": "json"}]: dispatch
Dec 05 10:25:00 np0005546420.localdomain ceph-mon[298353]: pgmap v901: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s
Dec 05 10:25:01 np0005546420.localdomain sshd[330697]: fatal: Timeout before authentication for 123.59.50.202 port 27097
Dec 05 10:25:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:25:02 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:25:02 np0005546420.localdomain systemd[1]: tmp-crun.Fr7iWp.mount: Deactivated successfully.
Dec 05 10:25:02 np0005546420.localdomain podman[333640]: 2025-12-05 10:25:02.511267629 +0000 UTC m=+0.084617105 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:25:02 np0005546420.localdomain ceph-mon[298353]: pgmap v902: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 15 KiB/s wr, 0 op/s
Dec 05 10:25:02 np0005546420.localdomain podman[333640]: 2025-12-05 10:25:02.543948008 +0000 UTC m=+0.117297504 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:25:02 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:25:02 np0005546420.localdomain podman[333639]: 2025-12-05 10:25:02.562805011 +0000 UTC m=+0.138419997 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Dec 05 10:25:02 np0005546420.localdomain podman[333639]: 2025-12-05 10:25:02.583238552 +0000 UTC m=+0.158853588 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec 05 10:25:02 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:25:03 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "snap_name": "2b579735-df10-471e-b27a-1c53c1d654a4", "format": "json"}]: dispatch
Dec 05 10:25:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2064747278' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:25:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2064747278' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:25:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:25:04.141 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:25:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:25:04.142 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:25:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:25:04.142 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:25:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:04.550 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:04 np0005546420.localdomain ceph-mon[298353]: pgmap v903: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s
Dec 05 10:25:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:05.705 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:05 np0005546420.localdomain ceph-mon[298353]: pgmap v904: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s
Dec 05 10:25:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:25:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6", "format": "json"}]: dispatch
Dec 05 10:25:06 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:25:07 np0005546420.localdomain ceph-mon[298353]: pgmap v905: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s wr, 1 op/s
Dec 05 10:25:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:25:09 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:25:09 np0005546420.localdomain podman[333683]: 2025-12-05 10:25:09.524520338 +0000 UTC m=+0.093187439 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 10:25:09 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:09.586 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:09 np0005546420.localdomain podman[333682]: 2025-12-05 10:25:09.598044599 +0000 UTC m=+0.170236009 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 10:25:09 np0005546420.localdomain podman[333683]: 2025-12-05 10:25:09.604363935 +0000 UTC m=+0.173030956 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 10:25:09 np0005546420.localdomain podman[333682]: 2025-12-05 10:25:09.612373391 +0000 UTC m=+0.184564791 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:25:09 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:25:09 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:25:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:10.707 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:10 np0005546420.localdomain ceph-mon[298353]: pgmap v906: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s wr, 1 op/s
Dec 05 10:25:10 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6", "format": "json"}]: dispatch
Dec 05 10:25:10 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6", "force": true, "format": "json"}]: dispatch
Dec 05 10:25:12 np0005546420.localdomain ceph-mon[298353]: pgmap v907: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 70 KiB/s wr, 2 op/s
Dec 05 10:25:13 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:25:14 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:14.624 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "34c024ab-9cea-4833-9880-2441e954f452", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:25:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "34c024ab-9cea-4833-9880-2441e954f452", "format": "json"}]: dispatch
Dec 05 10:25:14 np0005546420.localdomain ceph-mon[298353]: pgmap v908: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s wr, 1 op/s
Dec 05 10:25:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:15.754 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:16 np0005546420.localdomain ceph-mon[298353]: pgmap v909: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s wr, 1 op/s
Dec 05 10:25:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:25:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:25:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:25:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:25:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:25:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18309 "" "Go-http-client/1.1"
Dec 05 10:25:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "34c024ab-9cea-4833-9880-2441e954f452", "format": "json"}]: dispatch
Dec 05 10:25:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "34c024ab-9cea-4833-9880-2441e954f452", "force": true, "format": "json"}]: dispatch
Dec 05 10:25:18 np0005546420.localdomain sshd[333728]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:25:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:25:18 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:25:18 np0005546420.localdomain systemd[1]: tmp-crun.gEQEPm.mount: Deactivated successfully.
Dec 05 10:25:18 np0005546420.localdomain podman[333731]: 2025-12-05 10:25:18.522027645 +0000 UTC m=+0.087849434 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:25:18 np0005546420.localdomain podman[333731]: 2025-12-05 10:25:18.556433477 +0000 UTC m=+0.122255256 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 10:25:18 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:25:18 np0005546420.localdomain podman[333730]: 2025-12-05 10:25:18.574794425 +0000 UTC m=+0.145954529 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:25:18 np0005546420.localdomain podman[333730]: 2025-12-05 10:25:18.586436304 +0000 UTC m=+0.157596428 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:25:18 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
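Each "Started /usr/bin/podman healthcheck run <id>" line above is a systemd transient unit firing that container's configured healthcheck; the matching health_status/exec_died events and "Deactivated successfully" mark the run completing. The same check can be invoked by hand; a sketch (exit status 0 means the test reported healthy):

    import subprocess

    cid = "cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a"
    # Runs the container's configured healthcheck test once and reports
    # the verdict via the exit code.
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else "unhealthy")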
Dec 05 10:25:18 np0005546420.localdomain ceph-mon[298353]: pgmap v910: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 67 KiB/s wr, 3 op/s
Dec 05 10:25:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:25:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:25:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:25:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:25:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:25:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:25:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:25:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:25:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:25:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
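The exporter errors above are appctl-style probes failing: before sending commands such as dpif-netdev/pmd-perf-show, openstack_network_exporter looks for the target daemon's control socket, and neither ovsdb-server nor ovn-northd runs on this compute node. A sketch of that pre-flight probe; the paths are an assumption of the conventional layout and vary by deployment:

    import glob

    # Assumed conventional layout: each daemon creates a <name>.<pid>.ctl
    # socket that appctl clients dial before issuing commands.
    probes = {
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
    }
    for daemon, pattern in probes.items():
        hits = glob.glob(pattern)
        print(daemon, "->", hits or "no control socket files found")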
Dec 05 10:25:19 np0005546420.localdomain sshd[333728]: Received disconnect from 103.231.14.54 port 52948:11: Bye Bye [preauth]
Dec 05 10:25:19 np0005546420.localdomain sshd[333728]: Disconnected from authenticating user root 103.231.14.54 port 52948 [preauth]
Dec 05 10:25:19 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:19.658 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:20.757 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:20 np0005546420.localdomain ceph-mon[298353]: pgmap v911: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Dec 05 10:25:20 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:25:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9928dfa4-aa49-49e2-81bd-4195ffc621e2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:25:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9928dfa4-aa49-49e2-81bd-4195ffc621e2", "format": "json"}]: dispatch
Dec 05 10:25:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:25:22 np0005546420.localdomain podman[333772]: 2025-12-05 10:25:22.503413813 +0000 UTC m=+0.080878869 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:25:22 np0005546420.localdomain podman[333772]: 2025-12-05 10:25:22.516216659 +0000 UTC m=+0.093681705 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 10:25:22 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:25:22 np0005546420.localdomain ceph-mon[298353]: pgmap v912: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 84 KiB/s wr, 4 op/s
Dec 05 10:25:24 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:24.702 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:24 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9928dfa4-aa49-49e2-81bd-4195ffc621e2", "format": "json"}]: dispatch
Dec 05 10:25:24 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9928dfa4-aa49-49e2-81bd-4195ffc621e2", "force": true, "format": "json"}]: dispatch
Dec 05 10:25:24 np0005546420.localdomain ceph-mon[298353]: pgmap v913: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 48 KiB/s wr, 3 op/s
Dec 05 10:25:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:25.785 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:26 np0005546420.localdomain ceph-mon[298353]: pgmap v914: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 48 KiB/s wr, 3 op/s
Dec 05 10:25:26 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:25:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "51c7d930-3a97-4fa3-ad82-6d9b7230a9bd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:25:27 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "51c7d930-3a97-4fa3-ad82-6d9b7230a9bd", "format": "json"}]: dispatch
Dec 05 10:25:28 np0005546420.localdomain ceph-mon[298353]: pgmap v915: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 70 KiB/s wr, 4 op/s
Dec 05 10:25:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:29.682 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:29.683 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:25:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:29.732 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:29 np0005546420.localdomain ceph-mon[298353]: pgmap v916: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 60 KiB/s wr, 3 op/s
Dec 05 10:25:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:30.811 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:30.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "51c7d930-3a97-4fa3-ad82-6d9b7230a9bd", "format": "json"}]: dispatch
Dec 05 10:25:30 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "51c7d930-3a97-4fa3-ad82-6d9b7230a9bd", "force": true, "format": "json"}]: dispatch
Dec 05 10:25:31 np0005546420.localdomain sshd[333792]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:25:32 np0005546420.localdomain ceph-mon[298353]: pgmap v917: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 98 KiB/s wr, 4 op/s
Dec 05 10:25:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1873975261' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:25:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:32.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:32.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:25:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:32.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:25:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:32.887 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
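The req-125c0aad DEBUG burst above is nova-compute's periodic task loop: oslo.service walks the manager's registered tasks (_heal_instance_info_cache, _reclaim_queued_deletes, and so on) on their configured intervals. A minimal sketch of how such a task is declared, assuming the standard oslo.service decorator API:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)  # run every 60 seconds
        def _heal_instance_info_cache(self, context):
            # Hypothetical body; nova's real task refreshes the
            # per-instance network info cache, as logged above.
            pass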
Dec 05 10:25:32 np0005546420.localdomain sudo[333794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:25:32 np0005546420.localdomain sudo[333794]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:25:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:25:32 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:25:32 np0005546420.localdomain sudo[333794]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:32 np0005546420.localdomain sshd[333792]: Received disconnect from 24.232.50.5 port 58570:11: Bye Bye [preauth]
Dec 05 10:25:32 np0005546420.localdomain sshd[333792]: Disconnected from authenticating user root 24.232.50.5 port 58570 [preauth]
Dec 05 10:25:33 np0005546420.localdomain sudo[333819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:25:33 np0005546420.localdomain sudo[333819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:25:33 np0005546420.localdomain podman[333811]: 2025-12-05 10:25:33.050046643 +0000 UTC m=+0.117474519 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., distribution-scope=public, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Dec 05 10:25:33 np0005546420.localdomain systemd[1]: tmp-crun.z433MC.mount: Deactivated successfully.
Dec 05 10:25:33 np0005546420.localdomain podman[333813]: 2025-12-05 10:25:33.103797734 +0000 UTC m=+0.165724840 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:25:33 np0005546420.localdomain podman[333813]: 2025-12-05 10:25:33.110366686 +0000 UTC m=+0.172293792 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 10:25:33 np0005546420.localdomain podman[333811]: 2025-12-05 10:25:33.118645282 +0000 UTC m=+0.186073168 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, name=ubi9-minimal)
Dec 05 10:25:33 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:25:33 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:25:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3746664931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:25:33 np0005546420.localdomain sudo[333819]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:33.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:34 np0005546420.localdomain sudo[333905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:25:34 np0005546420.localdomain sudo[333905]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:25:34 np0005546420.localdomain sudo[333905]: pam_unix(sudo:session): session closed for user root
Dec 05 10:25:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c8b95898-15a4-4c97-aa64-8387a2d050a5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:25:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c8b95898-15a4-4c97-aa64-8387a2d050a5", "format": "json"}]: dispatch
Dec 05 10:25:34 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:25:34 np0005546420.localdomain ceph-mon[298353]: pgmap v918: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 60 KiB/s wr, 2 op/s
Dec 05 10:25:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:25:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:25:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:25:34 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:25:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:34.771 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:34.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:35.853 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:35.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:35.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:35.893 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:25:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:35.893 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:25:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:35.894 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:25:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:35.894 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:25:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:35.894 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:25:36 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:25:36 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2387561916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:25:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:36.337 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
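The resource tracker's disk figures come from the `ceph df --format=json` subprocess logged above (returned in 0.442s here). A sketch of the same probe, assuming the key names emitted by recent Ceph releases:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    # ~41 GiB avail on this cluster, matching the free_disk figure the
    # tracker reports in its resource view below.
    print(f"free_disk={stats['total_avail_bytes'] / 1024**3:.2f}GB")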
Dec 05 10:25:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:36.578 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:25:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:36.579 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11411MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:25:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:36.579 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:25:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:36.579 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:25:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:36.766 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:25:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:36.767 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:25:36 np0005546420.localdomain ceph-mon[298353]: pgmap v919: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 60 KiB/s wr, 2 op/s
Dec 05 10:25:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:25:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2387561916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:25:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:36.793 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:25:37 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:25:37 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3487356244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:25:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:37.271 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:25:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:37.279 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:25:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:37.298 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
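The inventory dict above fixes what placement will actually schedule: for each resource class the usable capacity is (total - reserved) * allocation_ratio, the rule placement applies when checking allocations. Worked out for this node:

    # Capacity rule applied to the inventory logged above:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 41, "reserved": 1, "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0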
Dec 05 10:25:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:37.301 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:25:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:37.301 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:25:37 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c8b95898-15a4-4c97-aa64-8387a2d050a5", "format": "json"}]: dispatch
Dec 05 10:25:37 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c8b95898-15a4-4c97-aa64-8387a2d050a5", "force": true, "format": "json"}]: dispatch
Dec 05 10:25:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3487356244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:25:37 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3960103133' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:25:38 np0005546420.localdomain sshd[333967]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:25:39 np0005546420.localdomain sshd[333967]: Received disconnect from 178.217.173.50 port 36238:11: Bye Bye [preauth]
Dec 05 10:25:39 np0005546420.localdomain sshd[333967]: Disconnected from authenticating user root 178.217.173.50 port 36238 [preauth]
Dec 05 10:25:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:39.972 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:25:39 np0005546420.localdomain ceph-mon[298353]: pgmap v920: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 69 KiB/s wr, 3 op/s
Dec 05 10:25:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/156447284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:25:39 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:25:40 np0005546420.localdomain podman[333970]: 2025-12-05 10:25:40.081180304 +0000 UTC m=+0.079148315 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 05 10:25:40 np0005546420.localdomain podman[333970]: 2025-12-05 10:25:40.116325259 +0000 UTC m=+0.114293250 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 05 10:25:40 np0005546420.localdomain systemd[1]: tmp-crun.nNSjPK.mount: Deactivated successfully.
Dec 05 10:25:40 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:25:40 np0005546420.localdomain podman[333969]: 2025-12-05 10:25:40.135619155 +0000 UTC m=+0.132356849 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:25:40 np0005546420.localdomain podman[333969]: 2025-12-05 10:25:40.174483716 +0000 UTC m=+0.171221390 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 10:25:40 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:25:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:40.896 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:40 np0005546420.localdomain ceph-mon[298353]: pgmap v921: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 48 KiB/s wr, 2 op/s
Dec 05 10:25:40 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "snap_name": "2b579735-df10-471e-b27a-1c53c1d654a4_3bb1d1a9-1164-4b5f-8e39-c63568c7ef8a", "force": true, "format": "json"}]: dispatch
Dec 05 10:25:40 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "snap_name": "2b579735-df10-471e-b27a-1c53c1d654a4", "force": true, "format": "json"}]: dispatch
Dec 05 10:25:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:41.302 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:41.303 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:42 np0005546420.localdomain ceph-mon[298353]: pgmap v922: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 94 KiB/s wr, 5 op/s
Dec 05 10:25:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:42.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:25:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:44.974 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:45 np0005546420.localdomain ceph-mon[298353]: pgmap v923: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 55 KiB/s wr, 3 op/s
Dec 05 10:25:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:45.940 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "format": "json"}]: dispatch
Dec 05 10:25:46 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "force": true, "format": "json"}]: dispatch
Dec 05 10:25:46 np0005546420.localdomain ceph-mon[298353]: pgmap v924: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 55 KiB/s wr, 3 op/s
Dec 05 10:25:46 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e293 e293: 6 total, 6 up, 6 in
Dec 05 10:25:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:25:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:25:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:25:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:25:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:25:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18311 "" "Go-http-client/1.1"
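The two podman[240363] request lines above are the prometheus-podman-exporter scraping the podman API service over /run/podman/podman.sock (the socket named in the podman_exporter CONTAINER_HOST setting further down). A sketch of the same list call using only the standard library; the socket path and API version are taken from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough for the podman REST API."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])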
Dec 05 10:25:47 np0005546420.localdomain ceph-mon[298353]: osdmap e293: 6 total, 6 up, 6 in
Dec 05 10:25:47 np0005546420.localdomain ceph-mon[298353]: pgmap v926: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 78 KiB/s wr, 4 op/s
Dec 05 10:25:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:25:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:25:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:25:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:25:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:25:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:25:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:25:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:25:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:25:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
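The openstack_network_exporter errors above repeat every polling interval: appctl-style calls need a daemon control socket (named <daemon>.<pid>.ctl under the daemon's run directory), and neither ovsdb-server nor ovn-northd is reachable from this container; the dpif-netdev errors likewise suggest the node uses the kernel datapath rather than a userspace one. On a compute node ovn-northd is not expected to run at all, so those lines appear to be steady-state noise. A quick check for the sockets in the two run directories the exporter mounts (paths from its config_data further down):

    import glob

    for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "missing (matches the exporter errors)")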
Dec 05 10:25:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:25:49 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:25:49 np0005546420.localdomain podman[334012]: 2025-12-05 10:25:49.509345515 +0000 UTC m=+0.084695627 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:25:49 np0005546420.localdomain podman[334013]: 2025-12-05 10:25:49.57977018 +0000 UTC m=+0.150516349 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:25:49 np0005546420.localdomain podman[334012]: 2025-12-05 10:25:49.596824928 +0000 UTC m=+0.172175020 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:25:49 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:25:49 np0005546420.localdomain podman[334013]: 2025-12-05 10:25:49.612574294 +0000 UTC m=+0.183320383 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 05 10:25:49 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
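Each "Started /usr/bin/podman healthcheck run <id>" line is a transient systemd unit, named after the container ID, firing the container's configured healthcheck. Podman emits the health_status and exec_died events seen above, the unit exits, and systemd logs "Deactivated successfully". The same check can be run by hand; exit status 0 means healthy (container ID taken from the log):

    import subprocess

    CID = "e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0"
    r = subprocess.run(["podman", "healthcheck", "run", CID])
    print("healthy" if r.returncode == 0 else "unhealthy")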
Dec 05 10:25:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:25:49.866 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:25:49 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:25:49.867 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:25:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:49.871 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:49 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:49.976 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:50 np0005546420.localdomain ceph-mon[298353]: pgmap v927: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 78 KiB/s wr, 4 op/s
Dec 05 10:25:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:50.968 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:52 np0005546420.localdomain ceph-mon[298353]: pgmap v928: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 47 KiB/s wr, 3 op/s
Dec 05 10:25:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:25:53 np0005546420.localdomain podman[334055]: 2025-12-05 10:25:53.505166851 +0000 UTC m=+0.081506179 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 10:25:53 np0005546420.localdomain podman[334055]: 2025-12-05 10:25:53.516406298 +0000 UTC m=+0.092745606 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:25:53 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:25:54 np0005546420.localdomain ceph-mon[298353]: pgmap v929: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 47 KiB/s wr, 3 op/s
Dec 05 10:25:54 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:25:54.871 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
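This DbSetCommand is the follow-up to the SbGlobalUpdateEvent at 10:25:49: after the advertised 5-second delay, the agent acknowledges nb_cfg=26 by writing it into its own Chassis_Private row. Roughly the equivalent ovsdbapp call, where sb_idl stands for an already-connected southbound API object (an assumption; setting one up requires a live OVN southbound database):

    # sb_idl: an already-connected ovsdbapp southbound API object (assumption)
    CHASSIS = "c2157608-8f70-44ef-883c-3db22f367c76"
    with sb_idl.transaction(check_error=True) as txn:
        txn.add(sb_idl.db_set(
            "Chassis_Private", CHASSIS,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": "26"})))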
Dec 05 10:25:54 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:54.978 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:25:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:55.999 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:25:56 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e294 e294: 6 total, 6 up, 6 in
Dec 05 10:25:56 np0005546420.localdomain ceph-mon[298353]: pgmap v930: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 47 KiB/s wr, 3 op/s
Dec 05 10:25:56 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:25:56 np0005546420.localdomain ceph-mon[298353]: osdmap e294: 6 total, 6 up, 6 in
Dec 05 10:25:57 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:25:57 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "format": "json"}]: dispatch
Dec 05 10:25:58 np0005546420.localdomain ceph-mon[298353]: pgmap v932: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 35 KiB/s wr, 2 op/s
Dec 05 10:25:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:25:59.980 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:00 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "snap_name": "eb5d7ece-ad76-479c-8c09-47bce11dd7a1", "format": "json"}]: dispatch
Dec 05 10:26:00 np0005546420.localdomain ceph-mon[298353]: pgmap v933: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 35 KiB/s wr, 2 op/s
Dec 05 10:26:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:01.033 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:02 np0005546420.localdomain ceph-mon[298353]: pgmap v934: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s wr, 1 op/s
Dec 05 10:26:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:26:03 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:26:03 np0005546420.localdomain systemd[1]: tmp-crun.Zf5nWI.mount: Deactivated successfully.
Dec 05 10:26:03 np0005546420.localdomain podman[334076]: 2025-12-05 10:26:03.511169379 +0000 UTC m=+0.072908803 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:26:03 np0005546420.localdomain podman[334076]: 2025-12-05 10:26:03.548470692 +0000 UTC m=+0.110210106 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:26:03 np0005546420.localdomain podman[334075]: 2025-12-05 10:26:03.559298357 +0000 UTC m=+0.126940383 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9)
Dec 05 10:26:03 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:26:03 np0005546420.localdomain podman[334075]: 2025-12-05 10:26:03.574500456 +0000 UTC m=+0.142142492 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 05 10:26:03 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:26:03 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "snap_name": "eb5d7ece-ad76-479c-8c09-47bce11dd7a1_70a4bdba-2abe-427a-b144-b7d84bdf25a4", "force": true, "format": "json"}]: dispatch
Dec 05 10:26:03 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "snap_name": "eb5d7ece-ad76-479c-8c09-47bce11dd7a1", "force": true, "format": "json"}]: dispatch
Dec 05 10:26:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4003524543' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:26:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/4003524543' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:26:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:04.143 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:26:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:04.143 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:26:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:04.144 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
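The acquire/held/released trio above is oslo.concurrency's lock logging at DEBUG around ProcessMonitor._check_child_processes; "held 0.000s" means the check was effectively instantaneous. The same pattern in miniature (oslo.concurrency assumed installed):

    import logging

    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # lock acquisition and release are logged around this call

    check_child_processes()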
Dec 05 10:26:04 np0005546420.localdomain ceph-mon[298353]: pgmap v935: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s wr, 1 op/s
Dec 05 10:26:04 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:04.981 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:06.071 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:06 np0005546420.localdomain ceph-mon[298353]: pgmap v936: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s wr, 1 op/s
Dec 05 10:26:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e295 e295: 6 total, 6 up, 6 in
Dec 05 10:26:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "format": "json"}]: dispatch
Dec 05 10:26:07 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "force": true, "format": "json"}]: dispatch
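The entries from 10:25:57 through 10:26:07 trace one full CephFS subvolume lifecycle for 4e488967-...: create (2 GiB, namespace-isolated), getpath, snapshot create, two snapshot rm calls, clone status, and finally subvolume rm. The mon receives these as mgr "fs subvolume" commands from client.openstack, which looks like the Manila CephFS driver churning through a share create/delete cycle. The same sequence via the ceph CLI (names taken from the log; flag spellings per current ceph releases, so an assumption for older ones):

    import subprocess

    def ceph_fs(*args):
        # thin wrapper over the ceph CLI, authenticating as the same
        # client.openstack identity seen in the mon log
        subprocess.run(["ceph", "--id", "openstack", "fs", *args], check=True)

    SUB = "4e488967-e5b1-41c0-8ac2-92a447a23b8e"
    SNAP = "eb5d7ece-ad76-479c-8c09-47bce11dd7a1"
    ceph_fs("subvolume", "create", "cephfs", SUB, "--size", "2147483648",
            "--namespace-isolated", "--mode", "0755")
    ceph_fs("subvolume", "getpath", "cephfs", SUB)
    ceph_fs("subvolume", "snapshot", "create", "cephfs", SUB, SNAP)
    ceph_fs("subvolume", "snapshot", "rm", "cephfs", SUB, SNAP, "--force")
    ceph_fs("subvolume", "rm", "cephfs", SUB, "--force")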
Dec 05 10:26:07 np0005546420.localdomain ceph-mon[298353]: osdmap e295: 6 total, 6 up, 6 in
Dec 05 10:26:08 np0005546420.localdomain ceph-mon[298353]: pgmap v938: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 64 KiB/s wr, 3 op/s
Dec 05 10:26:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:10.004 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:26:10 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:26:10 np0005546420.localdomain podman[334120]: 2025-12-05 10:26:10.522160807 +0000 UTC m=+0.091623111 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 05 10:26:10 np0005546420.localdomain podman[334121]: 2025-12-05 10:26:10.56693839 +0000 UTC m=+0.132187333 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 05 10:26:10 np0005546420.localdomain podman[334120]: 2025-12-05 10:26:10.59150442 +0000 UTC m=+0.160966724 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:26:10 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:26:10 np0005546420.localdomain podman[334121]: 2025-12-05 10:26:10.629793232 +0000 UTC m=+0.195042155 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:26:10 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:26:10 np0005546420.localdomain ceph-mon[298353]: pgmap v939: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 64 KiB/s wr, 3 op/s
Dec 05 10:26:10 np0005546420.localdomain ceph-mon[298353]: from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 05 10:26:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:11.104 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 05 10:26:11 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "format": "json"}]: dispatch
Dec 05 10:26:12 np0005546420.localdomain ceph-mon[298353]: pgmap v940: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:26:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:26:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
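The burst of "Skip pollster ..." lines is one ceilometer compute polling cycle finding nothing to measure: with no libvirt instances on this node, every per-instance meter is skipped. A compressed sketch of that skip branch (assumed shape, not the verbatim ceilometer implementation):

    import logging

    LOG = logging.getLogger("ceilometer.polling.manager")
    logging.basicConfig(level=logging.DEBUG)

    def poll_and_notify(pollsters, resources):
        for name in pollsters:
            if not resources:
                LOG.debug("Skip pollster %s, no resources found this cycle",
                          name)
                continue
            # ... sample each resource and publish the measurements ...

    poll_and_notify(["cpu", "memory.usage", "disk.device.read.bytes"], [])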
Dec 05 10:26:14 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "snap_name": "b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5", "format": "json"}]: dispatch
Dec 05 10:26:14 np0005546420.localdomain ceph-mon[298353]: pgmap v941: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Dec 05 10:26:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:15.039 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:15 np0005546420.localdomain ceph-mon[298353]: pgmap v942: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Dec 05 10:26:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:16.142 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e296 e296: 6 total, 6 up, 6 in
Dec 05 10:26:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:26:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:26:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:26:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:26:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:26:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18312 "" "Go-http-client/1.1"
Dec 05 10:26:17 np0005546420.localdomain ceph-mon[298353]: osdmap e296: 6 total, 6 up, 6 in
Dec 05 10:26:17 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "snap_name": "b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5_ce45a9ec-91dc-4f53-8a37-a14a18566ce2", "force": true, "format": "json"}]: dispatch
Dec 05 10:26:18 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "snap_name": "b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5", "force": true, "format": "json"}]: dispatch
Dec 05 10:26:18 np0005546420.localdomain ceph-mon[298353]: pgmap v944: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 48 KiB/s wr, 3 op/s
Dec 05 10:26:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:26:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:26:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:26:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:26:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:26:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:26:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:26:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:26:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:26:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:26:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:20.067 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:26:20 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:26:20 np0005546420.localdomain podman[334162]: 2025-12-05 10:26:20.504931789 +0000 UTC m=+0.081695395 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:26:20 np0005546420.localdomain podman[334162]: 2025-12-05 10:26:20.517755435 +0000 UTC m=+0.094519081 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:26:20 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:26:20 np0005546420.localdomain systemd[1]: tmp-crun.i3lWp2.mount: Deactivated successfully.
Dec 05 10:26:20 np0005546420.localdomain podman[334163]: 2025-12-05 10:26:20.576266492 +0000 UTC m=+0.150296984 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:26:20 np0005546420.localdomain podman[334163]: 2025-12-05 10:26:20.610538871 +0000 UTC m=+0.184569353 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 10:26:20 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:26:20 np0005546420.localdomain ceph-mon[298353]: pgmap v945: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 48 KiB/s wr, 3 op/s
Dec 05 10:26:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:21.197 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "format": "json"}]: dispatch
Dec 05 10:26:21 np0005546420.localdomain ceph-mon[298353]: from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "force": true, "format": "json"}]: dispatch
Dec 05 10:26:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e297 e297: 6 total, 6 up, 6 in
Dec 05 10:26:22 np0005546420.localdomain ceph-mon[298353]: pgmap v946: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 79 KiB/s wr, 3 op/s
Dec 05 10:26:22 np0005546420.localdomain ceph-mon[298353]: osdmap e297: 6 total, 6 up, 6 in
Dec 05 10:26:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:26:24 np0005546420.localdomain podman[334205]: 2025-12-05 10:26:24.510575268 +0000 UTC m=+0.087426541 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:26:24 np0005546420.localdomain podman[334205]: 2025-12-05 10:26:24.524341133 +0000 UTC m=+0.101192446 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3)
Dec 05 10:26:24 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:26:24 np0005546420.localdomain ceph-mon[298353]: pgmap v948: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 99 KiB/s wr, 4 op/s
Dec 05 10:26:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:25.113 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:26.224 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:26 np0005546420.localdomain ceph-mon[298353]: pgmap v949: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 224 B/s rd, 50 KiB/s wr, 2 op/s
Dec 05 10:26:27 np0005546420.localdomain ceph-mon[298353]: pgmap v950: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 49 KiB/s wr, 3 op/s
Dec 05 10:26:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:28.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:28.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:26:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:30.145 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:30 np0005546420.localdomain ceph-mon[298353]: pgmap v951: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 49 KiB/s wr, 3 op/s
Dec 05 10:26:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:31.247 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e298 e298: 6 total, 6 up, 6 in
Dec 05 10:26:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:31.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:32 np0005546420.localdomain ceph-mon[298353]: osdmap e298: 6 total, 6 up, 6 in
Dec 05 10:26:32 np0005546420.localdomain ceph-mon[298353]: pgmap v953: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 206 B/s rd, 27 KiB/s wr, 1 op/s
Dec 05 10:26:32 np0005546420.localdomain sshd[334223]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:26:33 np0005546420.localdomain sshd[334223]: Received disconnect from 163.44.99.31 port 53020:11: Bye Bye [preauth]
Dec 05 10:26:33 np0005546420.localdomain sshd[334223]: Disconnected from authenticating user root 163.44.99.31 port 53020 [preauth]
Dec 05 10:26:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:33.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:33.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:26:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:33.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:26:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:26:33 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:26:33 np0005546420.localdomain podman[334225]: 2025-12-05 10:26:33.976051431 +0000 UTC m=+0.086385079 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350)
Dec 05 10:26:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:33.982 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:26:33 np0005546420.localdomain podman[334225]: 2025-12-05 10:26:33.993377287 +0000 UTC m=+0.103710945 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 10:26:34 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:26:34 np0005546420.localdomain podman[334226]: 2025-12-05 10:26:34.08931408 +0000 UTC m=+0.197145591 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:26:34 np0005546420.localdomain podman[334226]: 2025-12-05 10:26:34.12657941 +0000 UTC m=+0.234410921 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:26:34 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:26:34 np0005546420.localdomain sudo[334268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:26:34 np0005546420.localdomain sudo[334268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:26:34 np0005546420.localdomain sudo[334268]: pam_unix(sudo:session): session closed for user root
Dec 05 10:26:34 np0005546420.localdomain sudo[334286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:26:34 np0005546420.localdomain sudo[334286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:26:34 np0005546420.localdomain ceph-mon[298353]: pgmap v954: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 26 KiB/s wr, 1 op/s
Dec 05 10:26:34 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1052562904' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:26:34 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:34.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:34 np0005546420.localdomain sudo[334286]: pam_unix(sudo:session): session closed for user root
Dec 05 10:26:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:35.176 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:35 np0005546420.localdomain sudo[334336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:26:35 np0005546420.localdomain sudo[334336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:26:35 np0005546420.localdomain sudo[334336]: pam_unix(sudo:session): session closed for user root
Dec 05 10:26:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:35.710 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:35 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:35.710 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:26:35 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:35.713 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 10:26:35 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:35.714 159503 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c2157608-8f70-44ef-883c-3db22f367c76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 10:26:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:26:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:26:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:26:35 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:26:35 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1394077181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:26:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:35.868 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:36.300 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.616680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396616768, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2319, "num_deletes": 255, "total_data_size": 3381155, "memory_usage": 3526672, "flush_reason": "Manual Compaction"}
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396632096, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2193201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42452, "largest_seqno": 44766, "table_properties": {"data_size": 2184747, "index_size": 5087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18685, "raw_average_key_size": 20, "raw_value_size": 2167019, "raw_average_value_size": 2352, "num_data_blocks": 221, "num_entries": 921, "num_filter_entries": 921, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930241, "oldest_key_time": 1764930241, "file_creation_time": 1764930396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 15460 microseconds, and 6273 cpu microseconds.
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.632147) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2193201 bytes OK
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.632171) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.634120) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.634143) EVENT_LOG_v1 {"time_micros": 1764930396634135, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.634167) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3370583, prev total WAL file size 3370583, number of live WAL files 2.
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.635436) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353330' seq:72057594037927935, type:22 .. '6B760031373832' seq:0, type:0; will stop at (end)
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2141KB)], [75(17MB)]
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396635506, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 20751716, "oldest_snapshot_seqno": -1}
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 14907 keys, 19651194 bytes, temperature: kUnknown
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396750697, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 19651194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19564198, "index_size": 48623, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 37317, "raw_key_size": 398540, "raw_average_key_size": 26, "raw_value_size": 19309385, "raw_average_value_size": 1295, "num_data_blocks": 1808, "num_entries": 14907, "num_filter_entries": 14907, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764930396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.751095) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 19651194 bytes
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.753161) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.0 rd, 170.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 17.7 +0.0 blob) out(18.7 +0.0 blob), read-write-amplify(18.4) write-amplify(9.0) OK, records in: 15442, records dropped: 535 output_compression: NoCompression
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.753189) EVENT_LOG_v1 {"time_micros": 1764930396753177, "job": 46, "event": "compaction_finished", "compaction_time_micros": 115291, "compaction_time_cpu_micros": 62602, "output_level": 6, "num_output_files": 1, "total_output_size": 19651194, "num_input_records": 15442, "num_output_records": 14907, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396753688, "job": 46, "event": "table_file_deletion", "file_number": 77}
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396757144, "job": 46, "event": "table_file_deletion", "file_number": 75}
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.635276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.757322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.757329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.757332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.757335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:36.757337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: pgmap v955: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 26 KiB/s wr, 1 op/s
Dec 05 10:26:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:26:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:36.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:37.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.012 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.012 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.013 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.013 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.013 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:26:38 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:26:38 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2553680754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.481 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.694 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.696 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11407MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.696 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.697 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.763 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.764 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:26:38 np0005546420.localdomain ceph-mon[298353]: pgmap v956: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s wr, 0 op/s
Dec 05 10:26:38 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2553680754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.934 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing inventories for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.952 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating ProviderTree inventory for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.953 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Updating inventory in ProviderTree for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 10:26:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:38.974 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing aggregate associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 10:26:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:39.000 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Refreshing trait associations for resource provider 2850b2c4-8d07-40ab-9d82-672172ca70fc, traits: HW_CPU_X86_SHA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE41,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 10:26:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:39.022 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:26:39 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:26:39 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2835041434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:26:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:39.483 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:26:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:39.490 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:26:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:39.507 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:26:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:39.510 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:26:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:39.510 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:26:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3761217419' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:26:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2835041434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:26:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:40.224 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:40 np0005546420.localdomain sshd[334398]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:26:40 np0005546420.localdomain ceph-mon[298353]: pgmap v957: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s wr, 0 op/s
Dec 05 10:26:40 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2231012404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:26:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:41.338 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:26:41 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:26:41 np0005546420.localdomain podman[334400]: 2025-12-05 10:26:41.513622175 +0000 UTC m=+0.085001316 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 10:26:41 np0005546420.localdomain podman[334400]: 2025-12-05 10:26:41.530319581 +0000 UTC m=+0.101698792 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:26:41 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:26:41 np0005546420.localdomain podman[334401]: 2025-12-05 10:26:41.608462094 +0000 UTC m=+0.176842823 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 10:26:41 np0005546420.localdomain podman[334401]: 2025-12-05 10:26:41.673870444 +0000 UTC m=+0.242251203 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Dec 05 10:26:41 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
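Each "Started /usr/bin/podman healthcheck run <id>" line is systemd running a transient unit that fires the container's configured healthcheck; the following podman event reports health_status, and the unit then deactivates. The same check can be driven by hand. A small sketch, assuming podman is on PATH and the container ID exists:

    # Sketch: run a container healthcheck the way the transient units above do.
    # `podman healthcheck run` exits 0 for healthy, non-zero otherwise.
    import subprocess

    def is_healthy(container_id: str) -> bool:
        return subprocess.run(
            ["podman", "healthcheck", "run", container_id]
        ).returncode == 0

    # e.g. is_healthy("94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110")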
Dec 05 10:26:42 np0005546420.localdomain sshd[334398]: Received disconnect from 197.248.8.33 port 39338:11: Bye Bye [preauth]
Dec 05 10:26:42 np0005546420.localdomain sshd[334398]: Disconnected from authenticating user root 197.248.8.33 port 39338 [preauth]
Dec 05 10:26:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:42.511 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
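DEBUG lines like "Running periodic task ComputeManager._poll_volume_usage" come from oslo.service: methods decorated with @periodic_task.periodic_task on a PeriodicTasks subclass are collected and invoked by run_periodic_tasks(), which logs each task name before calling it. A minimal sketch of the pattern; the class and task below are illustrative, not Nova's:

    # Sketch of the oslo.service periodic-task machinery behind the lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # run at most once per minute
        def _poll_something(self, context):
            print("polling...")

    manager = DemoManager(cfg.CONF)
    # A real service calls this from a timer loop; eligible tasks are logged
    # as "Running periodic task <name>" and then executed.
    manager.run_periodic_tasks(context=None)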
Dec 05 10:26:42 np0005546420.localdomain ceph-mon[298353]: pgmap v958: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:26:42 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:26:42.757 262769 INFO neutron.agent.linux.ip_lib [None req-0eb00a69-3143-4cf7-b896-a6d5763a044f - - - - - -] Device tape260cd65-43 cannot be used as it has no MAC address
Dec 05 10:26:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:42.777 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:42 np0005546420.localdomain kernel: device tape260cd65-43 entered promiscuous mode
Dec 05 10:26:42 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:26:42Z|00398|binding|INFO|Claiming lport e260cd65-4390-4dfe-a408-2a92f42fb518 for this chassis.
Dec 05 10:26:42 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:26:42Z|00399|binding|INFO|e260cd65-4390-4dfe-a408-2a92f42fb518: Claiming unknown
Dec 05 10:26:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:42.785 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:42 np0005546420.localdomain NetworkManager[5963]: <info>  [1764930402.7887] manager: (tape260cd65-43): new Generic device (/org/freedesktop/NetworkManager/Devices/66)
Dec 05 10:26:42 np0005546420.localdomain systemd-udevd[334455]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:26:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:42.798 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-4cea1b78-e898-4269-9afc-9f0df50ed868', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4cea1b78-e898-4269-9afc-9f0df50ed868', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '315f092d02d9417ea7002196e3bc5e51', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3437c44b-7a03-42a7-8f00-80349127ab0b, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=e260cd65-4390-4dfe-a408-2a92f42fb518) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:26:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:42.800 159503 INFO neutron.agent.ovn.metadata.agent [-] Port e260cd65-4390-4dfe-a408-2a92f42fb518 in datapath 4cea1b78-e898-4269-9afc-9f0df50ed868 bound to our chassis
Dec 05 10:26:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:42.802 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 8e8459f5-12d9-47a8-9357-08c25b96b748 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:26:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:42.802 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4cea1b78-e898-4269-9afc-9f0df50ed868, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:26:42 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:26:42.803 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[a292d755-40d2-49a3-a73d-52c6c1e2182b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:26:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:42.804 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:42 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:26:42Z|00400|binding|INFO|Setting lport e260cd65-4390-4dfe-a408-2a92f42fb518 ovn-installed in OVS
Dec 05 10:26:42 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:26:42Z|00401|binding|INFO|Setting lport e260cd65-4390-4dfe-a408-2a92f42fb518 up in Southbound
Dec 05 10:26:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:42.807 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:42.820 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:42 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape260cd65-43: No such device
Dec 05 10:26:42 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape260cd65-43: No such device
Dec 05 10:26:42 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape260cd65-43: No such device
Dec 05 10:26:42 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape260cd65-43: No such device
Dec 05 10:26:42 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape260cd65-43: No such device
Dec 05 10:26:42 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape260cd65-43: No such device
Dec 05 10:26:42 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape260cd65-43: No such device
Dec 05 10:26:42 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tape260cd65-43: No such device
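The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp matching a Southbound Port_Binding update against a registered row event; that is how the metadata agent learns the port is now bound to this chassis. A sketch of the same pattern, assuming a connected southbound IDL (the registration call at the end varies by ovsdbapp/neutron version):

    # Sketch: react to Port_Binding updates the way the agents above do.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            # Watch 'update' events on the Southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old):
            # Fire only when the chassis column changed and now names us.
            return (hasattr(old, "chassis") and row.chassis
                    and row.chassis[0].name == self.chassis_name)

        def run(self, event, row, old):
            print(f"port {row.logical_port} bound to {self.chassis_name}")

    # Registered against a connected IDL, e.g.:
    #   sb_idl.idl.notify_handler.watch_event(
    #       PortBoundEvent("np0005546420.localdomain"))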
Dec 05 10:26:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:42.859 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:42.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:26:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:42.873 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:43 np0005546420.localdomain podman[334526]: 2025-12-05 10:26:43.856286786 +0000 UTC m=+0.084417489 container create 8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cea1b78-e898-4269-9afc-9f0df50ed868, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:26:43 np0005546420.localdomain systemd[1]: Started libpod-conmon-8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1.scope.
Dec 05 10:26:43 np0005546420.localdomain podman[334526]: 2025-12-05 10:26:43.811938206 +0000 UTC m=+0.040068929 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:26:43 np0005546420.localdomain systemd[1]: tmp-crun.8iGJQa.mount: Deactivated successfully.
Dec 05 10:26:43 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:26:43 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d615bd63872695d582622e77c7eae17012f881bd54d0f5367c7944a38a516978/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:26:44 np0005546420.localdomain podman[334526]: 2025-12-05 10:26:44.000428199 +0000 UTC m=+0.228558892 container init 8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cea1b78-e898-4269-9afc-9f0df50ed868, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:26:44 np0005546420.localdomain podman[334526]: 2025-12-05 10:26:44.010158749 +0000 UTC m=+0.238289422 container start 8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cea1b78-e898-4269-9afc-9f0df50ed868, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:26:44 np0005546420.localdomain dnsmasq[334544]: started, version 2.85 cachesize 150
Dec 05 10:26:44 np0005546420.localdomain dnsmasq[334544]: DNS service limited to local subnets
Dec 05 10:26:44 np0005546420.localdomain dnsmasq[334544]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:26:44 np0005546420.localdomain dnsmasq[334544]: warning: no upstream servers configured
Dec 05 10:26:44 np0005546420.localdomain dnsmasq-dhcp[334544]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:26:44 np0005546420.localdomain dnsmasq[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/addn_hosts - 0 addresses
Dec 05 10:26:44 np0005546420.localdomain dnsmasq-dhcp[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/host
Dec 05 10:26:44 np0005546420.localdomain dnsmasq-dhcp[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/opts
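dnsmasq is re-reading the per-network files the Neutron DHCP agent renders (addn_hosts, host, opts). The host file feeds --dhcp-hostsfile, whose static-lease records are comma-separated mac,hostname,ip lines. A sketch of one such record; the hostname and IP below are illustrative, only the MAC comes from the port logged nearby:

    # Sketch: one dnsmasq --dhcp-hostsfile record of the kind read above.
    def hostsfile_entry(mac: str, hostname: str, ip: str) -> str:
        return f"{mac},{hostname},{ip}"  # dnsmasq static-lease format

    print(hostsfile_entry("fa:16:3e:69:c9:22",
                          "host-10-100-0-2.openstacklocal",  # illustrative
                          "10.100.0.2"))                     # illustrative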
Dec 05 10:26:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:26:44.233 262769 INFO neutron.agent.dhcp.agent [None req-1ff8caa6-a281-4db3-8c4d-b3ccb37a2264 - - - - - -] DHCP configuration for ports {'5eac0cf3-fdc3-47eb-9670-43660753d486'} is completed
Dec 05 10:26:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:26:44.247 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:26:44Z, description=, device_id=6d693c4d-b8d4-46f0-bfa7-b410a013e27b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e1bd60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99db86a0>], id=3bca0736-5a1c-4eae-be23-facc3875d156, ip_allocation=immediate, mac_address=fa:16:3e:69:c9:22, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:26:39Z, description=, dns_domain=, id=4cea1b78-e898-4269-9afc-9f0df50ed868, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-229578445-network, port_security_enabled=True, project_id=315f092d02d9417ea7002196e3bc5e51, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64152, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3924, status=ACTIVE, subnets=['1252c6bc-7a96-4430-931c-9e1e25024689'], tags=[], tenant_id=315f092d02d9417ea7002196e3bc5e51, updated_at=2025-12-05T10:26:40Z, vlan_transparent=None, network_id=4cea1b78-e898-4269-9afc-9f0df50ed868, port_security_enabled=False, project_id=315f092d02d9417ea7002196e3bc5e51, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3932, status=DOWN, tags=[], tenant_id=315f092d02d9417ea7002196e3bc5e51, updated_at=2025-12-05T10:26:44Z on network 4cea1b78-e898-4269-9afc-9f0df50ed868
Dec 05 10:26:44 np0005546420.localdomain dnsmasq[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/addn_hosts - 1 addresses
Dec 05 10:26:44 np0005546420.localdomain podman[334560]: 2025-12-05 10:26:44.489732522 +0000 UTC m=+0.068378013 container kill 8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cea1b78-e898-4269-9afc-9f0df50ed868, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:26:44 np0005546420.localdomain dnsmasq-dhcp[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/host
Dec 05 10:26:44 np0005546420.localdomain dnsmasq-dhcp[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/opts
Dec 05 10:26:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:26:44.782 262769 INFO neutron.agent.dhcp.agent [None req-bdb9f0a9-a147-45c9-a0f7-e945b1cc97c2 - - - - - -] DHCP configuration for ports {'3bca0736-5a1c-4eae-be23-facc3875d156'} is completed
Dec 05 10:26:44 np0005546420.localdomain ceph-mon[298353]: pgmap v959: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:26:44 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:26:44.957 262769 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:26:44Z, description=, device_id=6d693c4d-b8d4-46f0-bfa7-b410a013e27b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e8f490>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f6d99e07910>], id=3bca0736-5a1c-4eae-be23-facc3875d156, ip_allocation=immediate, mac_address=fa:16:3e:69:c9:22, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:26:39Z, description=, dns_domain=, id=4cea1b78-e898-4269-9afc-9f0df50ed868, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-229578445-network, port_security_enabled=True, project_id=315f092d02d9417ea7002196e3bc5e51, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64152, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3924, status=ACTIVE, subnets=['1252c6bc-7a96-4430-931c-9e1e25024689'], tags=[], tenant_id=315f092d02d9417ea7002196e3bc5e51, updated_at=2025-12-05T10:26:40Z, vlan_transparent=None, network_id=4cea1b78-e898-4269-9afc-9f0df50ed868, port_security_enabled=False, project_id=315f092d02d9417ea7002196e3bc5e51, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3932, status=DOWN, tags=[], tenant_id=315f092d02d9417ea7002196e3bc5e51, updated_at=2025-12-05T10:26:44Z on network 4cea1b78-e898-4269-9afc-9f0df50ed868
Dec 05 10:26:45 np0005546420.localdomain podman[334597]: 2025-12-05 10:26:45.159117798 +0000 UTC m=+0.054546575 container kill 8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cea1b78-e898-4269-9afc-9f0df50ed868, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:26:45 np0005546420.localdomain dnsmasq[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/addn_hosts - 1 addresses
Dec 05 10:26:45 np0005546420.localdomain dnsmasq-dhcp[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/host
Dec 05 10:26:45 np0005546420.localdomain dnsmasq-dhcp[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/opts
Dec 05 10:26:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:45.275 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:45 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:26:45.588 262769 INFO neutron.agent.dhcp.agent [None req-59466819-4a3d-404e-9eaf-83c6b42545aa - - - - - -] DHCP configuration for ports {'3bca0736-5a1c-4eae-be23-facc3875d156'} is completed
Dec 05 10:26:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:26:46Z|00402|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0
Dec 05 10:26:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:26:46Z|00403|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0
Dec 05 10:26:46 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:26:46Z|00404|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0
Dec 05 10:26:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:46.358 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:46.405 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:46.428 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:46 np0005546420.localdomain ceph-mon[298353]: pgmap v960: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:26:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:26:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:26:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:26:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154915 "" "Go-http-client/1.1"
Dec 05 10:26:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:26:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18772 "" "Go-http-client/1.1"
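The GET /v4.9.3/libpod/... lines are requests against podman's libpod REST API, served over a local unix socket by the long-running podman process (PID 240363). A sketch of the same container-list call; the socket path is an assumption (the standard system location):

    # Sketch: query the libpod REST API over its unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("State"))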
Dec 05 10:26:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:47.341 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:47.346 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:47.385 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:48 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:48.211 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:48 np0005546420.localdomain ceph-mon[298353]: pgmap v961: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:26:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:26:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:26:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:26:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:26:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:26:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:26:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:26:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:26:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:26:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
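The exporter errors above are appctl-style calls that need control sockets for ovn-northd and the OVS DB server; this node runs neither daemon, so no matching *.ctl files exist and every call fails the same way. A quick check for the sockets those calls depend on, with the run directories as assumptions:

    # Sketch: look for the control sockets the exporter's appctl calls need.
    import glob

    for rundir in ("/var/run/ovn", "/var/run/openvswitch"):  # typical paths
        ctl = glob.glob(f"{rundir}/*.ctl")
        print(rundir, "->", ctl or "no control sockets found")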
Dec 05 10:26:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:50.277 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:50 np0005546420.localdomain ceph-mon[298353]: pgmap v962: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:26:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:51.359 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:26:51 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:26:51 np0005546420.localdomain podman[334622]: 2025-12-05 10:26:51.514074804 +0000 UTC m=+0.089582318 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:26:51 np0005546420.localdomain podman[334623]: 2025-12-05 10:26:51.564006056 +0000 UTC m=+0.136843458 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 05 10:26:51 np0005546420.localdomain podman[334623]: 2025-12-05 10:26:51.573104557 +0000 UTC m=+0.145941969 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:26:51 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.624721) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411624758, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 422, "num_deletes": 251, "total_data_size": 227247, "memory_usage": 234712, "flush_reason": "Manual Compaction"}
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411628420, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 147442, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44771, "largest_seqno": 45188, "table_properties": {"data_size": 145144, "index_size": 409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6139, "raw_average_key_size": 19, "raw_value_size": 140474, "raw_average_value_size": 448, "num_data_blocks": 18, "num_entries": 313, "num_filter_entries": 313, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930396, "oldest_key_time": 1764930396, "file_creation_time": 1764930411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 3742 microseconds, and 1211 cpu microseconds.
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.628463) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 147442 bytes OK
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.628482) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.630270) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.630290) EVENT_LOG_v1 {"time_micros": 1764930411630284, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.630309) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 224558, prev total WAL file size 224558, number of live WAL files 2.
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:26:51 np0005546420.localdomain podman[334622]: 2025-12-05 10:26:51.631061848 +0000 UTC m=+0.206569322 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.631671) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133353534' seq:72057594037927935, type:22 .. '7061786F73003133383036' seq:0, type:0; will stop at (end)
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(143KB)], [78(18MB)]
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411631818, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 19798636, "oldest_snapshot_seqno": -1}
Dec 05 10:26:51 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 14705 keys, 18713140 bytes, temperature: kUnknown
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411742717, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 18713140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18629041, "index_size": 46238, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36805, "raw_key_size": 394811, "raw_average_key_size": 26, "raw_value_size": 18379286, "raw_average_value_size": 1249, "num_data_blocks": 1704, "num_entries": 14705, "num_filter_entries": 14705, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764930411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.743103) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 18713140 bytes
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.745399) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.4 rd, 168.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 18.7 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(261.2) write-amplify(126.9) OK, records in: 15220, records dropped: 515 output_compression: NoCompression
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.745430) EVENT_LOG_v1 {"time_micros": 1764930411745417, "job": 48, "event": "compaction_finished", "compaction_time_micros": 111002, "compaction_time_cpu_micros": 54347, "output_level": 6, "num_output_files": 1, "total_output_size": 18713140, "num_input_records": 15220, "num_output_records": 14705, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411745627, "job": 48, "event": "table_file_deletion", "file_number": 80}
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411749187, "job": 48, "event": "table_file_deletion", "file_number": 78}
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.630785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.749229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.749235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.749238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.749241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:26:51 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:26:51.749244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
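The JOB 48 summary above can be re-derived from the logged byte counts: a 147442-byte L0 flush (table #80) was merged with the 18.7 MB L6 file (input_data_size 19798636 covers both) into an 18713140-byte output in 111002 microseconds. A worked recomputation of the amplification and throughput figures:

    # Recompute JOB 48's summary figures from the EVENT_LOG numbers above.
    l0_in = 147_442        # table #80, the freshly flushed memtable
    total_in = 19_798_636  # "input_data_size": L0 + L6 inputs
    out = 18_713_140       # table #81, the compacted output
    t_us = 111_002         # "compaction_time_micros"

    write_amp = out / l0_in            # bytes written per new byte
    rw_amp = (total_in + out) / l0_in  # bytes read+written per new byte

    print(f"write-amplify {write_amp:.1f}")        # ~126.9, as logged
    print(f"read-write-amplify {rw_amp:.1f}")      # ~261.2, as logged
    print(f"{total_in / t_us:.1f} rd MB/s, "
          f"{out / t_us:.1f} wr MB/s")             # ~178.4 / ~168.6, as logged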
Dec 05 10:26:52 np0005546420.localdomain ceph-mon[298353]: pgmap v963: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:26:54 np0005546420.localdomain ceph-mon[298353]: pgmap v964: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:26:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:55.305 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:26:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:26:55 np0005546420.localdomain podman[334664]: 2025-12-05 10:26:55.520508274 +0000 UTC m=+0.090800955 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:26:55 np0005546420.localdomain podman[334664]: 2025-12-05 10:26:55.532604588 +0000 UTC m=+0.102897289 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:26:55 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:26:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:26:56.395 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:26:56 np0005546420.localdomain ceph-mon[298353]: pgmap v965: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:26:58 np0005546420.localdomain ceph-mon[298353]: pgmap v966: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:26:59 np0005546420.localdomain sshd[334683]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:27:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:00.339 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:00 np0005546420.localdomain ceph-mon[298353]: pgmap v967: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:01 np0005546420.localdomain sshd[334683]: Received disconnect from 178.217.173.50 port 35214:11: Bye Bye [preauth]
Dec 05 10:27:01 np0005546420.localdomain sshd[334683]: Disconnected from authenticating user root 178.217.173.50 port 35214 [preauth]
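These preauth disconnects, like the pair at 10:26:42 above, are failed root logins from external addresses dropped before authentication completed. A quick tally of such attempts per source address, assuming journal access on the host:

    # Sketch: count sshd preauth disconnects per source address.
    import collections
    import re
    import subprocess

    out = subprocess.run(
        ["journalctl", "-t", "sshd", "--no-pager"],
        capture_output=True, text=True,
    ).stdout
    hits = re.findall(r"Disconnected from authenticating user \S+ (\S+)", out)
    print(collections.Counter(hits).most_common(5))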
Dec 05 10:27:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:01.439 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:03 np0005546420.localdomain ceph-mon[298353]: pgmap v968: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:03 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:27:03.736 262769 INFO neutron.agent.linux.ip_lib [None req-6d71d3fd-f755-41f5-a5e3-a45dda7253f8 - - - - - -] Device tapffcac8b8-2a cannot be used as it has no MAC address
Dec 05 10:27:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:03.764 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:03 np0005546420.localdomain kernel: device tapffcac8b8-2a entered promiscuous mode
Dec 05 10:27:03 np0005546420.localdomain NetworkManager[5963]: <info>  [1764930423.7742] manager: (tapffcac8b8-2a): new Generic device (/org/freedesktop/NetworkManager/Devices/67)
Dec 05 10:27:03 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:03Z|00405|binding|INFO|Claiming lport ffcac8b8-2a42-4099-b79d-22ea5baad5c4 for this chassis.
Dec 05 10:27:03 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:03Z|00406|binding|INFO|ffcac8b8-2a42-4099-b79d-22ea5baad5c4: Claiming unknown
Dec 05 10:27:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:03.781 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:03 np0005546420.localdomain systemd-udevd[334695]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 10:27:03 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:03Z|00407|binding|INFO|Setting lport ffcac8b8-2a42-4099-b79d-22ea5baad5c4 ovn-installed in OVS
Dec 05 10:27:03 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:03Z|00408|binding|INFO|Setting lport ffcac8b8-2a42-4099-b79d-22ea5baad5c4 up in Southbound
Dec 05 10:27:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:03.789 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:03 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:03.789 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/16', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-935af8cd-eba1-468b-bf1d-d169a2ac754a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-935af8cd-eba1-468b-bf1d-d169a2ac754a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0be5dd7ec9b24465a8f2ecd5c831c9a3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d971b160-112e-459f-8eaa-eb3b8ed64f36, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=ffcac8b8-2a42-4099-b79d-22ea5baad5c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:27:03 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:03.792 159503 INFO neutron.agent.ovn.metadata.agent [-] Port ffcac8b8-2a42-4099-b79d-22ea5baad5c4 in datapath 935af8cd-eba1-468b-bf1d-d169a2ac754a bound to our chassis
Dec 05 10:27:03 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:03.795 159503 DEBUG neutron.agent.ovn.metadata.agent [-] Port 0408a3de-1ca6-4ff8-ad59-b92b3da748ea IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 05 10:27:03 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:03.796 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 935af8cd-eba1-468b-bf1d-d169a2ac754a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:27:03 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:03.797 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[0046820d-1556-4dd0-8383-1c311fb4c831]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:27:03 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapffcac8b8-2a: No such device
Dec 05 10:27:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:03.809 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:03 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapffcac8b8-2a: No such device
Dec 05 10:27:03 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapffcac8b8-2a: No such device
Dec 05 10:27:03 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapffcac8b8-2a: No such device
Dec 05 10:27:03 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapffcac8b8-2a: No such device
Dec 05 10:27:03 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapffcac8b8-2a: No such device
Dec 05 10:27:03 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapffcac8b8-2a: No such device
Dec 05 10:27:03 np0005546420.localdomain virtnodedevd[229575]: ethtool ioctl error on tapffcac8b8-2a: No such device
Dec 05 10:27:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:03.849 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:03 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:03.908 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2856942644' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:27:04 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/2856942644' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:27:04 np0005546420.localdomain ceph-mon[298353]: pgmap v969: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:04.143 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:27:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:04.144 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:27:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:04.144 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:27:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:27:04 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:27:04 np0005546420.localdomain podman[334744]: 2025-12-05 10:27:04.516401184 +0000 UTC m=+0.087998709 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 10:27:04 np0005546420.localdomain systemd[1]: tmp-crun.bWYZ2Q.mount: Deactivated successfully.
Dec 05 10:27:04 np0005546420.localdomain podman[334743]: 2025-12-05 10:27:04.588065518 +0000 UTC m=+0.162378277 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 10:27:04 np0005546420.localdomain podman[334744]: 2025-12-05 10:27:04.605390823 +0000 UTC m=+0.176988418 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:27:04 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:27:04 np0005546420.localdomain podman[334743]: 2025-12-05 10:27:04.657463051 +0000 UTC m=+0.231775870 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, config_id=edpm, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Dec 05 10:27:04 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:27:04 np0005546420.localdomain podman[334809]: 
Dec 05 10:27:04 np0005546420.localdomain podman[334809]: 2025-12-05 10:27:04.854548089 +0000 UTC m=+0.089797315 container create 00bde229740333ebf72f9da877dcd9e5ef2c7e81dd95a21dffc89466cc9f9300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-935af8cd-eba1-468b-bf1d-d169a2ac754a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 10:27:04 np0005546420.localdomain podman[334809]: 2025-12-05 10:27:04.812356186 +0000 UTC m=+0.047605442 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 05 10:27:04 np0005546420.localdomain systemd[1]: Started libpod-conmon-00bde229740333ebf72f9da877dcd9e5ef2c7e81dd95a21dffc89466cc9f9300.scope.
Dec 05 10:27:04 np0005546420.localdomain systemd[1]: Started libcrun container.
Dec 05 10:27:04 np0005546420.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686c0b53f3591abc2934cadd1ee9b340be0a155cf9f7edf762147940aa675605/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 10:27:04 np0005546420.localdomain podman[334809]: 2025-12-05 10:27:04.949503742 +0000 UTC m=+0.184752968 container init 00bde229740333ebf72f9da877dcd9e5ef2c7e81dd95a21dffc89466cc9f9300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-935af8cd-eba1-468b-bf1d-d169a2ac754a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:27:04 np0005546420.localdomain podman[334809]: 2025-12-05 10:27:04.960274005 +0000 UTC m=+0.195523231 container start 00bde229740333ebf72f9da877dcd9e5ef2c7e81dd95a21dffc89466cc9f9300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-935af8cd-eba1-468b-bf1d-d169a2ac754a, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:27:04 np0005546420.localdomain dnsmasq[334827]: started, version 2.85 cachesize 150
Dec 05 10:27:04 np0005546420.localdomain dnsmasq[334827]: DNS service limited to local subnets
Dec 05 10:27:04 np0005546420.localdomain dnsmasq[334827]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 05 10:27:04 np0005546420.localdomain dnsmasq[334827]: warning: no upstream servers configured
Dec 05 10:27:04 np0005546420.localdomain dnsmasq-dhcp[334827]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 05 10:27:04 np0005546420.localdomain dnsmasq[334827]: read /var/lib/neutron/dhcp/935af8cd-eba1-468b-bf1d-d169a2ac754a/addn_hosts - 0 addresses
Dec 05 10:27:04 np0005546420.localdomain dnsmasq-dhcp[334827]: read /var/lib/neutron/dhcp/935af8cd-eba1-468b-bf1d-d169a2ac754a/host
Dec 05 10:27:04 np0005546420.localdomain dnsmasq-dhcp[334827]: read /var/lib/neutron/dhcp/935af8cd-eba1-468b-bf1d-d169a2ac754a/opts
Dec 05 10:27:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e299 e299: 6 total, 6 up, 6 in
Dec 05 10:27:05 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:27:05.265 262769 INFO neutron.agent.dhcp.agent [None req-c56dfafe-8b01-4f32-834e-34a1ba56825d - - - - - -] DHCP configuration for ports {'3b1c53ea-04fb-4427-a782-d99b9adb1993'} is completed
Dec 05 10:27:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:05.382 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:06 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e300 e300: 6 total, 6 up, 6 in
Dec 05 10:27:06 np0005546420.localdomain ceph-mon[298353]: osdmap e299: 6 total, 6 up, 6 in
Dec 05 10:27:06 np0005546420.localdomain ceph-mon[298353]: pgmap v971: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:06.490 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:07 np0005546420.localdomain ceph-mon[298353]: osdmap e300: 6 total, 6 up, 6 in
Dec 05 10:27:07 np0005546420.localdomain dnsmasq[334827]: exiting on receipt of SIGTERM
Dec 05 10:27:07 np0005546420.localdomain podman[334844]: 2025-12-05 10:27:07.573072401 +0000 UTC m=+0.057331112 container kill 00bde229740333ebf72f9da877dcd9e5ef2c7e81dd95a21dffc89466cc9f9300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-935af8cd-eba1-468b-bf1d-d169a2ac754a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 05 10:27:07 np0005546420.localdomain systemd[1]: libpod-00bde229740333ebf72f9da877dcd9e5ef2c7e81dd95a21dffc89466cc9f9300.scope: Deactivated successfully.
Dec 05 10:27:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:07Z|00409|binding|INFO|Removing iface tapffcac8b8-2a ovn-installed in OVS
Dec 05 10:27:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:07.591 159503 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 0408a3de-1ca6-4ff8-ad59-b92b3da748ea with type ""
Dec 05 10:27:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:07.594 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/16', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-935af8cd-eba1-468b-bf1d-d169a2ac754a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-935af8cd-eba1-468b-bf1d-d169a2ac754a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0be5dd7ec9b24465a8f2ecd5c831c9a3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d971b160-112e-459f-8eaa-eb3b8ed64f36, chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=ffcac8b8-2a42-4099-b79d-22ea5baad5c4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:27:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:07.597 159503 INFO neutron.agent.ovn.metadata.agent [-] Port ffcac8b8-2a42-4099-b79d-22ea5baad5c4 in datapath 935af8cd-eba1-468b-bf1d-d169a2ac754a unbound from our chassis
Dec 05 10:27:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:07.600 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 935af8cd-eba1-468b-bf1d-d169a2ac754a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:27:07 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:07Z|00410|binding|INFO|Removing lport ffcac8b8-2a42-4099-b79d-22ea5baad5c4 ovn-installed in OVS
Dec 05 10:27:07 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:07.601 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[629c45ca-4d36-42a3-b2b2-4d4481766444]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:27:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:07.639 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:07 np0005546420.localdomain podman[334858]: 2025-12-05 10:27:07.671595834 +0000 UTC m=+0.074111960 container died 00bde229740333ebf72f9da877dcd9e5ef2c7e81dd95a21dffc89466cc9f9300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-935af8cd-eba1-468b-bf1d-d169a2ac754a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:27:07 np0005546420.localdomain systemd[1]: tmp-crun.luifII.mount: Deactivated successfully.
Dec 05 10:27:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00bde229740333ebf72f9da877dcd9e5ef2c7e81dd95a21dffc89466cc9f9300-userdata-shm.mount: Deactivated successfully.
Dec 05 10:27:07 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-686c0b53f3591abc2934cadd1ee9b340be0a155cf9f7edf762147940aa675605-merged.mount: Deactivated successfully.
Dec 05 10:27:07 np0005546420.localdomain podman[334858]: 2025-12-05 10:27:07.771873282 +0000 UTC m=+0.174389368 container remove 00bde229740333ebf72f9da877dcd9e5ef2c7e81dd95a21dffc89466cc9f9300 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-935af8cd-eba1-468b-bf1d-d169a2ac754a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:27:07 np0005546420.localdomain systemd[1]: libpod-conmon-00bde229740333ebf72f9da877dcd9e5ef2c7e81dd95a21dffc89466cc9f9300.scope: Deactivated successfully.
Dec 05 10:27:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:07.784 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:07 np0005546420.localdomain kernel: device tapffcac8b8-2a left promiscuous mode
Dec 05 10:27:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:07.806 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:07 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:27:07.823 262769 INFO neutron.agent.dhcp.agent [None req-a7696fd8-d8c7-4f89-b5fc-14347fad0a64 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:27:07 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:27:07.825 262769 INFO neutron.agent.dhcp.agent [None req-a7696fd8-d8c7-4f89-b5fc-14347fad0a64 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:27:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:08.030 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:08 np0005546420.localdomain ceph-mon[298353]: pgmap v973: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 245 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Dec 05 10:27:08 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:08Z|00411|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0
Dec 05 10:27:08 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:08Z|00412|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0
Dec 05 10:27:08 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:08Z|00413|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0
Dec 05 10:27:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:08.497 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:08.499 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:08.516 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:08 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d935af8cd\x2deba1\x2d468b\x2dbf1d\x2dd169a2ac754a.mount: Deactivated successfully.
Dec 05 10:27:08 np0005546420.localdomain podman[334901]: 2025-12-05 10:27:08.649202841 +0000 UTC m=+0.058770447 container kill 8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cea1b78-e898-4269-9afc-9f0df50ed868, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 10:27:08 np0005546420.localdomain dnsmasq[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/addn_hosts - 0 addresses
Dec 05 10:27:08 np0005546420.localdomain dnsmasq-dhcp[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/host
Dec 05 10:27:08 np0005546420.localdomain dnsmasq-dhcp[334544]: read /var/lib/neutron/dhcp/4cea1b78-e898-4269-9afc-9f0df50ed868/opts
Dec 05 10:27:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:08.876 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:08 np0005546420.localdomain kernel: device tape260cd65-43 left promiscuous mode
Dec 05 10:27:08 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:08Z|00414|binding|INFO|Releasing lport e260cd65-4390-4dfe-a408-2a92f42fb518 from this chassis (sb_readonly=0)
Dec 05 10:27:08 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:08Z|00415|binding|INFO|Setting lport e260cd65-4390-4dfe-a408-2a92f42fb518 down in Southbound
Dec 05 10:27:08 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:08.905 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:10.027 159503 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546420.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc249f605-8ec0-51a1-974f-e2ecbd1c1235-4cea1b78-e898-4269-9afc-9f0df50ed868', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4cea1b78-e898-4269-9afc-9f0df50ed868', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '315f092d02d9417ea7002196e3bc5e51', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546420.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3437c44b-7a03-42a7-8f00-80349127ab0b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>], logical_port=e260cd65-4390-4dfe-a408-2a92f42fb518) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fed93dffb20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 10:27:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:10.029 159503 INFO neutron.agent.ovn.metadata.agent [-] Port e260cd65-4390-4dfe-a408-2a92f42fb518 in datapath 4cea1b78-e898-4269-9afc-9f0df50ed868 unbound from our chassis
Dec 05 10:27:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:10.031 159503 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4cea1b78-e898-4269-9afc-9f0df50ed868, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 10:27:10 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:27:10.032 307492 DEBUG oslo.privsep.daemon [-] privsep: reply[e0211f0e-59b3-4393-b616-f1fc6081822f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 10:27:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:10.421 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:10 np0005546420.localdomain ceph-mon[298353]: pgmap v974: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 245 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 2.6 MiB/s wr, 46 op/s
Dec 05 10:27:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:11.528 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:11 np0005546420.localdomain podman[334939]: 2025-12-05 10:27:11.788490399 +0000 UTC m=+0.067302530 container kill 8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cea1b78-e898-4269-9afc-9f0df50ed868, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:27:11 np0005546420.localdomain dnsmasq[334544]: exiting on receipt of SIGTERM
Dec 05 10:27:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:27:11 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:27:11 np0005546420.localdomain systemd[1]: libpod-8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1.scope: Deactivated successfully.
Dec 05 10:27:11 np0005546420.localdomain podman[334953]: 2025-12-05 10:27:11.8739954 +0000 UTC m=+0.059805239 container died 8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cea1b78-e898-4269-9afc-9f0df50ed868, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec 05 10:27:11 np0005546420.localdomain systemd[1]: tmp-crun.stJ2qC.mount: Deactivated successfully.
Dec 05 10:27:11 np0005546420.localdomain podman[334953]: 2025-12-05 10:27:11.927775201 +0000 UTC m=+0.113584990 container cleanup 8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cea1b78-e898-4269-9afc-9f0df50ed868, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:27:11 np0005546420.localdomain systemd[1]: libpod-conmon-8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1.scope: Deactivated successfully.
Dec 05 10:27:12 np0005546420.localdomain podman[334961]: 2025-12-05 10:27:12.012274771 +0000 UTC m=+0.191852237 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:27:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:12.041 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:12 np0005546420.localdomain podman[334955]: 2025-12-05 10:27:12.067413295 +0000 UTC m=+0.253546833 container remove 8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4cea1b78-e898-4269-9afc-9f0df50ed868, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:27:12 np0005546420.localdomain podman[334956]: 2025-12-05 10:27:12.118032158 +0000 UTC m=+0.298147931 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 10:27:12 np0005546420.localdomain podman[334956]: 2025-12-05 10:27:12.133442244 +0000 UTC m=+0.313558067 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 10:27:12 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:27:12 np0005546420.localdomain podman[334961]: 2025-12-05 10:27:12.18448082 +0000 UTC m=+0.364058276 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:27:12 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:27:12 np0005546420.localdomain sshd[335026]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:27:12 np0005546420.localdomain ceph-mon[298353]: pgmap v975: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 55 op/s
Dec 05 10:27:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay-d615bd63872695d582622e77c7eae17012f881bd54d0f5367c7944a38a516978-merged.mount: Deactivated successfully.
Dec 05 10:27:12 np0005546420.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c8b6c9ca1a96967bc5dc90779748e0c18954d890440670915b836ba4cff40e1-userdata-shm.mount: Deactivated successfully.
Dec 05 10:27:13 np0005546420.localdomain systemd[1]: run-netns-qdhcp\x2d4cea1b78\x2de898\x2d4269\x2d9afc\x2d9f0df50ed868.mount: Deactivated successfully.
Dec 05 10:27:13 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:27:13.037 262769 INFO neutron.agent.dhcp.agent [None req-cffd0a69-899c-47cf-a2fb-26ae2c67e8f5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:27:13 np0005546420.localdomain neutron_dhcp_agent[262765]: 2025-12-05 10:27:13.038 262769 INFO neutron.agent.dhcp.agent [None req-cffd0a69-899c-47cf-a2fb-26ae2c67e8f5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 05 10:27:13 np0005546420.localdomain sshd[335026]: Received disconnect from 24.232.50.5 port 52314:11: Bye Bye [preauth]
Dec 05 10:27:13 np0005546420.localdomain sshd[335026]: Disconnected from authenticating user root 24.232.50.5 port 52314 [preauth]
Dec 05 10:27:14 np0005546420.localdomain ceph-mon[298353]: pgmap v976: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.4 MiB/s wr, 51 op/s
Dec 05 10:27:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:15.462 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:16.563 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 e301: 6 total, 6 up, 6 in
Dec 05 10:27:16 np0005546420.localdomain ceph-mon[298353]: pgmap v977: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Dec 05 10:27:16 np0005546420.localdomain ceph-mon[298353]: osdmap e301: 6 total, 6 up, 6 in
Dec 05 10:27:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:27:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:27:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:27:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:27:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:27:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18309 "" "Go-http-client/1.1"
Dec 05 10:27:18 np0005546420.localdomain ceph-mon[298353]: pgmap v979: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 4.6 KiB/s rd, 614 B/s wr, 6 op/s
Dec 05 10:27:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:27:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:27:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:27:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:27:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:27:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:27:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:27:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:27:18 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 10:27:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:27:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:27:18 np0005546420.localdomain openstack_network_exporter[242579]: 
Dec 05 10:27:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:20.505 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:20 np0005546420.localdomain ceph-mon[298353]: pgmap v980: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 4.6 KiB/s rd, 614 B/s wr, 6 op/s
Dec 05 10:27:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:21.592 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:27:22 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:27:22 np0005546420.localdomain systemd[1]: tmp-crun.ySBEVe.mount: Deactivated successfully.
Dec 05 10:27:22 np0005546420.localdomain podman[335029]: 2025-12-05 10:27:22.516758317 +0000 UTC m=+0.090898560 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:27:22 np0005546420.localdomain podman[335029]: 2025-12-05 10:27:22.553391028 +0000 UTC m=+0.127531251 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 10:27:22 np0005546420.localdomain podman[335028]: 2025-12-05 10:27:22.571156466 +0000 UTC m=+0.146024341 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:27:22 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:27:22 np0005546420.localdomain podman[335028]: 2025-12-05 10:27:22.584341404 +0000 UTC m=+0.159209239 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:27:22 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:27:22 np0005546420.localdomain ceph-mon[298353]: pgmap v981: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:24 np0005546420.localdomain ceph-mon[298353]: pgmap v982: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:25.534 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:26 np0005546420.localdomain sshd[335068]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:27:26 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:27:26 np0005546420.localdomain podman[335070]: 2025-12-05 10:27:26.505788401 +0000 UTC m=+0.081000293 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:27:26 np0005546420.localdomain podman[335070]: 2025-12-05 10:27:26.522408535 +0000 UTC m=+0.097620417 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 10:27:26 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:27:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:26.629 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:26 np0005546420.localdomain ceph-mon[298353]: pgmap v983: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:27 np0005546420.localdomain sshd[335068]: Received disconnect from 103.231.14.54 port 38554:11: Bye Bye [preauth]
Dec 05 10:27:27 np0005546420.localdomain sshd[335068]: Disconnected from authenticating user root 103.231.14.54 port 38554 [preauth]
Dec 05 10:27:28 np0005546420.localdomain ceph-mon[298353]: pgmap v984: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:28.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:28.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
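[annotation] The DEBUG pair above shows a periodic task that short-circuits when its interval option is unset: with `reclaim_instance_interval <= 0`, soft-deleted instances are never reclaimed by this task. A minimal sketch of that guard pattern (not nova's actual implementation):

```python
# Sketch of the guard seen above: a periodic task that does nothing
# unless its interval option is positive.
class Conf:
    reclaim_instance_interval = 0  # default: reclaim of soft-deletes disabled

CONF = Conf()

def _reclaim_queued_deletes(context=None):
    if CONF.reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # ... otherwise purge instances soft-deleted more than
    # CONF.reclaim_instance_interval seconds ago ...

_reclaim_queued_deletes()
```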
Dec 05 10:27:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:30.578 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:30 np0005546420.localdomain ceph-mon[298353]: pgmap v985: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:31.669 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:32 np0005546420.localdomain ceph-mon[298353]: pgmap v986: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:32.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:34 np0005546420.localdomain ceph-mon[298353]: pgmap v987: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:27:35 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:27:35 np0005546420.localdomain sudo[335090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:27:35 np0005546420.localdomain sudo[335090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:27:35 np0005546420.localdomain sudo[335090]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:35 np0005546420.localdomain podman[335101]: 2025-12-05 10:27:35.529026305 +0000 UTC m=+0.100723292 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.)
Dec 05 10:27:35 np0005546420.localdomain sudo[335131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 05 10:27:35 np0005546420.localdomain sudo[335131]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:27:35 np0005546420.localdomain podman[335105]: 2025-12-05 10:27:35.572759635 +0000 UTC m=+0.138949532 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:27:35 np0005546420.localdomain podman[335101]: 2025-12-05 10:27:35.597726987 +0000 UTC m=+0.169423984 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 10:27:35 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:27:35 np0005546420.localdomain podman[335105]: 2025-12-05 10:27:35.611314366 +0000 UTC m=+0.177504223 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:27:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:35.620 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:35 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:27:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:35.867 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:35.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:35.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:27:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:35.872 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:27:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:35.889 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:27:36 np0005546420.localdomain sudo[335131]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:36 np0005546420.localdomain sudo[335190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:27:36 np0005546420.localdomain sudo[335190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:27:36 np0005546420.localdomain sudo[335190]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:36 np0005546420.localdomain sudo[335208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:27:36 np0005546420.localdomain sudo[335208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:27:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:36.754 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:36 np0005546420.localdomain ceph-mon[298353]: pgmap v988: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1313180631' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:27:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:27:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:27:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:27:36 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:27:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2419476535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:27:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:36.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:36 np0005546420.localdomain sudo[335208]: pam_unix(sudo:session): session closed for user root
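[annotation] The sudo sessions above are the ceph mgr's cephadm orchestrator at work: it probes for `python3`, then runs a digest-named `cephadm.<digest>` binary under the cluster fsid directory with subcommands like `check-host` and `gather-facts`. A sketch (ours) of the command shape visible in those lines; fsid and digest are copied from the log:

```python
import shlex

FSID = "79feddb1-4bfc-557f-83b9-0d57c9f66c1b"
DIGEST = "a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3"

def cephadm_cmd(subcommand, timeout=895):
    """Build the sudo command line cephadm issues on a managed host."""
    binary = f"/var/lib/ceph/{FSID}/cephadm.{DIGEST}"
    return ["sudo", "/bin/python3", binary, "--timeout", str(timeout), subcommand]

print(shlex.join(cephadm_cmd("gather-facts")))
```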
Dec 05 10:27:37 np0005546420.localdomain sudo[335257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:27:37 np0005546420.localdomain sudo[335257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:27:37 np0005546420.localdomain sudo[335257]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:27:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:27:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:27:37 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:27:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:37.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:38 np0005546420.localdomain ceph-mon[298353]: pgmap v989: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:38.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:38.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:27:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:38.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:27:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:38.889 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:27:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:38.890 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:27:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:38.890 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:27:39 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:27:39 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4157837230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:27:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:39.300 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:27:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:39.502 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:27:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:39.503 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11397MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:27:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:39.504 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:27:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:39.504 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:27:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:39.578 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:27:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:39.578 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:27:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:39.594 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:27:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/4157837230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:27:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:27:40 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/570063418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:27:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:40.083 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:27:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:40.091 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:27:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:40.108 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
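[annotation] The resource tracker shells out to `ceph df --format=json` (the subprocess lines above) and folds the result into the placement inventory, which is why `DISK_GB` total 41 tracks the pgmap's "41 GiB / 42 GiB avail". A hedged sketch of deriving that figure; the exact fields nova reads may differ, and `total_avail_bytes` is an assumption based on ceph's df JSON output:

```python
import json
import subprocess

def disk_gb_from_ceph_df():
    """Derive a DISK_GB-style figure from `ceph df --format=json`."""
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]          # cluster-wide capacity stats
    return stats["total_avail_bytes"] // 2**30

# On this host this would land near the 41 GB seen in the inventory above.
```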
Dec 05 10:27:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:40.111 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:27:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:40.112 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:27:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:40.645 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:40 np0005546420.localdomain ceph-mon[298353]: pgmap v990: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:40 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/570063418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:27:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:27:41 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:41.757 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:41 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2599010352' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:27:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:42.113 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:27:42 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:27:42 np0005546420.localdomain podman[335320]: 2025-12-05 10:27:42.523649316 +0000 UTC m=+0.096904183 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 10:27:42 np0005546420.localdomain podman[335319]: 2025-12-05 10:27:42.56586174 +0000 UTC m=+0.138531760 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 10:27:42 np0005546420.localdomain podman[335319]: 2025-12-05 10:27:42.577753068 +0000 UTC m=+0.150423058 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:27:42 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:27:42 np0005546420.localdomain podman[335320]: 2025-12-05 10:27:42.592038149 +0000 UTC m=+0.165293076 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:27:42 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:27:42 np0005546420.localdomain ceph-mon[298353]: pgmap v991: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:42 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2164169634' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:27:43 np0005546420.localdomain sshd[335363]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:27:43 np0005546420.localdomain sshd[335363]: Accepted publickey for zuul from 38.102.83.114 port 34790 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:27:43 np0005546420.localdomain systemd-logind[762]: New session 72 of user zuul.
Dec 05 10:27:43 np0005546420.localdomain systemd[1]: Started Session 72 of User zuul.
Dec 05 10:27:43 np0005546420.localdomain sshd[335363]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:27:43 np0005546420.localdomain sudo[335383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-amvornpxdbjflteacqmwiiwipqtgbnvu ; /usr/bin/python3
Dec 05 10:27:43 np0005546420.localdomain sudo[335383]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:27:43 np0005546420.localdomain python3[335385]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=fa163e3b-3c83-db09-90c9-00000000000c-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
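[annotation] The journal entry above records Ansible's legacy `command` module with `_uses_shell=True`, i.e. the command runs through a shell. A hedged local equivalent of that task:

```python
import subprocess

# Sketch: what ansible.legacy.command with _uses_shell=True amounts to.
result = subprocess.run(
    "subscription-manager unregister",
    shell=True, capture_output=True, text=True)
print(result.returncode, result.stdout, result.stderr)
```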
Dec 05 10:27:43 np0005546420.localdomain ceph-mon[298353]: pgmap v992: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:43 np0005546420.localdomain sudo[335383]: pam_unix(sudo:session): session closed for user root
Dec 05 10:27:44 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:44.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:45.708 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:46 np0005546420.localdomain ceph-mon[298353]: mgrmap e55: np0005546419.zhsnqq(active, since 26m), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 05 10:27:46 np0005546420.localdomain ceph-mon[298353]: pgmap v993: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:27:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:46.797 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:46.868 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:27:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:27:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:27:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:27:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:27:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:27:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18299 "" "Go-http-client/1.1"
Dec 05 10:27:48 np0005546420.localdomain ceph-mon[298353]: pgmap v994: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 05 10:27:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:27:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:27:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:27:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:27:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:27:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:27:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:27:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:27:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:27:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
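[annotation] The exporter errors above mean it found no control socket files for ovsdb-server or ovn-northd. A hedged check for the sockets it looks for; the run directories come from the container's volume mounts shown earlier in the log, and the `*.ctl` glob pattern is our assumption:

```python
import glob

# Sketch: look for OVS/OVN control sockets in the exporter's mounted
# run directories (/run/openvswitch and /run/ovn per its config_data).
for rundir in ("/run/openvswitch", "/run/ovn"):
    ctls = glob.glob(f"{rundir}/*.ctl")
    print(rundir, "->", ctls or "no control sockets")
```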
Dec 05 10:27:49 np0005546420.localdomain sshd[335363]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:27:49 np0005546420.localdomain systemd[1]: session-72.scope: Deactivated successfully.
Dec 05 10:27:49 np0005546420.localdomain systemd-logind[762]: Session 72 logged out. Waiting for processes to exit.
Dec 05 10:27:49 np0005546420.localdomain systemd-logind[762]: Removed session 72.
Dec 05 10:27:49 np0005546420.localdomain ovn_controller[153683]: 2025-12-05T10:27:49Z|00416|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 05 10:27:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:50 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:50.749 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:50 np0005546420.localdomain ceph-mon[298353]: pgmap v995: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 05 10:27:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:51.838 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:52 np0005546420.localdomain ceph-mon[298353]: pgmap v996: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 05 10:27:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:27:53 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:27:53 np0005546420.localdomain podman[335389]: 2025-12-05 10:27:53.514896614 +0000 UTC m=+0.085508471 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:27:53 np0005546420.localdomain podman[335389]: 2025-12-05 10:27:53.530370592 +0000 UTC m=+0.100982469 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 10:27:53 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:27:53 np0005546420.localdomain podman[335390]: 2025-12-05 10:27:53.625671953 +0000 UTC m=+0.195468584 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 10:27:53 np0005546420.localdomain podman[335390]: 2025-12-05 10:27:53.637353425 +0000 UTC m=+0.207150036 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 10:27:53 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:27:54 np0005546420.localdomain ceph-mon[298353]: pgmap v997: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 05 10:27:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:27:55 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:55.790 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:56 np0005546420.localdomain sshd[335429]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:27:56 np0005546420.localdomain ceph-mon[298353]: pgmap v998: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 05 10:27:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:27:56.883 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:27:57 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:27:57 np0005546420.localdomain podman[335431]: 2025-12-05 10:27:57.499599554 +0000 UTC m=+0.076672317 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:27:57 np0005546420.localdomain podman[335431]: 2025-12-05 10:27:57.515308759 +0000 UTC m=+0.092381533 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:27:57 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:27:58 np0005546420.localdomain ceph-mon[298353]: pgmap v999: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 05 10:27:59 np0005546420.localdomain sshd[335450]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:00 np0005546420.localdomain sshd[335450]: Received disconnect from 163.44.99.31 port 52426:11: Bye Bye [preauth]
Dec 05 10:28:00 np0005546420.localdomain sshd[335450]: Disconnected from authenticating user root 163.44.99.31 port 52426 [preauth]
Dec 05 10:28:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:00.792 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:00 np0005546420.localdomain ceph-mon[298353]: pgmap v1000: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 05 10:28:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:01.912 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:02 np0005546420.localdomain ceph-mon[298353]: pgmap v1001: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 05 10:28:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3132035787' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:28:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/3132035787' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
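The from='client.openstack' ... : dispatch audit lines show an OpenStack service polling pool usage through the mon command interface; the JSON "prefix" field is the same command a human would type at the CLI. A sketch of the equivalent shell-side calls via subprocess (assumes a ceph binary on PATH and a keyring for client.openstack; the command names are as logged):

    import json
    import subprocess

    def ceph_json(*args: str):
        # e.g. ceph_json("df") or ceph_json("osd", "pool", "get-quota", "volumes")
        out = subprocess.run(
            ["ceph", "--name", "client.openstack", *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    # The two dispatches logged above, replayed from the client side:
    # ceph_json("df")
    # ceph_json("osd", "pool", "get-quota", "volumes")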
Dec 05 10:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:28:04.145 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:28:04.145 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:28:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:28:04.146 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
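This triplet is oslo.concurrency's standard trace for a guarded section: acquiring, acquired (with time waited), released (with time held), here around ProcessMonitor._check_child_processes. A stdlib approximation of that acquire/wait/hold accounting, using threading.Lock rather than oslo_concurrency.lockutils, so it matches the log's "waited"/"held" fields in shape only:

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def traced_lock(name: str):
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        lock.acquire()
        t1 = time.monotonic()
        print(f'Lock "{name}" acquired :: waited {t1 - t0:.3f}s')
        try:
            yield
        finally:
            lock.release()
            print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

    with traced_lock("_check_child_processes"):
        pass  # check child processes here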
Dec 05 10:28:04 np0005546420.localdomain ceph-mon[298353]: pgmap v1002: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:05 np0005546420.localdomain sshd[335452]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:05 np0005546420.localdomain sshd[335452]: Accepted publickey for zuul from 38.102.83.114 port 54150 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:28:05 np0005546420.localdomain systemd-logind[762]: New session 73 of user zuul.
Dec 05 10:28:05 np0005546420.localdomain systemd[1]: Started Session 73 of User zuul.
Dec 05 10:28:05 np0005546420.localdomain sshd[335452]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:28:05 np0005546420.localdomain sudo[335456]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /var/log
Dec 05 10:28:05 np0005546420.localdomain sudo[335456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:28:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:28:05 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:28:05 np0005546420.localdomain systemd[1]: tmp-crun.7yVt4s.mount: Deactivated successfully.
Dec 05 10:28:05 np0005546420.localdomain podman[335474]: 2025-12-05 10:28:05.737049679 +0000 UTC m=+0.089800583 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:28:05 np0005546420.localdomain podman[335474]: 2025-12-05 10:28:05.750392121 +0000 UTC m=+0.103143005 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 10:28:05 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:28:05 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:05.840 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:05 np0005546420.localdomain podman[335475]: 2025-12-05 10:28:05.878630159 +0000 UTC m=+0.224451300 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:28:05 np0005546420.localdomain podman[335475]: 2025-12-05 10:28:05.888429682 +0000 UTC m=+0.234250803 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:28:05 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:28:06 np0005546420.localdomain sshd[335429]: Received disconnect from 41.94.88.49 port 36348:11: Bye Bye [preauth]
Dec 05 10:28:06 np0005546420.localdomain sshd[335429]: Disconnected from authenticating user root 41.94.88.49 port 36348 [preauth]
Dec 05 10:28:06 np0005546420.localdomain sudo[335456]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:06 np0005546420.localdomain sshd[335455]: Received disconnect from 38.102.83.114 port 54150:11: disconnected by user
Dec 05 10:28:06 np0005546420.localdomain sshd[335455]: Disconnected from user zuul 38.102.83.114 port 54150
Dec 05 10:28:06 np0005546420.localdomain sshd[335452]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:28:06 np0005546420.localdomain systemd[1]: session-73.scope: Deactivated successfully.
Dec 05 10:28:06 np0005546420.localdomain systemd-logind[762]: Session 73 logged out. Waiting for processes to exit.
Dec 05 10:28:06 np0005546420.localdomain systemd-logind[762]: Removed session 73.
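Sessions 73 through 81 below all have the same shape: the Zuul CI user logs in with a publickey, runs exactly one "sudo rsync --server --sender ... <path>" to export a file or directory (here /var/log, then /etc/containers/networks, /etc/ceph, and so on), and logs out. That is the server-side footprint of a log-collection loop pulling paths one at a time over SSH. A plausible client-side reconstruction (the exact client flags are an assumption; --rsync-path='sudo rsync' is what makes the logged server-side command run under sudo):

    import subprocess

    def fetch(host: str, remote_path: str, dest: str) -> None:
        # Client-side counterpart of the logged
        # "sudo rsync --server --sender ..." invocations; a sketch,
        # not the actual Zuul role.
        subprocess.run(
            ["rsync", "-az", "--copy-links",
             "--rsync-path", "sudo rsync",
             "-e", "ssh",
             f"zuul@{host}:{remote_path}", dest],
            check=True,
        )

    # fetch("np0005546420.localdomain", "/var/log", "./collected/")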
Dec 05 10:28:06 np0005546420.localdomain ceph-mon[298353]: pgmap v1003: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:06.948 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:07 np0005546420.localdomain sshd[335515]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:07 np0005546420.localdomain sshd[335515]: Accepted publickey for zuul from 38.102.83.114 port 54166 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:28:07 np0005546420.localdomain systemd-logind[762]: New session 74 of user zuul.
Dec 05 10:28:07 np0005546420.localdomain systemd[1]: Started Session 74 of User zuul.
Dec 05 10:28:07 np0005546420.localdomain sshd[335515]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:28:07 np0005546420.localdomain sudo[335519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/containers/networks
Dec 05 10:28:07 np0005546420.localdomain sudo[335519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:28:07 np0005546420.localdomain sudo[335519]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:07 np0005546420.localdomain sshd[335518]: Received disconnect from 38.102.83.114 port 54166:11: disconnected by user
Dec 05 10:28:07 np0005546420.localdomain sshd[335518]: Disconnected from user zuul 38.102.83.114 port 54166
Dec 05 10:28:07 np0005546420.localdomain sshd[335515]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:28:07 np0005546420.localdomain systemd[1]: session-74.scope: Deactivated successfully.
Dec 05 10:28:07 np0005546420.localdomain systemd-logind[762]: Session 74 logged out. Waiting for processes to exit.
Dec 05 10:28:07 np0005546420.localdomain systemd-logind[762]: Removed session 74.
Dec 05 10:28:07 np0005546420.localdomain sshd[335537]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:07 np0005546420.localdomain sshd[335537]: Accepted publickey for zuul from 38.102.83.114 port 54174 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:28:07 np0005546420.localdomain systemd-logind[762]: New session 75 of user zuul.
Dec 05 10:28:07 np0005546420.localdomain systemd[1]: Started Session 75 of User zuul.
Dec 05 10:28:07 np0005546420.localdomain sshd[335537]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:28:08 np0005546420.localdomain sudo[335541]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/containers/containers.conf
Dec 05 10:28:08 np0005546420.localdomain sudo[335541]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:28:08 np0005546420.localdomain sudo[335541]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:08 np0005546420.localdomain sshd[335540]: Received disconnect from 38.102.83.114 port 54174:11: disconnected by user
Dec 05 10:28:08 np0005546420.localdomain sshd[335540]: Disconnected from user zuul 38.102.83.114 port 54174
Dec 05 10:28:08 np0005546420.localdomain sshd[335537]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:28:08 np0005546420.localdomain systemd[1]: session-75.scope: Deactivated successfully.
Dec 05 10:28:08 np0005546420.localdomain systemd-logind[762]: Session 75 logged out. Waiting for processes to exit.
Dec 05 10:28:08 np0005546420.localdomain systemd-logind[762]: Removed session 75.
Dec 05 10:28:08 np0005546420.localdomain sshd[335559]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:08 np0005546420.localdomain sshd[335559]: Accepted publickey for zuul from 38.102.83.114 port 54178 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:28:08 np0005546420.localdomain systemd-logind[762]: New session 76 of user zuul.
Dec 05 10:28:08 np0005546420.localdomain systemd[1]: Started Session 76 of User zuul.
Dec 05 10:28:08 np0005546420.localdomain sshd[335559]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:28:08 np0005546420.localdomain sudo[335563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/ceph
Dec 05 10:28:08 np0005546420.localdomain sudo[335563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:28:08 np0005546420.localdomain sudo[335563]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:08 np0005546420.localdomain sshd[335562]: Received disconnect from 38.102.83.114 port 54178:11: disconnected by user
Dec 05 10:28:08 np0005546420.localdomain sshd[335562]: Disconnected from user zuul 38.102.83.114 port 54178
Dec 05 10:28:08 np0005546420.localdomain sshd[335559]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:28:08 np0005546420.localdomain systemd[1]: session-76.scope: Deactivated successfully.
Dec 05 10:28:08 np0005546420.localdomain systemd-logind[762]: Session 76 logged out. Waiting for processes to exit.
Dec 05 10:28:08 np0005546420.localdomain systemd-logind[762]: Removed session 76.
Dec 05 10:28:08 np0005546420.localdomain ceph-mon[298353]: pgmap v1004: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:09 np0005546420.localdomain sshd[335581]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:09 np0005546420.localdomain sshd[335581]: Accepted publickey for zuul from 38.102.83.114 port 54192 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:28:09 np0005546420.localdomain systemd-logind[762]: New session 77 of user zuul.
Dec 05 10:28:09 np0005546420.localdomain systemd[1]: Started Session 77 of User zuul.
Dec 05 10:28:09 np0005546420.localdomain sshd[335581]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:28:09 np0005546420.localdomain sudo[335585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/ci
Dec 05 10:28:09 np0005546420.localdomain sudo[335585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:28:09 np0005546420.localdomain sudo[335585]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:09 np0005546420.localdomain sshd[335584]: Received disconnect from 38.102.83.114 port 54192:11: disconnected by user
Dec 05 10:28:09 np0005546420.localdomain sshd[335584]: Disconnected from user zuul 38.102.83.114 port 54192
Dec 05 10:28:09 np0005546420.localdomain sshd[335581]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:28:09 np0005546420.localdomain systemd[1]: session-77.scope: Deactivated successfully.
Dec 05 10:28:09 np0005546420.localdomain systemd-logind[762]: Session 77 logged out. Waiting for processes to exit.
Dec 05 10:28:09 np0005546420.localdomain systemd-logind[762]: Removed session 77.
Dec 05 10:28:09 np0005546420.localdomain sshd[335603]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:09 np0005546420.localdomain sshd[335603]: Accepted publickey for zuul from 38.102.83.114 port 58168 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:28:09 np0005546420.localdomain systemd-logind[762]: New session 78 of user zuul.
Dec 05 10:28:09 np0005546420.localdomain systemd[1]: Started Session 78 of User zuul.
Dec 05 10:28:09 np0005546420.localdomain sshd[335603]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:28:09 np0005546420.localdomain sudo[335607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/yum.conf
Dec 05 10:28:10 np0005546420.localdomain sudo[335607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:28:10 np0005546420.localdomain sudo[335607]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:10 np0005546420.localdomain sshd[335606]: Received disconnect from 38.102.83.114 port 58168:11: disconnected by user
Dec 05 10:28:10 np0005546420.localdomain sshd[335606]: Disconnected from user zuul 38.102.83.114 port 58168
Dec 05 10:28:10 np0005546420.localdomain sshd[335603]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:28:10 np0005546420.localdomain systemd[1]: session-78.scope: Deactivated successfully.
Dec 05 10:28:10 np0005546420.localdomain systemd-logind[762]: Session 78 logged out. Waiting for processes to exit.
Dec 05 10:28:10 np0005546420.localdomain systemd-logind[762]: Removed session 78.
Dec 05 10:28:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:10 np0005546420.localdomain sshd[335625]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:10 np0005546420.localdomain sshd[335627]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:10 np0005546420.localdomain sshd[335625]: Accepted publickey for zuul from 38.102.83.114 port 58176 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:28:10 np0005546420.localdomain systemd-logind[762]: New session 79 of user zuul.
Dec 05 10:28:10 np0005546420.localdomain systemd[1]: Started Session 79 of User zuul.
Dec 05 10:28:10 np0005546420.localdomain sshd[335625]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:28:10 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:10.843 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:10 np0005546420.localdomain ceph-mon[298353]: pgmap v1005: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:10 np0005546420.localdomain sudo[335631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/yum.repos.d
Dec 05 10:28:10 np0005546420.localdomain sudo[335631]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:28:10 np0005546420.localdomain sudo[335631]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:10 np0005546420.localdomain sshd[335630]: Received disconnect from 38.102.83.114 port 58176:11: disconnected by user
Dec 05 10:28:10 np0005546420.localdomain sshd[335630]: Disconnected from user zuul 38.102.83.114 port 58176
Dec 05 10:28:10 np0005546420.localdomain sshd[335625]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:28:10 np0005546420.localdomain systemd[1]: session-79.scope: Deactivated successfully.
Dec 05 10:28:10 np0005546420.localdomain systemd-logind[762]: Session 79 logged out. Waiting for processes to exit.
Dec 05 10:28:10 np0005546420.localdomain systemd-logind[762]: Removed session 79.
Dec 05 10:28:11 np0005546420.localdomain sshd[335649]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:11 np0005546420.localdomain sshd[335649]: Accepted publickey for zuul from 38.102.83.114 port 58190 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:28:11 np0005546420.localdomain systemd-logind[762]: New session 80 of user zuul.
Dec 05 10:28:11 np0005546420.localdomain systemd[1]: Started Session 80 of User zuul.
Dec 05 10:28:11 np0005546420.localdomain sshd[335649]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:28:11 np0005546420.localdomain sudo[335653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/os-net-config
Dec 05 10:28:11 np0005546420.localdomain sudo[335653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:28:11 np0005546420.localdomain sudo[335653]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:11 np0005546420.localdomain sshd[335652]: Received disconnect from 38.102.83.114 port 58190:11: disconnected by user
Dec 05 10:28:11 np0005546420.localdomain sshd[335652]: Disconnected from user zuul 38.102.83.114 port 58190
Dec 05 10:28:11 np0005546420.localdomain sshd[335649]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:28:11 np0005546420.localdomain systemd[1]: session-80.scope: Deactivated successfully.
Dec 05 10:28:11 np0005546420.localdomain systemd-logind[762]: Session 80 logged out. Waiting for processes to exit.
Dec 05 10:28:11 np0005546420.localdomain systemd-logind[762]: Removed session 80.
Dec 05 10:28:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:11.984 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:12 np0005546420.localdomain sshd[335671]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:12 np0005546420.localdomain sshd[335627]: Received disconnect from 197.248.8.33 port 40798:11: Bye Bye [preauth]
Dec 05 10:28:12 np0005546420.localdomain sshd[335627]: Disconnected from authenticating user root 197.248.8.33 port 40798 [preauth]
Dec 05 10:28:12 np0005546420.localdomain sshd[335671]: Accepted publickey for zuul from 38.102.83.114 port 58204 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:28:12 np0005546420.localdomain systemd-logind[762]: New session 81 of user zuul.
Dec 05 10:28:12 np0005546420.localdomain systemd[1]: Started Session 81 of User zuul.
Dec 05 10:28:12 np0005546420.localdomain sshd[335671]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:28:12 np0005546420.localdomain sudo[335675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /home/zuul/ansible_hostname
Dec 05 10:28:12 np0005546420.localdomain sudo[335675]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:28:12 np0005546420.localdomain sudo[335675]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:12 np0005546420.localdomain sshd[335674]: Received disconnect from 38.102.83.114 port 58204:11: disconnected by user
Dec 05 10:28:12 np0005546420.localdomain sshd[335674]: Disconnected from user zuul 38.102.83.114 port 58204
Dec 05 10:28:12 np0005546420.localdomain sshd[335671]: pam_unix(sshd:session): session closed for user zuul
Dec 05 10:28:12 np0005546420.localdomain systemd[1]: session-81.scope: Deactivated successfully.
Dec 05 10:28:12 np0005546420.localdomain systemd-logind[762]: Session 81 logged out. Waiting for processes to exit.
Dec 05 10:28:12 np0005546420.localdomain systemd-logind[762]: Removed session 81.
Dec 05 10:28:12 np0005546420.localdomain ceph-mon[298353]: pgmap v1006: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.962 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.963 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 05 10:28:12 np0005546420.localdomain ceilometer_agent_compute[237741]: 2025-12-05 10:28:12.965 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
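The burst of "Skip pollster <name>, no  resources found this cycle" lines is ceilometer's compute agent walking its configured meter list once per polling interval and short-circuiting every pollster whose resource discovery came back empty (no instances on this hypervisor yet); the doubled space suggests an empty resource-type field in the message template. The gist of that loop, as a sketch with illustrative names rather than the real ceilometer.polling.manager internals:

    def poll_and_notify(pollsters, discover):
        """One polling cycle: skip meters with no discovered resources."""
        samples = []
        for pollster in pollsters:
            resources = discover(pollster)
            if not resources:
                print(f"Skip pollster {pollster}, no resources found this cycle")
                continue
            samples.extend((pollster, r) for r in resources)
        return samples

    # With no instances on the host, every pollster is skipped:
    poll_and_notify(["cpu", "memory.usage", "disk.device.iops"], lambda p: [])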
Dec 05 10:28:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:28:13 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:28:13 np0005546420.localdomain podman[335693]: 2025-12-05 10:28:13.523436549 +0000 UTC m=+0.092558478 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 10:28:13 np0005546420.localdomain podman[335694]: 2025-12-05 10:28:13.575953271 +0000 UTC m=+0.143615564 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 10:28:13 np0005546420.localdomain podman[335693]: 2025-12-05 10:28:13.590988915 +0000 UTC m=+0.160110874 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 10:28:13 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:28:13 np0005546420.localdomain podman[335694]: 2025-12-05 10:28:13.619291738 +0000 UTC m=+0.186954051 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:28:13 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
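Each health_status line embeds the container's full config_data, the edpm_ansible-side description that gets rendered into a podman container. A rough translation of the fields seen in these entries into podman run flags; this is a sketch of the obvious mapping, not the actual edpm_ansible rendering logic:

    def podman_run_args(name: str, cfg: dict) -> list:
        args = ["podman", "run", "-d", "--name", name]
        if cfg.get("privileged"):
            args.append("--privileged")
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        args.append(cfg["image"])
        return args

    cfg = {"image": "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
           "net": "host", "privileged": True, "user": "root"}
    print(" ".join(podman_run_args("ovn_controller", cfg)))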
Dec 05 10:28:14 np0005546420.localdomain ceph-mon[298353]: pgmap v1007: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:15 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:15.892 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:16 np0005546420.localdomain ceph-mon[298353]: pgmap v1008: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:17.008 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:28:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:28:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:28:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:28:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:28:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18313 "" "Go-http-client/1.1"
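These two podman[240363] access-log lines are the podman system service answering libpod REST calls over its Unix socket; the GET /v4.9.3/libpod/containers/json requests match what prometheus-podman-exporter (configured above with CONTAINER_HOST=unix:///run/podman/podman.sock) issues on each scrape. A stdlib sketch of the same query, with the socket path taken from the exporter's config_data in this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")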
Dec 05 10:28:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:28:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:28:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:28:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:28:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:28:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:28:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:28:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:28:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:28:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
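The openstack_network_exporter errors above repeat on every scrape: its ovs-appctl-style calls need the daemons' control sockets (ovsdb-server and ovn-northd publish *.ctl files under their run directories), and this compute node runs neither ovn-northd nor a PMD datapath, so the calls fail fast. A quick presence check for those sockets; the run-directory paths mirror the volume mounts in the exporter's config_data earlier in this log and should be treated as assumptions:

    from pathlib import Path

    def control_sockets(run_dir: str):
        # ovsdb-server / ovs-vswitchd / ovn-northd create <name>.<pid>.ctl here
        return sorted(Path(run_dir).glob("*.ctl"))

    for d in ("/run/openvswitch", "/run/ovn"):
        print(d, "->", control_sockets(d) or "no control socket files found")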
Dec 05 10:28:18 np0005546420.localdomain ceph-mon[298353]: pgmap v1009: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:19 np0005546420.localdomain sshd[335740]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:28:19 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 1800.0 total, 600.0 interval
                                                           Cumulative writes: 6619 writes, 46K keys, 6619 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s
                                                           Cumulative WAL: 6619 writes, 6619 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 2047 writes, 10K keys, 2047 commit groups, 1.0 writes per commit group, ingest: 12.90 MB, 0.02 MB/s
                                                           Interval WAL: 2047 writes, 2047 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    130.2      0.40              0.13        24    0.016       0      0       0.0       0.0
                                                             L6      1/0   17.85 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   7.7    177.9    164.1      2.43              1.09        23    0.106    307K    11K       0.0       0.0
                                                            Sum      1/0   17.85 MB   0.0      0.4     0.1      0.4       0.4      0.1       0.0   8.7    153.0    159.4      2.83              1.22        47    0.060    307K    11K       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0  15.4    155.6    156.6      0.87              0.44        14    0.062    105K   3727       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0    177.9    164.1      2.43              1.09        23    0.106    307K    11K       0.0       0.0
                                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    131.7      0.39              0.13        23    0.017       0      0       0.0       0.0
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 1800.0 total, 600.0 interval
                                                           Flush(GB): cumulative 0.050, interval 0.009
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.44 GB write, 0.25 MB/s write, 0.42 GB read, 0.24 MB/s read, 2.8 seconds
                                                           Interval compaction: 0.13 GB write, 0.23 MB/s write, 0.13 GB read, 0.23 MB/s read, 0.9 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x557fb868b350#2 capacity: 304.00 MB usage: 36.28 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000273 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(2085,34.32 MB,11.2881%) FilterBlock(47,877.92 KB,0.282022%) IndexBlock(47,1.10 MB,0.363335%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
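That DUMPING STATS block is RocksDB's periodic self-report for the mon's store.db (600 s reporting interval, 1800 s uptime at this point). Its derived rates can be re-checked from the raw counters; a quick sanity check of two of them, with values copied from the dump above (RocksDB's GB/MB are binary units):

    # Cumulative writes: ingest 0.07 GB over 1800 s uptime.
    print(f"{0.07 * 1024 / 1800.0:.2f} MB/s")   # 0.04, as printed above

    # Interval WAL: 2047 writes, 2047 syncs.
    print(f"{2047 / 2047:.2f} writes per sync")  # 1.00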
Dec 05 10:28:19 np0005546420.localdomain ceph-mon[298353]: pgmap v1010: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:20 np0005546420.localdomain sshd[335740]: Received disconnect from 178.217.173.50 port 34130:11: Bye Bye [preauth]
Dec 05 10:28:20 np0005546420.localdomain sshd[335740]: Disconnected from authenticating user root 178.217.173.50 port 34130 [preauth]
Dec 05 10:28:20 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:20.895 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.691898) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930501692005, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1313, "num_deletes": 256, "total_data_size": 1992962, "memory_usage": 2026056, "flush_reason": "Manual Compaction"}
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930501703523, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 1308405, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45193, "largest_seqno": 46501, "table_properties": {"data_size": 1303118, "index_size": 2758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11676, "raw_average_key_size": 19, "raw_value_size": 1292156, "raw_average_value_size": 2208, "num_data_blocks": 119, "num_entries": 585, "num_filter_entries": 585, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930412, "oldest_key_time": 1764930412, "file_creation_time": 1764930501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 11668 microseconds, and 4599 cpu microseconds.
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.703574) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 1308405 bytes OK
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.703597) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.705642) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.705664) EVENT_LOG_v1 {"time_micros": 1764930501705658, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.705687) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1986635, prev total WAL file size 1986959, number of live WAL files 2.
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.706482) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034373639' seq:72057594037927935, type:22 .. '6C6F676D0035303230' seq:0, type:0; will stop at (end)
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(1277KB)], [81(17MB)]
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930501706554, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 20021545, "oldest_snapshot_seqno": -1}
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 14750 keys, 19793966 bytes, temperature: kUnknown
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930501825531, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 19793966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19707984, "index_size": 47993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36933, "raw_key_size": 396740, "raw_average_key_size": 26, "raw_value_size": 19455801, "raw_average_value_size": 1319, "num_data_blocks": 1774, "num_entries": 14750, "num_filter_entries": 14750, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928699, "oldest_key_time": 0, "file_creation_time": 1764930501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff34b52a-187a-4f6e-ae40-2039f644a3dd", "db_session_id": "BP9PLUSCNVOX5JUVXFD5", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.826086) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 19793966 bytes
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.828344) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.9 rd, 166.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 17.8 +0.0 blob) out(18.9 +0.0 blob), read-write-amplify(30.4) write-amplify(15.1) OK, records in: 15290, records dropped: 540 output_compression: NoCompression
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.828372) EVENT_LOG_v1 {"time_micros": 1764930501828360, "job": 50, "event": "compaction_finished", "compaction_time_micros": 119244, "compaction_time_cpu_micros": 61724, "output_level": 6, "num_output_files": 1, "total_output_size": 19793966, "num_input_records": 15290, "num_output_records": 14750, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930501829177, "job": 50, "event": "table_file_deletion", "file_number": 83}
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546420/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930501832505, "job": 50, "event": "table_file_deletion", "file_number": 81}
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.706370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.832723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.832731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.832735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.832738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:21 np0005546420.localdomain ceph-mon[298353]: rocksdb: (Original Log Time 2025/12/05-10:28:21.832741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 10:28:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:22.039 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:22 np0005546420.localdomain ceph-mon[298353]: pgmap v1011: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:28:24 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:28:24 np0005546420.localdomain podman[335743]: 2025-12-05 10:28:24.53035464 +0000 UTC m=+0.099057749 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 10:28:24 np0005546420.localdomain podman[335742]: 2025-12-05 10:28:24.610079421 +0000 UTC m=+0.178653866 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:28:24 np0005546420.localdomain podman[335743]: 2025-12-05 10:28:24.616822889 +0000 UTC m=+0.185525988 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 10:28:24 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:28:24 np0005546420.localdomain podman[335742]: 2025-12-05 10:28:24.670703012 +0000 UTC m=+0.239277497 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 10:28:24 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:28:24 np0005546420.localdomain ceph-mon[298353]: pgmap v1012: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:25 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:25.934 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:26 np0005546420.localdomain ceph-mon[298353]: pgmap v1013: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:27.082 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:28 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:28:28 np0005546420.localdomain podman[335783]: 2025-12-05 10:28:28.511101888 +0000 UTC m=+0.086057527 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:28:28 np0005546420.localdomain podman[335783]: 2025-12-05 10:28:28.525142082 +0000 UTC m=+0.100097711 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 10:28:28 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:28:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:28.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:28 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:28.871 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:28:28 np0005546420.localdomain ceph-mon[298353]: pgmap v1014: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:30 np0005546420.localdomain ceph-mon[298353]: pgmap v1015: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:30 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:30.937 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:32.126 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:32 np0005546420.localdomain ceph-mon[298353]: pgmap v1016: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:33.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:34 np0005546420.localdomain ceph-mon[298353]: pgmap v1017: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:35 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:35 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:35.961 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:28:36 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:28:36 np0005546420.localdomain podman[335802]: 2025-12-05 10:28:36.501349142 +0000 UTC m=+0.075269175 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, vcs-type=git)
Dec 05 10:28:36 np0005546420.localdomain podman[335803]: 2025-12-05 10:28:36.568320739 +0000 UTC m=+0.137284900 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:28:36 np0005546420.localdomain podman[335803]: 2025-12-05 10:28:36.579782143 +0000 UTC m=+0.148746304 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 10:28:36 np0005546420.localdomain podman[335802]: 2025-12-05 10:28:36.595123697 +0000 UTC m=+0.169043740 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public)
Dec 05 10:28:36 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:28:36 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:28:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:36.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:36.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 10:28:36 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:36.873 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 10:28:36 np0005546420.localdomain ceph-mon[298353]: pgmap v1018: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:36 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3180617333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:28:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:37.170 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:37 np0005546420.localdomain sudo[335846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 10:28:37 np0005546420.localdomain sudo[335846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:28:37 np0005546420.localdomain sudo[335846]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:37 np0005546420.localdomain sudo[335864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 05 10:28:37 np0005546420.localdomain sudo[335864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:28:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:37.579 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 10:28:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:37.870 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:37 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:37.873 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:38 np0005546420.localdomain sudo[335864]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:38 np0005546420.localdomain ceph-mon[298353]: pgmap v1019: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:28:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:28:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:28:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 05 10:28:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:28:38 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 05 10:28:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:38.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:38 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:38.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:38 np0005546420.localdomain sudo[335915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 10:28:38 np0005546420.localdomain sudo[335915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 05 10:28:38 np0005546420.localdomain sudo[335915]: pam_unix(sudo:session): session closed for user root
Dec 05 10:28:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:39.013 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:28:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:39.014 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:28:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:39.014 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:28:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:39.014 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Auditing locally available compute resources for np0005546420.localdomain (node: np0005546420.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 10:28:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:39.015 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:28:39 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:28:39 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3663309502' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:28:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:39.486 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:28:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:39.689 281103 WARNING nova.virt.libvirt.driver [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 10:28:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:39.691 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Hypervisor/Node resource view: name=np0005546420.localdomain free_ram=11419MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 10:28:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:39.692 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:28:39 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:39.692 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:28:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3663309502' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:28:39 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1155497546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:28:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:40.204 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 10:28:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:40.204 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Final resource view: name=np0005546420.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 10:28:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:40.222 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 10:28:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:40 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 05 10:28:40 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3373956963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:28:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:40.639 281103 DEBUG oslo_concurrency.processutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 10:28:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:40.647 281103 DEBUG nova.compute.provider_tree [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed in ProviderTree for provider: 2850b2c4-8d07-40ab-9d82-672172ca70fc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 10:28:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:40.828 281103 DEBUG nova.scheduler.client.report [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Inventory has not changed for provider 2850b2c4-8d07-40ab-9d82-672172ca70fc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 10:28:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:40.830 281103 DEBUG nova.compute.resource_tracker [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Compute_service record updated for np0005546420.localdomain:np0005546420.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 10:28:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:40.830 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:28:40 np0005546420.localdomain ceph-mon[298353]: pgmap v1020: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:40 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3373956963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:28:40 np0005546420.localdomain ceph-mon[298353]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
Dec 05 10:28:40 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:40.964 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:42.200 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:42 np0005546420.localdomain ceph-mon[298353]: pgmap v1021: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:42 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:42.828 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:28:44 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:28:44 np0005546420.localdomain podman[335977]: 2025-12-05 10:28:44.527576347 +0000 UTC m=+0.102747483 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:28:44 np0005546420.localdomain podman[335977]: 2025-12-05 10:28:44.562162085 +0000 UTC m=+0.137333211 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:28:44 np0005546420.localdomain podman[335978]: 2025-12-05 10:28:44.56592085 +0000 UTC m=+0.134494643 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 05 10:28:44 np0005546420.localdomain podman[335978]: 2025-12-05 10:28:44.611450977 +0000 UTC m=+0.180024720 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 05 10:28:44 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:28:44 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:28:45 np0005546420.localdomain ceph-mon[298353]: pgmap v1022: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:45 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3797011102' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:28:45 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:45 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:45.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:46 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:45.998 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:46 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2015441384' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 05 10:28:46 np0005546420.localdomain ceph-mon[298353]: pgmap v1023: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:47 np0005546420.localdomain podman[240363]: time="2025-12-05T10:28:47Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:28:47 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:47.236 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:28:47 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:28:47 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:28:47 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18310 "" "Go-http-client/1.1"
Dec 05 10:28:48 np0005546420.localdomain ceph-mon[298353]: pgmap v1024: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:28:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:28:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:28:48 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:28:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:28:48 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:28:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:28:48 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:28:48 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:28:48 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:28:50 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:50 np0005546420.localdomain ceph-mon[298353]: pgmap v1025: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:51 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:51.000 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:52 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:52.267 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:52 np0005546420.localdomain sshd[336020]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:52 np0005546420.localdomain ceph-mon[298353]: pgmap v1026: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:54 np0005546420.localdomain sshd[336020]: Received disconnect from 103.231.14.54 port 37980:11: Bye Bye [preauth]
Dec 05 10:28:54 np0005546420.localdomain sshd[336020]: Disconnected from authenticating user root 103.231.14.54 port 37980 [preauth]
Dec 05 10:28:54 np0005546420.localdomain ceph-mon[298353]: pgmap v1027: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:55 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:28:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:28:55 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:28:55 np0005546420.localdomain podman[336022]: 2025-12-05 10:28:55.516760859 +0000 UTC m=+0.088291466 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:28:55 np0005546420.localdomain podman[336022]: 2025-12-05 10:28:55.549450399 +0000 UTC m=+0.120980906 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 10:28:55 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:28:55 np0005546420.localdomain podman[336023]: 2025-12-05 10:28:55.570011323 +0000 UTC m=+0.139469286 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 10:28:55 np0005546420.localdomain podman[336023]: 2025-12-05 10:28:55.605477438 +0000 UTC m=+0.174935441 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 10:28:55 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:28:56 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:56.046 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:56 np0005546420.localdomain sshd[336062]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:28:56 np0005546420.localdomain ceph-mon[298353]: pgmap v1028: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:57 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:57.309 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:28:57 np0005546420.localdomain sshd[336062]: Received disconnect from 24.232.50.5 port 54632:11: Bye Bye [preauth]
Dec 05 10:28:57 np0005546420.localdomain sshd[336062]: Disconnected from authenticating user root 24.232.50.5 port 54632 [preauth]
Dec 05 10:28:58 np0005546420.localdomain ceph-mon[298353]: pgmap v1029: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:28:59 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:28:59 np0005546420.localdomain podman[336064]: 2025-12-05 10:28:59.520031663 +0000 UTC m=+0.090731372 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 10:28:59 np0005546420.localdomain podman[336064]: 2025-12-05 10:28:59.533810208 +0000 UTC m=+0.104509937 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 10:28:59 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:28:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:59.872 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:28:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:59.872 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:28:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:59.874 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:28:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:59.874 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:28:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:59.875 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:28:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:59.876 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:28:59 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:28:59.876 281103 DEBUG oslo_concurrency.lockutils [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:29:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:00.020 281103 DEBUG nova.virt.libvirt.imagecache [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec 05 10:29:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:00.020 281103 WARNING nova.virt.libvirt.imagecache [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6
Dec 05 10:29:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:00.020 281103 INFO nova.virt.libvirt.imagecache [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Removable base files: /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6
Dec 05 10:29:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:00.021 281103 INFO nova.virt.libvirt.imagecache [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6
Dec 05 10:29:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:00.021 281103 DEBUG nova.virt.libvirt.imagecache [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec 05 10:29:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:00.022 281103 DEBUG nova.virt.libvirt.imagecache [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec 05 10:29:00 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:00.022 281103 DEBUG nova.virt.libvirt.imagecache [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec 05 10:29:00 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:00 np0005546420.localdomain ceph-mon[298353]: pgmap v1030: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:01 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:01.049 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:02 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:02.349 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:02 np0005546420.localdomain ceph-mon[298353]: pgmap v1031: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 05 10:29:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/51186308' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:29:03 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 05 10:29:03 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/51186308' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:29:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/51186308' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 05 10:29:03 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.32:0/51186308' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 05 10:29:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:29:04.147 159503 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 10:29:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:29:04.148 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 10:29:04 np0005546420.localdomain ovn_metadata_agent[159498]: 2025-12-05 10:29:04.148 159503 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 10:29:04 np0005546420.localdomain ceph-mon[298353]: pgmap v1032: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:05 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:06 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:06.090 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:06 np0005546420.localdomain sshd[336083]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:29:06 np0005546420.localdomain sshd[336083]: Accepted publickey for zuul from 192.168.122.10 port 41574 ssh2: RSA SHA256:XfUoQ8HO2f79272Jng5eaRyyfhA8XDZLUIZXMYs+beU
Dec 05 10:29:06 np0005546420.localdomain systemd-logind[762]: New session 82 of user zuul.
Dec 05 10:29:06 np0005546420.localdomain systemd[1]: Started Session 82 of User zuul.
Dec 05 10:29:06 np0005546420.localdomain sshd[336083]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 05 10:29:06 np0005546420.localdomain sudo[336087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt
Dec 05 10:29:06 np0005546420.localdomain sudo[336087]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 05 10:29:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.
Dec 05 10:29:06 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.
Dec 05 10:29:06 np0005546420.localdomain podman[336114]: 2025-12-05 10:29:06.823038832 +0000 UTC m=+0.087432980 container health_status 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 10:29:06 np0005546420.localdomain podman[336114]: 2025-12-05 10:29:06.833209896 +0000 UTC m=+0.097604044 container exec_died 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec 05 10:29:06 np0005546420.localdomain ceph-mon[298353]: pgmap v1033: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:06 np0005546420.localdomain podman[336115]: 2025-12-05 10:29:06.900155303 +0000 UTC m=+0.159126554 container health_status db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 10:29:06 np0005546420.localdomain podman[336115]: 2025-12-05 10:29:06.914334671 +0000 UTC m=+0.173305952 container exec_died db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 10:29:07 np0005546420.localdomain systemd[1]: 3552c0dc62de4def1b78f817464cff3d548dbd77932238f0e3b8f44401221f74.service: Deactivated successfully.
Dec 05 10:29:07 np0005546420.localdomain systemd[1]: db7c9a5d03b36e81349423468ece7b5a7a0debf472d329393a1d39cfff53d5f9.service: Deactivated successfully.
Dec 05 10:29:07 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:07.395 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:08 np0005546420.localdomain ceph-mon[298353]: pgmap v1034: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "status"} v 0)
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1972723173' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: from='client.49527 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: from='client.69404 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: from='client.59215 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: from='client.49533 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: from='client.69410 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: pgmap v1035: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: from='client.59221 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3115331078' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2286137187' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 05 10:29:10 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1972723173' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 05 10:29:11 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:11.093 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:12 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:12.394 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:12 np0005546420.localdomain ceph-mon[298353]: pgmap v1036: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:12 np0005546420.localdomain ovs-vsctl[336371]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 05 10:29:13 np0005546420.localdomain virtqemud[229316]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 05 10:29:13 np0005546420.localdomain virtqemud[229316]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 05 10:29:13 np0005546420.localdomain virtqemud[229316]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 05 10:29:14 np0005546420.localdomain systemd[1]: efi.automount: Got automount request for /efi, triggered by 336521 (lsinitrd)
Dec 05 10:29:14 np0005546420.localdomain systemd[1]: Mounting EFI System Partition Automount...
Dec 05 10:29:14 np0005546420.localdomain systemd[1]: Mounted EFI System Partition Automount.
Dec 05 10:29:14 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: cache status {prefix=cache status} (starting...)
Dec 05 10:29:14 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: client ls {prefix=client ls} (starting...)
Dec 05 10:29:14 np0005546420.localdomain lvm[336603]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 10:29:14 np0005546420.localdomain lvm[336603]: VG ceph_vg1 finished
Dec 05 10:29:14 np0005546420.localdomain lvm[336623]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 10:29:14 np0005546420.localdomain lvm[336623]: VG ceph_vg0 finished
Dec 05 10:29:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.
Dec 05 10:29:14 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.
Dec 05 10:29:14 np0005546420.localdomain podman[336629]: 2025-12-05 10:29:14.810332495 +0000 UTC m=+0.116103796 container health_status d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Dec 05 10:29:14 np0005546420.localdomain ceph-mon[298353]: pgmap v1037: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:14 np0005546420.localdomain podman[336674]: 2025-12-05 10:29:14.901693725 +0000 UTC m=+0.090194165 container health_status 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 05 10:29:14 np0005546420.localdomain podman[336629]: 2025-12-05 10:29:14.922999483 +0000 UTC m=+0.228770824 container exec_died d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 10:29:14 np0005546420.localdomain systemd[1]: d6bf748e5e7f598c3c7843bdfe8114e5427a372e357d0d59df107711b0bb77c0.service: Deactivated successfully.
Dec 05 10:29:14 np0005546420.localdomain podman[336674]: 2025-12-05 10:29:14.942303359 +0000 UTC m=+0.130803789 container exec_died 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 10:29:14 np0005546420.localdomain systemd[1]: 94fe534dba23e6e3cf61e45399b1fb7f921505d7bb062ed15701c67f22fa6110.service: Deactivated successfully.
Dec 05 10:29:15 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: damage ls {prefix=damage ls} (starting...)
Dec 05 10:29:15 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: dump loads {prefix=dump loads} (starting...)
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:15 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 05 10:29:15 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "report"} v 0)
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/51674033' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 05 10:29:15 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.49545 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.59233 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.49557 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.69431 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.59242 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3739841657' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.? ' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1440446841' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1810985558' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/51674033' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.? ' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3604837219' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:29:15 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3583268805' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3094496836' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:16.144 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:16 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "config log"} v 0)
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2201719142' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: ops {prefix=ops} (starting...)
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3640136105' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/960505685' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.69440 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.49578 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.59260 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: pgmap v1038: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.69470 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1375169561' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3094496836' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/843187867' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/4043968650' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2634953243' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/231862538' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2201719142' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/4150593839' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3640136105' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/937971322' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/358917005' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 05 10:29:16 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1785589702' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: session ls {prefix=session ls} (starting...)
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2561782686' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain podman[240363]: time="2025-12-05T10:29:17Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 10:29:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:29:17 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153091 "" "Go-http-client/1.1"
Dec 05 10:29:17 np0005546420.localdomain podman[240363]: @ - - [05/Dec/2025:10:29:17 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18312 "" "Go-http-client/1.1"
Dec 05 10:29:17 np0005546420.localdomain ceph-mds[283770]: mds.mds.np0005546420.eqhasr asok_command: status {prefix=status} (starting...)
Dec 05 10:29:17 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:17.424 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:17 np0005546420.localdomain sshd[337033]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/328669908' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3913897135' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.49629 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/960505685' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/413025714' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2561782686' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.49647 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.69521 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2547451321' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.59320 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/328669908' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2491698805' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.69530 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2060178853' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.59344 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: pgmap v1039: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1845042962' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 05 10:29:17 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3913897135' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:29:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "features"} v 0)
Dec 05 10:29:18 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/977089619' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 05 10:29:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 05 10:29:18 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1230068428' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 05 10:29:18 np0005546420.localdomain sshd[337033]: Received disconnect from 163.44.99.31 port 35306:11: Bye Bye [preauth]
Dec 05 10:29:18 np0005546420.localdomain sshd[337033]: Disconnected from authenticating user root 163.44.99.31 port 35306 [preauth]
Dec 05 10:29:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 05 10:29:18 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1187242810' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 05 10:29:18 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 05 10:29:18 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4042205101' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 05 10:29:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:29:18 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 10:29:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:29:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:29:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:29:18 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 10:29:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:29:18 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 10:29:18 np0005546420.localdomain openstack_network_exporter[242579]: ERROR   10:29:18 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? ' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/918391306' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1689159897' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3752640158' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3338649186' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/977089619' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? ' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1230068428' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1560698529' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3488102091' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.49689 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2335163572' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1187242810' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/4042205101' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.69569 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: from='client.49695 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 05 10:29:19 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1581666327' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2531545515' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.59386 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/4000901748' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.59392 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.49713 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2969437894' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.69593 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1514262059' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.49725 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1581666327' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.49731 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: pgmap v1040: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.69608 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1704229162' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2538132355' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1065335361' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:21.702612+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115875840 unmapped: 60129280 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:22.702788+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115875840 unmapped: 60129280 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:23.702937+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115875840 unmapped: 60129280 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:24.703143+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115892224 unmapped: 60112896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:25.703335+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115892224 unmapped: 60112896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:26.703534+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:27.703673+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:28.703825+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:29.704018+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:30.704175+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:31.704374+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:32.704535+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:33.705347+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:34.705574+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:35.705742+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:36.705939+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:37.706200+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:38.707040+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:39.707246+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115900416 unmapped: 60104704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:40.707459+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 60096512 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:41.707621+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 60096512 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets getting new tickets!
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:42.708098+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _finish_auth 0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:42.708949+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 60096512 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:43.708283+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 60096512 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:44.708430+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 60096512 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:45.708589+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 60096512 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:46.708801+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 60096512 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:47.709046+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 60096512 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:48.709245+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 60088320 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:49.709435+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 60088320 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:50.709611+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 60088320 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:51.709749+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 60088320 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:52.709893+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 60088320 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:53.710058+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 60088320 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:54.710258+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 60088320 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:55.710427+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115924992 unmapped: 60080128 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:56.710606+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115933184 unmapped: 60071936 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:57.710776+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:58.711045+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:59.711221+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:00.711403+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:01.711608+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:02.711779+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:03.712003+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:04.712196+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:05.712402+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:06.712536+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:07.712731+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:08.712910+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:09.713070+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:10.713272+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:11.713457+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115949568 unmapped: 60055552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:12.713634+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115965952 unmapped: 60039168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:13.713803+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115965952 unmapped: 60039168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:14.713988+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115965952 unmapped: 60039168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:15.714152+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115965952 unmapped: 60039168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:16.714346+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115965952 unmapped: 60039168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:17.714516+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115965952 unmapped: 60039168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:18.714799+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115965952 unmapped: 60039168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1377366 data_alloc: 285212672 data_used: 10739712
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:19.714938+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115965952 unmapped: 60039168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:20.715143+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b6ec8000/0x0/0x1bfc00000, data 0x4701e80/0x47c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115982336 unmapped: 60022784 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:21.715393+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 115982336 unmapped: 60022784 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:22.715575+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 83.998764038s of 84.180213928s, submitted: 57
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116006912 unmapped: 59998208 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 125 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:23.715725+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 125 ms_handle_reset con 0x56455ef90800 session 0x56455ec8cb40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116080640 unmapped: 59924480 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564566e9d000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1385960 data_alloc: 285212672 data_used: 10756096
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:24.715861+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 126 ms_handle_reset con 0x564566e9d000 session 0x564560f83c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116187136 unmapped: 59817984 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:25.716023+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 126 heartbeat osd_stat(store_statfs(0x1b6ec0000/0x0/0x1bfc00000, data 0x47064f8/0x47cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116195328 unmapped: 59809792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:26.716207+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116195328 unmapped: 59809792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:27.716387+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116195328 unmapped: 59809792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:28.716636+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116195328 unmapped: 59809792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1385725 data_alloc: 285212672 data_used: 10752000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:29.716904+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116195328 unmapped: 59809792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:30.717067+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:31.717253+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 127 heartbeat osd_stat(store_statfs(0x1b6ebc000/0x0/0x1bfc00000, data 0x47086f0/0x47d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:32.717461+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:33.717698+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1389751 data_alloc: 285212672 data_used: 10764288
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 127 heartbeat osd_stat(store_statfs(0x1b6ebc000/0x0/0x1bfc00000, data 0x47086f0/0x47d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:34.717910+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:35.718824+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 127 heartbeat osd_stat(store_statfs(0x1b6ebc000/0x0/0x1bfc00000, data 0x47086f0/0x47d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:36.719326+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:37.720944+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:38.721357+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1389751 data_alloc: 285212672 data_used: 10764288
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:39.721538+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 127 heartbeat osd_stat(store_statfs(0x1b6ebc000/0x0/0x1bfc00000, data 0x47086f0/0x47d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:40.721722+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:41.721926+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:42.722115+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 127 heartbeat osd_stat(store_statfs(0x1b6ebc000/0x0/0x1bfc00000, data 0x47086f0/0x47d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:43.722621+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 127 heartbeat osd_stat(store_statfs(0x1b6ebc000/0x0/0x1bfc00000, data 0x47086f0/0x47d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1389751 data_alloc: 285212672 data_used: 10764288
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:44.722797+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 59711488 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:45.723010+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d01400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116301824 unmapped: 59703296 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:46.723316+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 23.733194351s of 24.034797668s, submitted: 86
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 128 heartbeat osd_stat(store_statfs(0x1b6ebc000/0x0/0x1bfc00000, data 0x47086f0/0x47d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116318208 unmapped: 59686912 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:47.723497+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 ms_handle_reset con 0x564560d01400 session 0x564562914000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116342784 unmapped: 59662336 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:48.723711+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116342784 unmapped: 59662336 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1399661 data_alloc: 285212672 data_used: 10764288
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:49.723902+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 ms_handle_reset con 0x56455ef90000 session 0x5645629145a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e9800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 ms_handle_reset con 0x5645611e9800 session 0x564560ea2f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116375552 unmapped: 59629568 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:50.724232+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 ms_handle_reset con 0x56455ef90000 session 0x5645643332c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 heartbeat osd_stat(store_statfs(0x1b6eb4000/0x0/0x1bfc00000, data 0x470cd7b/0x47da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116375552 unmapped: 59629568 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:51.724440+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116383744 unmapped: 59621376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 ms_handle_reset con 0x56456153ac00 session 0x5645600301e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d00800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:52.724798+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 ms_handle_reset con 0x564560d00800 session 0x564564839860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 59580416 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:53.725111+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d00000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 ms_handle_reset con 0x564560d00000 session 0x564564839c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116441088 unmapped: 59564032 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1402577 data_alloc: 285212672 data_used: 10768384
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:54.725334+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560053000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 ms_handle_reset con 0x564560053000 session 0x564564839e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116457472 unmapped: 59547648 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 ms_handle_reset con 0x56455ef90000 session 0x564560d1e960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:55.725592+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116498432 unmapped: 59506688 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:56.725846+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eaf000/0x0/0x1bfc00000, data 0x470ef73/0x47de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116498432 unmapped: 59506688 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:57.726051+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a4000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.504869461s of 10.795553207s, submitted: 85
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 ms_handle_reset con 0x5645629a4000 session 0x5645629085a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116498432 unmapped: 59506688 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:58.726271+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e8400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 ms_handle_reset con 0x5645611e8400 session 0x56456383de00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562865800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 ms_handle_reset con 0x564562865800 session 0x56456383dc20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116531200 unmapped: 59473920 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1405958 data_alloc: 285212672 data_used: 10780672
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:59.726520+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb93400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 ms_handle_reset con 0x56455fb93400 session 0x564560f850e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116531200 unmapped: 59473920 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:00.726740+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eae000/0x0/0x1bfc00000, data 0x470efe5/0x47e0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116531200 unmapped: 59473920 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:01.726924+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456139d400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 ms_handle_reset con 0x56456139d400 session 0x564560f834a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d01c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 ms_handle_reset con 0x564560d01c00 session 0x564564839860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:02.727136+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:03.727334+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1407799 data_alloc: 285212672 data_used: 10780672
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:04.727521+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:05.727675+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:06.727841+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eb0000/0x0/0x1bfc00000, data 0x470ef73/0x47de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:07.728109+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:08.731377+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1407799 data_alloc: 285212672 data_used: 10780672
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:09.731542+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eb0000/0x0/0x1bfc00000, data 0x470ef73/0x47de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:10.731796+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:11.734753+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:12.736008+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:13.737581+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116547584 unmapped: 59457536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1407799 data_alloc: 285212672 data_used: 10780672
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:14.737754+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eb0000/0x0/0x1bfc00000, data 0x470ef73/0x47de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116563968 unmapped: 59441152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:15.738235+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116563968 unmapped: 59441152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:16.740049+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116563968 unmapped: 59441152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:17.740557+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eb0000/0x0/0x1bfc00000, data 0x470ef73/0x47de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:18.740893+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116563968 unmapped: 59441152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:19.741065+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116563968 unmapped: 59441152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1407959 data_alloc: 285212672 data_used: 10784768
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:20.741424+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eb0000/0x0/0x1bfc00000, data 0x470ef73/0x47de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116563968 unmapped: 59441152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:21.741748+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116563968 unmapped: 59441152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:22.742049+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116563968 unmapped: 59441152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:23.742237+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116563968 unmapped: 59441152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eb0000/0x0/0x1bfc00000, data 0x470ef73/0x47de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:24.742422+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1407959 data_alloc: 285212672 data_used: 10784768
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116563968 unmapped: 59441152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:25.742580+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:26.742774+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eb0000/0x0/0x1bfc00000, data 0x470ef73/0x47de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:27.743327+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:28.743950+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:29.744217+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1407959 data_alloc: 285212672 data_used: 10784768
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:30.744406+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:31.744572+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:32.744751+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eb0000/0x0/0x1bfc00000, data 0x470ef73/0x47de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:33.745083+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:34.745280+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 heartbeat osd_stat(store_statfs(0x1b6eb0000/0x0/0x1bfc00000, data 0x470ef73/0x47de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1407959 data_alloc: 285212672 data_used: 10784768
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:35.745459+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:36.745618+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116572160 unmapped: 59432960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:37.745755+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 39.531276703s of 39.648880005s, submitted: 30
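The _kv_sync_thread utilization line gives idle time over a sampling window plus the number of submitted transactions, so the sync thread's busy share is one subtraction and one division away (values copied from the line above):

    # Busy fraction of the RocksDB kv sync thread for this window.
    idle, window, submitted = 39.531276703, 39.648880005, 30
    busy = window - idle
    print(f"busy {busy:.3f}s of {window:.3f}s "
          f"({100 * busy / window:.2f}%), {submitted} transactions")

That works out to about 0.12 s busy in a ~39.6 s window (~0.3%), consistent with a nearly idle OSD.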
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116580352 unmapped: 59424768 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
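handle_osd_map lines read as: the incoming message carries maps for epochs [first,last], the OSD's newest local epoch is "i have", and the sender can backfill anything in "src has". A toy sketch of that bookkeeping, assuming the straightforward catch-up rule (this mirrors the log fields, not Ceph's actual C++):

    # Hedged sketch: which epochs from an incoming map message still
    # need to be applied, given the newest epoch already held.
    def maps_to_apply(msg_first: int, msg_last: int, have: int) -> range:
        start = max(msg_first, have + 1)
        return range(start, msg_last + 1)

    # "epochs [131,131], i have 130" -> apply epoch 131 only
    print(list(maps_to_apply(131, 131, 130)))   # [131]

This also explains the later repeated lines such as "epochs [137,137], i have 137": once the OSD already holds the advertised epoch, duplicate copies of the same map message yield nothing to apply.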
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:38.745936+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 116588544 unmapped: 59416576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e9400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 131 ms_handle_reset con 0x5645611e9400 session 0x5645600301e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 131 heartbeat osd_stat(store_statfs(0x1b6ea9000/0x0/0x1bfc00000, data 0x4711696/0x47e4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456004ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:39.746105+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1422998 data_alloc: 301989888 data_used: 15450112
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 120258560 unmapped: 55746560 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 131 heartbeat osd_stat(store_statfs(0x1b6ea8000/0x0/0x1bfc00000, data 0x47116b4/0x47e5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 132 ms_handle_reset con 0x56456004ac00 session 0x5645643332c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:40.746253+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e8800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 132 ms_handle_reset con 0x5645611e8800 session 0x56455ff7d680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 120299520 unmapped: 55705600 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456004ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 132 ms_handle_reset con 0x56456004ac00 session 0x56455ec8a3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:41.746362+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 120233984 unmapped: 55771136 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:42.746474+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
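The scrub-queue removal failures above recur each time a new osdmap is consumed and the PGs are re-advanced: removing a PG that was never queued for scrub simply finds no entry, which is what "not registered w/ OSD" reports. A toy model of why this is benign (the names here are invented for illustration, not Ceph's):

    # Hedged sketch: removal from an empty scrub queue fails harmlessly.
    scrub_queue: set[str] = set()          # no PGs currently queued

    def remove_from_queue(pgid: str) -> bool:
        try:
            scrub_queue.remove(pgid)
            return True
        except KeyError:
            return False                   # "not registered w/ OSD"

    print(remove_from_queue("4.b"))        # False, matching the log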
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 120283136 unmapped: 55721984 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561684c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 133 ms_handle_reset con 0x564561684c00 session 0x56455fa923c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 133 ms_handle_reset con 0x56456153b800 session 0x564565e92000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564566e9dc00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:43.746641+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 120315904 unmapped: 55689216 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b6e9f000/0x0/0x1bfc00000, data 0x4715d70/0x47ed000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:44.746796+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1432138 data_alloc: 301989888 data_used: 15462400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 120381440 unmapped: 55623680 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 134 ms_handle_reset con 0x564566e9dc00 session 0x564565e93e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d00000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 134 ms_handle_reset con 0x564560d00000 session 0x5645644ec1e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456004ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:45.746929+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 134 ms_handle_reset con 0x56456004ac00 session 0x5645644ec3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 120414208 unmapped: 55590912 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 134 ms_handle_reset con 0x56456153b800 session 0x5645644ec780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d01c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:46.747051+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 134 ms_handle_reset con 0x564560d01c00 session 0x5645644ed2c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122036224 unmapped: 53968896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:47.747259+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122077184 unmapped: 53927936 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560053400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.054178238s of 10.936134338s, submitted: 166
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:48.747452+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 134 ms_handle_reset con 0x564560053400 session 0x5645644edc20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456139d400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122085376 unmapped: 53919744 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 134 ms_handle_reset con 0x56456139d400 session 0x564566a64d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 134 heartbeat osd_stat(store_statfs(0x1b6677000/0x0/0x1bfc00000, data 0x4f3fc63/0x5016000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:49.747640+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1434131 data_alloc: 301989888 data_used: 15462400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122093568 unmapped: 53911552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:50.747763+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122093568 unmapped: 53911552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:51.747926+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122093568 unmapped: 53911552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:52.748089+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122093568 unmapped: 53911552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc45400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 135 ms_handle_reset con 0x56455fc45400 session 0x564565e92780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:53.748291+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122101760 unmapped: 53903360 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 135 heartbeat osd_stat(store_statfs(0x1b6e9a000/0x0/0x1bfc00000, data 0x4719ebd/0x47f3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:54.748419+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1440115 data_alloc: 301989888 data_used: 15474688
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122101760 unmapped: 53903360 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a5800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 135 ms_handle_reset con 0x5645629a5800 session 0x564565e93c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 135 ms_handle_reset con 0x56456153ac00 session 0x56455ec8a960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:55.748553+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122109952 unmapped: 53895168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 135 heartbeat osd_stat(store_statfs(0x1b6e9b000/0x0/0x1bfc00000, data 0x4719ebd/0x47f3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:56.748724+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122109952 unmapped: 53895168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:57.748898+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122109952 unmapped: 53895168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:58.749110+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122109952 unmapped: 53895168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:59.749294+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560055400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 135 heartbeat osd_stat(store_statfs(0x1b6e9b000/0x0/0x1bfc00000, data 0x4719e5b/0x47f2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.002696037s of 11.172268867s, submitted: 53
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1439962 data_alloc: 301989888 data_used: 15474688
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 135 ms_handle_reset con 0x564560055400 session 0x564562914d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122109952 unmapped: 53895168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:00.749459+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122109952 unmapped: 53895168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560053400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:01.749588+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 136 ms_handle_reset con 0x564560053400 session 0x56455ff7c780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122109952 unmapped: 53895168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:02.749788+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 136 heartbeat osd_stat(store_statfs(0x1b6e96000/0x0/0x1bfc00000, data 0x471c190/0x47f7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122109952 unmapped: 53895168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc45400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:03.749987+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 136 heartbeat osd_stat(store_statfs(0x1b6e96000/0x0/0x1bfc00000, data 0x471c16d/0x47f6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 136 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 136 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 136 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 136 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122126336 unmapped: 53878784 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 137 ms_handle_reset con 0x56455fc45400 session 0x56455ff7e5a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560055400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:04.750179+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 137 ms_handle_reset con 0x564560055400 session 0x564560d1f0e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1447943 data_alloc: 301989888 data_used: 15486976
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 122150912 unmapped: 53854208 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:05.750347+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123273216 unmapped: 52731904 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:06.750506+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123273216 unmapped: 52731904 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 139 ms_handle_reset con 0x56456153ac00 session 0x564566a650e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:07.750669+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 139 heartbeat osd_stat(store_statfs(0x1b6e8a000/0x0/0x1bfc00000, data 0x4722b77/0x4802000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123305984 unmapped: 52699136 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 139 ms_handle_reset con 0x564562907c00 session 0x564560f372c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560c9b400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 139 heartbeat osd_stat(store_statfs(0x1b6e8a000/0x0/0x1bfc00000, data 0x4722b77/0x4802000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:08.751357+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 139 ms_handle_reset con 0x564560c9b400 session 0x564560f84960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123338752 unmapped: 52666368 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:09.751519+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1451610 data_alloc: 301989888 data_used: 15486976
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123338752 unmapped: 52666368 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:10.751672+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123338752 unmapped: 52666368 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.974517822s of 11.203431129s, submitted: 58
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 140 heartbeat osd_stat(store_statfs(0x1b6e8c000/0x0/0x1bfc00000, data 0x4722b77/0x4802000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:11.752078+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123338752 unmapped: 52666368 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:12.752314+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123338752 unmapped: 52666368 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:13.752526+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123338752 unmapped: 52666368 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:14.752673+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1455812 data_alloc: 301989888 data_used: 15499264
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123346944 unmapped: 52658176 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:15.763034+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123346944 unmapped: 52658176 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 140 heartbeat osd_stat(store_statfs(0x1b6e87000/0x0/0x1bfc00000, data 0x4724d8b/0x4806000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 140 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 140 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 140 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:16.763312+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123387904 unmapped: 52617216 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b6e83000/0x0/0x1bfc00000, data 0x4726f83/0x480a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:17.763445+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123428864 unmapped: 52576256 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:18.763717+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123428864 unmapped: 52576256 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Got map version 49
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
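The mgrc lines record receipt of a new mgr map (version 49) whose active mgr is advertised as an addrvec carrying both a msgr2 (v2:) and a legacy (v1:) endpoint. A rough parse of that addrvec string, approximate and for illustration only:

    # Hedged sketch: split the mgr addrvec from the line above into its
    # protocol, host, port, and nonce components.
    addrvec = ("[v2:172.18.0.106:6810/1193881100,"
               "v1:172.18.0.106:6811/1193881100]")
    for entry in addrvec.strip("[]").split(","):
        proto, rest = entry.split(":", 1)
        host, port_nonce = rest.rsplit(":", 1)
        port, nonce = port_nonce.split("/")
        print(proto, host, port, nonce)

Both entries point at the same mgr daemon (same nonce-bearing process) on adjacent ports, one per messenger protocol.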
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:19.763926+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1459998 data_alloc: 301989888 data_used: 15511552
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123658240 unmapped: 52346880 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:20.764355+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123658240 unmapped: 52346880 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.202172279s of 10.278817177s, submitted: 36
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:21.764666+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b6e7e000/0x0/0x1bfc00000, data 0x472cf83/0x4810000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123658240 unmapped: 52346880 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:22.764808+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123674624 unmapped: 52330496 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:23.765158+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123674624 unmapped: 52330496 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:24.765494+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1461862 data_alloc: 301989888 data_used: 15511552
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123674624 unmapped: 52330496 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:25.765768+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123674624 unmapped: 52330496 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:26.766078+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123740160 unmapped: 52264960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b6e73000/0x0/0x1bfc00000, data 0x47362f5/0x481b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:27.766229+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123740160 unmapped: 52264960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:28.766607+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123764736 unmapped: 52240384 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Got map version 50
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b6e6e000/0x0/0x1bfc00000, data 0x473b6f5/0x4820000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:29.766764+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1461040 data_alloc: 301989888 data_used: 15507456
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123846656 unmapped: 52158464 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:30.767017+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123887616 unmapped: 52117504 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.864096642s of 10.000542641s, submitted: 32
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:31.767171+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123887616 unmapped: 52117504 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:32.767333+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123887616 unmapped: 52117504 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:33.767489+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123887616 unmapped: 52117504 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b6e69000/0x0/0x1bfc00000, data 0x4741fbd/0x4825000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:34.767645+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1461712 data_alloc: 301989888 data_used: 15507456
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123887616 unmapped: 52117504 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:35.767847+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123887616 unmapped: 52117504 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:36.768059+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123912192 unmapped: 52092928 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b6e66000/0x0/0x1bfc00000, data 0x4744b0c/0x4828000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:37.768198+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123953152 unmapped: 52051968 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:38.768362+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123961344 unmapped: 52043776 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:39.768516+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1462816 data_alloc: 301989888 data_used: 15507456
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123961344 unmapped: 52043776 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561582800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 141 ms_handle_reset con 0x564561582800 session 0x564564838d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:40.768712+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123977728 unmapped: 52027392 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.911486626s of 10.363918304s, submitted: 24
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:41.768905+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560054800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123977728 unmapped: 52027392 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:42.769062+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 142 ms_handle_reset con 0x564560054800 session 0x5645629083c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 142 heartbeat osd_stat(store_statfs(0x1b6e5a000/0x0/0x1bfc00000, data 0x474f343/0x4834000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123985920 unmapped: 52019200 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 142 heartbeat osd_stat(store_statfs(0x1b6e5a000/0x0/0x1bfc00000, data 0x474f343/0x4834000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:43.769260+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 123985920 unmapped: 52019200 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561583800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 142 ms_handle_reset con 0x564561583800 session 0x5645629094a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e8c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:44.769446+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 ms_handle_reset con 0x5645611e8c00 session 0x56455fbc4780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1473659 data_alloc: 301989888 data_used: 15532032
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124002304 unmapped: 52002816 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:45.769586+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124026880 unmapped: 51978240 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564566e9cc00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 ms_handle_reset con 0x564566e9cc00 session 0x564560d1f680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560054800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e8c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 ms_handle_reset con 0x5645611e8c00 session 0x5645643334a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 ms_handle_reset con 0x564560054800 session 0x5645648394a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:46.769789+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124043264 unmapped: 51961856 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 ms_handle_reset con 0x564560052400 session 0x56455fa970e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456139e400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:47.770012+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 ms_handle_reset con 0x56456139e400 session 0x564563755860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124084224 unmapped: 51920896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:48.770246+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 ms_handle_reset con 0x56456153b800 session 0x56455fbc5860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124084224 unmapped: 51920896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 heartbeat osd_stat(store_statfs(0x1b6e47000/0x0/0x1bfc00000, data 0x475f850/0x4847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:49.770412+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1477843 data_alloc: 301989888 data_used: 15532032
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124084224 unmapped: 51920896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 ms_handle_reset con 0x564560052400 session 0x56455ec8f0e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560054800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 144 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:50.770548+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 144 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 144 ms_handle_reset con 0x564560054800 session 0x56455fa97860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124207104 unmapped: 51798016 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:51.770757+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124207104 unmapped: 51798016 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:52.770951+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 144 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.865207672s of 11.245396614s, submitted: 101
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124207104 unmapped: 51798016 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 144 heartbeat osd_stat(store_statfs(0x1b6e3b000/0x0/0x1bfc00000, data 0x4767c5d/0x4852000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:53.771189+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124207104 unmapped: 51798016 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:54.771346+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1478665 data_alloc: 301989888 data_used: 15544320
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124207104 unmapped: 51798016 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:55.771488+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124207104 unmapped: 51798016 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:56.771643+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124231680 unmapped: 51773440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:57.771801+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d01c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124231680 unmapped: 51773440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:58.772056+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124231680 unmapped: 51773440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 144 heartbeat osd_stat(store_statfs(0x1b6e2f000/0x0/0x1bfc00000, data 0x47754a6/0x485f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:59.772223+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1483707 data_alloc: 301989888 data_used: 15544320
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124231680 unmapped: 51773440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:00.772382+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124231680 unmapped: 51773440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:01.772536+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124231680 unmapped: 51773440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:02.772698+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124231680 unmapped: 51773440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:03.772864+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 144 heartbeat osd_stat(store_statfs(0x1b6e2f000/0x0/0x1bfc00000, data 0x47759a8/0x485f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124231680 unmapped: 51773440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:04.773054+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1481211 data_alloc: 301989888 data_used: 15544320
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124231680 unmapped: 51773440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:05.773219+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124231680 unmapped: 51773440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 13.310228348s of 13.391004562s, submitted: 24
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:06.773361+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124321792 unmapped: 51683328 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 145 heartbeat osd_stat(store_statfs(0x1b6e22000/0x0/0x1bfc00000, data 0x477f61c/0x486b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:07.773509+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124321792 unmapped: 51683328 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:08.773681+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124321792 unmapped: 51683328 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:09.773826+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1487941 data_alloc: 301989888 data_used: 15556608
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124321792 unmapped: 51683328 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:10.774021+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124321792 unmapped: 51683328 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:11.774210+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124321792 unmapped: 51683328 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:12.774395+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 145 heartbeat osd_stat(store_statfs(0x1b6e21000/0x0/0x1bfc00000, data 0x4781aa0/0x486d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561685800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 145 ms_handle_reset con 0x564561685800 session 0x5645643321e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124321792 unmapped: 51683328 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:13.774528+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124321792 unmapped: 51683328 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 145 ms_handle_reset con 0x56455f4a2000 session 0x5645643325a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560c9bc00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 145 ms_handle_reset con 0x564560c9bc00 session 0x5645649bfc20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:14.774663+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1486862 data_alloc: 301989888 data_used: 15556608
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 124346368 unmapped: 51658752 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:15.774802+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125394944 unmapped: 50610176 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.835091591s of 10.000766754s, submitted: 82
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:16.775047+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125411328 unmapped: 50593792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:17.775259+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125411328 unmapped: 50593792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b6e18000/0x0/0x1bfc00000, data 0x47875c1/0x4875000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [0,0,0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:18.775440+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125411328 unmapped: 50593792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645612ed400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 ms_handle_reset con 0x5645612ed400 session 0x5645649be780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:19.775590+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b6e16000/0x0/0x1bfc00000, data 0x478a899/0x4878000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1492684 data_alloc: 301989888 data_used: 15581184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125411328 unmapped: 50593792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:20.775722+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125411328 unmapped: 50593792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:21.775865+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef91400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 ms_handle_reset con 0x56455ef91400 session 0x5645649bf860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560055c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 ms_handle_reset con 0x564560055c00 session 0x5645649bf4a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125452288 unmapped: 50552832 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:22.776049+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b6e16000/0x0/0x1bfc00000, data 0x478a8ed/0x4878000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125452288 unmapped: 50552832 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:23.776221+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d01400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 ms_handle_reset con 0x564560d01400 session 0x5645649be3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125452288 unmapped: 50552832 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:24.776341+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b6e12000/0x0/0x1bfc00000, data 0x478e37b/0x487c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1493564 data_alloc: 301989888 data_used: 15581184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125452288 unmapped: 50552832 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:25.776471+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125452288 unmapped: 50552832 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.891856194s of 10.004553795s, submitted: 30
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564845000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 ms_handle_reset con 0x564564845000 session 0x5645649bf680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef91400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:26.776614+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 ms_handle_reset con 0x56455ef91400 session 0x5645649bfa40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125501440 unmapped: 50503680 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:27.776840+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125558784 unmapped: 50446336 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:28.777058+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125558784 unmapped: 50446336 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:29.777251+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b6dfe000/0x0/0x1bfc00000, data 0x47a0fff/0x4890000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1497305 data_alloc: 301989888 data_used: 15581184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b6dfe000/0x0/0x1bfc00000, data 0x47a0fff/0x4890000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125558784 unmapped: 50446336 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:30.777406+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125558784 unmapped: 50446336 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:31.777574+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125566976 unmapped: 50438144 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:32.777752+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b6dfe000/0x0/0x1bfc00000, data 0x47a1466/0x4890000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125566976 unmapped: 50438144 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:33.777995+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125566976 unmapped: 50438144 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:34.778152+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1498887 data_alloc: 301989888 data_used: 15581184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125566976 unmapped: 50438144 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:35.778334+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b6dea000/0x0/0x1bfc00000, data 0x47b454f/0x48a4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125517824 unmapped: 50487296 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.805922508s of 10.001557350s, submitted: 46
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:36.778570+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 125526016 unmapped: 50479104 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:37.778815+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126582784 unmapped: 49422336 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:38.779089+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126582784 unmapped: 49422336 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:39.779237+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 148 heartbeat osd_stat(store_statfs(0x1b6dda000/0x0/0x1bfc00000, data 0x47c0493/0x48b2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1509105 data_alloc: 301989888 data_used: 15593472
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126590976 unmapped: 49414144 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 148 heartbeat osd_stat(store_statfs(0x1b6dc7000/0x0/0x1bfc00000, data 0x47d2982/0x48c7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:40.779514+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126590976 unmapped: 49414144 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:41.779685+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126566400 unmapped: 49438720 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:42.779845+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126566400 unmapped: 49438720 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:43.780012+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126566400 unmapped: 49438720 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:44.780173+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1504455 data_alloc: 301989888 data_used: 15593472
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126566400 unmapped: 49438720 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:45.780338+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 149 handle_osd_map epochs [149,149], i have 149, src has [1,149]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126640128 unmapped: 49364992 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.643765450s of 10.046680450s, submitted: 123
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 149 heartbeat osd_stat(store_statfs(0x1b6dad000/0x0/0x1bfc00000, data 0x47e9bef/0x48e0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:46.780493+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126664704 unmapped: 49340416 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 150 heartbeat osd_stat(store_statfs(0x1b6da6000/0x0/0x1bfc00000, data 0x47eeed6/0x48e5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:47.780646+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126722048 unmapped: 49283072 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:48.780819+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 151 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126738432 unmapped: 49266688 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:49.781008+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1520067 data_alloc: 301989888 data_used: 15605760
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126738432 unmapped: 49266688 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:50.781164+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126738432 unmapped: 49266688 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:51.781329+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126738432 unmapped: 49266688 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:52.781557+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126738432 unmapped: 49266688 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 151 heartbeat osd_stat(store_statfs(0x1b6d85000/0x0/0x1bfc00000, data 0x481271a/0x4909000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:53.781791+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126738432 unmapped: 49266688 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:54.782009+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1517613 data_alloc: 301989888 data_used: 15605760
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 126738432 unmapped: 49266688 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:55.782173+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 152 handle_osd_map epochs [152,152], i have 152, src has [1,152]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 152 handle_osd_map epochs [152,152], i have 152, src has [1,152]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127811584 unmapped: 48193536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:56.782331+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b6976000/0x0/0x1bfc00000, data 0x481e2e9/0x4917000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 152 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 152 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.438241005s of 10.705512047s, submitted: 119
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127836160 unmapped: 48168960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
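
The scrub-queue::remove_from_osd_queue burst above is the OSD tearing down placement groups (here pool 6) after an osdmap change: the PG being removed had no scrub job registered with the OSD's scrub queue, so the dequeue attempt reports "not registered w/ OSD", which is informational rather than an error. The doubled lines per PG (e.g. 6.16, 6.c) look like two dequeue attempts during the same teardown rather than a retry loop. A minimal Python sketch, assuming this journal has been saved to ceph-osd.log (the path and script are illustrative, not part of Ceph), to tally the messages per PG and distinguish duplicates from a storm:

    import re
    from collections import Counter

    # Illustrative helper (not part of Ceph): count scrub-queue dequeue
    # failures per PG so doubled lines can be told apart from a retry loop.
    PAT = re.compile(r"scrub-queue::remove_from_osd_queue removing pg\[([^\]]+)\] failed")

    def tally(lines):
        counts = Counter()
        for line in lines:
            m = PAT.search(line)
            if m:
                counts[m.group(1)] += 1
        return counts

    if __name__ == "__main__":
        with open("ceph-osd.log") as fh:   # assumed dump of this journal
            for pg, n in tally(fh).most_common():
                print(f"pg {pg}: {n} dequeue attempts")
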
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:57.782489+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127836160 unmapped: 48168960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
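
The recurring prioritycache tune_memory line is the OSD's memory autotuner at work: the target of 5709084876 bytes (~5.3 GiB, the effective memory target for this daemon) is compared against ~122 MiB of mapped heap, so there is no memory pressure and the cache budget is left unchanged (old mem == new mem == 4047415775). A quick sketch of the headroom arithmetic, using the numbers from the line above:

    # Values copied from the tune_memory line; the arithmetic is illustrative.
    target, mapped = 5_709_084_876, 127_836_160
    print(f"mapped heap is {mapped / target:.1%} of the memory target; "
          f"headroom {(target - mapped) / 2**30:.2f} GiB")
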
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:58.782699+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127836160 unmapped: 48168960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:59.782854+0000)
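
Each monclient tick (roughly once per second here) re-checks auth tickets and the rotating service keys. The timestamp in "_check_auth_rotating have uptodate secrets (they expire after ...)" reads as a freshness cutoff slightly in the past (roughly now minus a grace window), not as the keys' expiry: the keys are deemed up to date because they expire after that cutoff. Note also that the cutoffs advance by one second per tick while every journal line carries the same 10:29:20 stamp, consistent with a buffered burst of debug output being flushed to the journal at once. A small sketch, against the same assumed ceph-osd.log, extracting the cutoffs and checking they advance monotonically:

    import re
    from datetime import datetime

    # Pull the cutoff timestamp out of each _check_auth_rotating line.
    CUTOFF = re.compile(r"expire after (\S+?)\+0000")

    def cutoffs(lines):
        for line in lines:
            m = CUTOFF.search(line)
            if m:
                yield datetime.fromisoformat(m.group(1))

    with open("ceph-osd.log") as fh:
        prev = None
        for ts in cutoffs(fh):
            assert prev is None or ts >= prev, "cutoff went backwards"
            prev = ts
        print("cutoffs advance monotonically")
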
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1524289 data_alloc: 301989888 data_used: 15618048
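
The paired rocksdb "High Pri Pool Ratio" lines and the bluestore MempoolThread _resize_shards line are the periodic cache rebalance: with the budget settled at ~4.0 GB, _resize_shards shows how it is split between the RocksDB block cache (kv), onode metadata, general metadata, and data buffers, alongside actual usage. Allocation dwarfs usage here (kv_used is 2144 bytes of a ~1.7 GB kv_alloc), which matches a near-idle OSD. A sketch turning one of these lines into ratios (field names follow the log; the parsing itself is illustrative):

    import re

    # One _resize_shards payload copied from the log above.
    LINE = ("_resize_shards cache_size: 4047415775 kv_alloc: 1744830464 "
            "kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 "
            "meta_alloc: 1677721600 meta_used: 1524289 "
            "data_alloc: 301989888 data_used: 15618048")

    fields = dict(re.findall(r"(\w+): (\d+)", LINE))
    cache = int(fields["cache_size"])
    for name in ("kv_alloc", "kv_onode_alloc", "meta_alloc", "data_alloc"):
        print(f"{name}: {int(fields[name]) / cache:.1%} of cache")
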
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127836160 unmapped: 48168960 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc45400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 153 ms_handle_reset con 0x56455fc45400 session 0x5645649be960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:00.783037+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127860736 unmapped: 48144384 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:01.783184+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127860736 unmapped: 48144384 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc44000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 153 ms_handle_reset con 0x56455fc44000 session 0x56455ff781e0
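
The "handle_auth_request added challenge on 0x..." / "ms_handle_reset con 0x... session 0x..." pairs are the messenger side of short-lived peer connections: an incoming connection presents cephx credentials (the challenge is part of the cephx handshake), and ms_handle_reset fires when the remote end drops the connection, identified by connection and session pointers. Under debug logging a healthy OSD emits these continuously as mons, mgrs, and peer OSDs reconnect; note that pointers are reused, so equal addresses do not imply the same TCP connection. A sketch pairing challenges to resets per connection pointer, using the same assumed log file:

    import re
    from collections import Counter

    CHAL = re.compile(r"handle_auth_request added challenge on (0x[0-9a-f]+)")
    RESET = re.compile(r"ms_handle_reset con (0x[0-9a-f]+)")

    chal, reset = Counter(), Counter()
    with open("ceph-osd.log") as fh:
        for line in fh:
            if (m := CHAL.search(line)):
                chal[m.group(1)] += 1
            elif (m := RESET.search(line)):
                reset[m.group(1)] += 1

    for con in sorted(set(chal) | set(reset)):
        print(f"{con}: {chal[con]} challenges, {reset[con]} resets")
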
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:02.783350+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b696d000/0x0/0x1bfc00000, data 0x4827694/0x4921000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
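
The heartbeat osd_stat line embeds a store_statfs triple in hex. Reading the first triple as available/internally-reserved/total bytes (the order used by Ceph's store_statfs_t printer, though the format can vary between releases), 0x1bfc00000 makes this a ~7.0 GiB OSD with ~6.85 GiB available; the "data x/y" pair is bytes stored versus bytes allocated for them. A sketch decoding the triple under that interpretation:

    import re

    # First three hex fields of store_statfs: available / reserved / total
    # (field order is an assumption based on the store_statfs_t printer).
    STATFS = re.compile(r"store_statfs\((0x[0-9a-f]+)/(0x[0-9a-f]+)/(0x[0-9a-f]+)")

    line = ("osd.4 153 heartbeat osd_stat(store_statfs(0x1b696d000/0x0/"
            "0x1bfc00000, data 0x4827694/0x4921000, ...))")
    avail, reserved, total = (int(x, 16) for x in STATFS.search(line).groups())
    gib = 2 ** 30
    print(f"total {total/gib:.2f} GiB, available {avail/gib:.2f} GiB, "
          f"reserved {reserved/gib:.2f} GiB")
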
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127877120 unmapped: 48128000 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:03.783560+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127877120 unmapped: 48128000 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b696b000/0x0/0x1bfc00000, data 0x4827706/0x4923000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:04.783710+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564566e9d000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1528645 data_alloc: 301989888 data_used: 15618048
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 153 ms_handle_reset con 0x564566e9d000 session 0x564562704000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d00800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127877120 unmapped: 48128000 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:05.783851+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 153 ms_handle_reset con 0x564560d00800 session 0x5645627043c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
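
"handle_osd_map epochs [153,154], i have 153, src has [1,154]" means a peer advertising osdmaps 1..154 delivered the increment up to epoch 154 while this OSD was at 153; the scrub-queue burst that follows is the map change being applied to pool 6's PGs. Later in this window the same epoch is reported several times (e.g. three "[156,156]" lines in a row), which is expected: multiple peers forward the same map and the duplicates are no-ops. A sketch tracking the advertised epoch so gaps or stalls stand out, against the same assumed log file:

    import re

    EPOCH = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)")

    latest = None
    with open("ceph-osd.log") as fh:
        for line in fh:
            m = EPOCH.search(line)
            if not m:
                continue
            first, last, have = map(int, m.groups())
            if latest is not None and first > latest + 1:
                print(f"gap: jumped from {latest} to {first}")
            latest = max(latest or 0, last)
    print(f"latest osdmap epoch seen: {latest}")
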
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d00800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 154 ms_handle_reset con 0x564560d00800 session 0x564562704780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc44000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127598592 unmapped: 48406528 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 154 ms_handle_reset con 0x56455fc44000 session 0x564562704b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:06.784098+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127598592 unmapped: 48406528 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561684400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.078940392s of 10.536138535s, submitted: 138
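
The _kv_sync_thread utilization line is bluestore's RocksDB commit thread summarizing its last reporting window: idle 10.078940392 s of 10.536138535 s means the thread spent about 95.7% of the window waiting, with 138 transactions submitted, i.e. a near-idle write path. The arithmetic, as a one-liner:

    # Numbers copied from the _kv_sync_thread line above.
    idle, window, submitted = 10.078940392, 10.536138535, 138
    print(f"kv sync thread idle {idle / window:.1%}, "
          f"{submitted / window:.1f} txns/s")
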
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:07.784285+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127606784 unmapped: 48398336 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 155 ms_handle_reset con 0x564561684400 session 0x564562704d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:08.784563+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b5c58000/0x0/0x1bfc00000, data 0x5534728/0x5636000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127614976 unmapped: 48390144 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:09.784764+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564566e9d000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 155 ms_handle_reset con 0x564566e9d000 session 0x5645627052c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153f000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1645290 data_alloc: 301989888 data_used: 15638528
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127623168 unmapped: 48381952 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 155 ms_handle_reset con 0x56456153f000 session 0x564562705680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:10.784952+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f13f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127631360 unmapped: 48373760 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 155 ms_handle_reset con 0x56455f13f800 session 0x564562705c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 155 ms_handle_reset con 0x56455e95a400 session 0x564565e76000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:11.785173+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127647744 unmapped: 48357376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:12.785332+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153a400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 155 ms_handle_reset con 0x56456153a400 session 0x564565e761e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127647744 unmapped: 48357376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:13.785533+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e8000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127811584 unmapped: 48193536 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:14.785724+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 155 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e8400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x5645611e8400 session 0x5645627050e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x5645611e8000 session 0x564565e765a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5c47000/0x0/0x1bfc00000, data 0x5545df9/0x5647000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1654203 data_alloc: 301989888 data_used: 15659008
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127795200 unmapped: 48209920 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:15.785870+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e8000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x5645611e8000 session 0x564565e76b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 127819776 unmapped: 48185344 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x56455e95a400 session 0x564565e76d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:16.786067+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5c35000/0x0/0x1bfc00000, data 0x55559a1/0x5658000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128876544 unmapped: 47128576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:17.786240+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5c35000/0x0/0x1bfc00000, data 0x5555991/0x5657000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128876544 unmapped: 47128576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:18.786556+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a4000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.008294106s of 11.537883759s, submitted: 138
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x5645629a4000 session 0x564565e770e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x56455e95b800 session 0x564565e772c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128876544 unmapped: 47128576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:19.786757+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1652277 data_alloc: 301989888 data_used: 15659008
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128876544 unmapped: 47128576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:20.787026+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5c36000/0x0/0x1bfc00000, data 0x55559a1/0x5658000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128876544 unmapped: 47128576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:21.787232+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128876544 unmapped: 47128576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:22.787417+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128876544 unmapped: 47128576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:23.787579+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128933888 unmapped: 47071232 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:24.787818+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1656185 data_alloc: 301989888 data_used: 15659008
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128933888 unmapped: 47071232 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5c1c000/0x0/0x1bfc00000, data 0x556ebe6/0x5672000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:25.788060+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128942080 unmapped: 47063040 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:26.788256+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5c0e000/0x0/0x1bfc00000, data 0x557cd16/0x5680000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128942080 unmapped: 47063040 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:27.788415+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562864000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x564562864000 session 0x564565e77680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5bf6000/0x0/0x1bfc00000, data 0x55928dd/0x5697000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128958464 unmapped: 47046656 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:28.788613+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128958464 unmapped: 47046656 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562864000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.203467369s of 10.407424927s, submitted: 44
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5bf6000/0x0/0x1bfc00000, data 0x55928dd/0x5697000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [0,0,0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x564562864000 session 0x564565e77860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:29.788739+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x56455e95a400 session 0x564565e77e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1658683 data_alloc: 301989888 data_used: 15659008
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 128999424 unmapped: 47005696 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:30.788886+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130064384 unmapped: 45940736 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:31.789041+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5bec000/0x0/0x1bfc00000, data 0x559c311/0x56a2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130064384 unmapped: 45940736 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:32.789215+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130064384 unmapped: 45940736 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5bea000/0x0/0x1bfc00000, data 0x559eb96/0x56a4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:33.789419+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130121728 unmapped: 45883392 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:34.789560+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1664531 data_alloc: 301989888 data_used: 15659008
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130121728 unmapped: 45883392 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:35.789732+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560055000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x564560055000 session 0x564562704d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130195456 unmapped: 45809664 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:36.789910+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560054c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130195456 unmapped: 45809664 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:37.790044+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 ms_handle_reset con 0x564560052000 session 0x564565e76f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 157 ms_handle_reset con 0x564560054c00 session 0x564562705680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130220032 unmapped: 45785088 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:38.790287+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 157 heartbeat osd_stat(store_statfs(0x1b5ba8000/0x0/0x1bfc00000, data 0x55d7647/0x56e4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130351104 unmapped: 45654016 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560054c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.903853416s of 10.170388222s, submitted: 69
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:39.790422+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 157 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 158 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1686717 data_alloc: 301989888 data_used: 15683584
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 158 ms_handle_reset con 0x56455e95a400 session 0x564562915a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130375680 unmapped: 45629440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 158 ms_handle_reset con 0x564560054c00 session 0x564562705c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:40.790541+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.2 total, 600.0 interval
                                                          Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s
                                                          Cumulative WAL: 10K writes, 3073 syncs, 3.41 writes per sync, written: 0.04 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 5529 writes, 18K keys, 5529 commit groups, 1.0 writes per commit group, ingest: 16.77 MB, 0.03 MB/s
                                                          Interval WAL: 5529 writes, 2368 syncs, 2.33 writes per sync, written: 0.02 GB, 0.03 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
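
The periodic RocksDB stats dump (every 600 s; the daemon has been up 8400 s) confirms the same quiet picture from the database side: 5529 write batches in the last interval (~9.2 writes/s), 2.33 WAL writes per fsync, ~16.8 MB ingested, and zero write stalls. A sketch deriving the per-second rates from the interval line (numbers copied from the dump; the derivation is illustrative):

    interval_s = 600.0
    writes, syncs, ingest_mb = 5529, 2368, 16.77
    print(f"{writes / interval_s:.1f} writes/s, "
          f"{writes / syncs:.2f} writes per WAL sync, "
          f"{ingest_mb / interval_s * 1000:.0f} KB/s ingest")
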
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130392064 unmapped: 45613056 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:41.790689+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 159 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 159 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 159 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 159 ms_handle_reset con 0x56456153ac00 session 0x56455ec8eb40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564845c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 159 ms_handle_reset con 0x564560052000 session 0x564560ea2780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 159 heartbeat osd_stat(store_statfs(0x1b5b8f000/0x0/0x1bfc00000, data 0x55ed27b/0x56fe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130498560 unmapped: 45506560 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:42.790858+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 160 ms_handle_reset con 0x564564845c00 session 0x56455fa9d680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130498560 unmapped: 45506560 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:43.791052+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 160 ms_handle_reset con 0x56455e95a400 session 0x564560021c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560054c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 160 ms_handle_reset con 0x564560054c00 session 0x56455ff7f860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130531328 unmapped: 45473792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 160 ms_handle_reset con 0x56456153ac00 session 0x564563755a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:44.791217+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 161 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 161 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 161 ms_handle_reset con 0x564560052000 session 0x56455ff781e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1699209 data_alloc: 301989888 data_used: 15699968
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130539520 unmapped: 45465600 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:45.791411+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564845c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 161 ms_handle_reset con 0x564564845c00 session 0x5645649bf4a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564845c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 161 heartbeat osd_stat(store_statfs(0x1b5b5e000/0x0/0x1bfc00000, data 0x561c70b/0x572f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130695168 unmapped: 45309952 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:46.791568+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 162 ms_handle_reset con 0x564564845c00 session 0x5645649be3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 130342912 unmapped: 45662208 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 162 heartbeat osd_stat(store_statfs(0x1b5b3c000/0x0/0x1bfc00000, data 0x563b915/0x5750000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:47.791729+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e8400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 162 ms_handle_reset con 0x5645611e8400 session 0x5645649bf680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e8000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131596288 unmapped: 44408832 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:48.792003+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 163 ms_handle_reset con 0x5645611e8000 session 0x56455ff72f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131596288 unmapped: 44408832 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:49.792215+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 163 heartbeat osd_stat(store_statfs(0x1b5b26000/0x0/0x1bfc00000, data 0x56547e5/0x5767000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562865000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.915215492s of 10.791973114s, submitted: 225
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1705303 data_alloc: 301989888 data_used: 15712256
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 163 ms_handle_reset con 0x564562865000 session 0x564562c24000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a4c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131596288 unmapped: 44408832 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:50.792366+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 164 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 164 ms_handle_reset con 0x5645629a4c00 session 0x564562c241e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131768320 unmapped: 44236800 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:51.792557+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131768320 unmapped: 44236800 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:52.792749+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131768320 unmapped: 44236800 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:53.792908+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131817472 unmapped: 44187648 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:54.793041+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 164 ms_handle_reset con 0x564560052400 session 0x564562c245a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1710623 data_alloc: 301989888 data_used: 15724544
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131923968 unmapped: 44081152 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:55.793203+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 164 heartbeat osd_stat(store_statfs(0x1b496a000/0x0/0x1bfc00000, data 0x566fa34/0x5783000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 165 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131948544 unmapped: 44056576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:56.793357+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153b000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131997696 unmapped: 44007424 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 165 ms_handle_reset con 0x56456153b000 session 0x564562c24780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:57.793508+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131137536 unmapped: 44867584 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153a800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:58.793695+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456139e400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 166 ms_handle_reset con 0x56456153a800 session 0x5645648392c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131137536 unmapped: 44867584 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:59.793889+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 166 ms_handle_reset con 0x56455ef90000 session 0x564560d1e000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 166 heartbeat osd_stat(store_statfs(0x1b4949000/0x0/0x1bfc00000, data 0x568d465/0x57a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564566e9d000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 167 ms_handle_reset con 0x564566e9d000 session 0x564563755e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 167 ms_handle_reset con 0x56456139e400 session 0x56455fc4f860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.654723167s of 10.004004478s, submitted: 121
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1723039 data_alloc: 301989888 data_used: 15749120
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131153920 unmapped: 44851200 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:00.794019+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 167 ms_handle_reset con 0x56455ef90000 session 0x56455fc4f0e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 167 ms_handle_reset con 0x564560052400 session 0x56455ff792c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560053000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131244032 unmapped: 44761088 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a4000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 168 ms_handle_reset con 0x5645629a4000 session 0x564566a641e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:01.794152+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 168 ms_handle_reset con 0x564560053000 session 0x5645600310e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131260416 unmapped: 44744704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 169 ms_handle_reset con 0x56455e95a800 session 0x564562c24b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:02.794316+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131276800 unmapped: 44728320 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:03.794541+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131276800 unmapped: 44728320 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:04.794743+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 169 handle_osd_map epochs [167,169], i have 169, src has [1,169]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 169 ms_handle_reset con 0x56455ef90000 session 0x564562c24d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1736543 data_alloc: 301989888 data_used: 15749120
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131284992 unmapped: 44720128 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:05.794989+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 169 heartbeat osd_stat(store_statfs(0x1b4923000/0x0/0x1bfc00000, data 0x56a807c/0x57ca000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 169 ms_handle_reset con 0x564560052400 session 0x564562c250e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 170 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 170 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 44695552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:06.795162+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f13f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 170 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 44695552 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 171 handle_osd_map epochs [170,171], i have 171, src has [1,171]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:07.795345+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 171 ms_handle_reset con 0x56455f13f800 session 0x564562c252c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456004a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc47c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 171 ms_handle_reset con 0x56455fc47c00 session 0x56455ff7cb40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 171 ms_handle_reset con 0x56455e95a800 session 0x564562908f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 131342336 unmapped: 44662784 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:08.795620+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 172 ms_handle_reset con 0x56455ef90000 session 0x56455ff72d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f13f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 172 ms_handle_reset con 0x56455f13f800 session 0x564562914000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132521984 unmapped: 43483136 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:09.795781+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 172 heartbeat osd_stat(store_statfs(0x1b48f2000/0x0/0x1bfc00000, data 0x56d0d13/0x57f9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 172 ms_handle_reset con 0x56456004a000 session 0x564562c25680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.800837517s of 10.077706337s, submitted: 88
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1750418 data_alloc: 301989888 data_used: 15753216
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 172 ms_handle_reset con 0x564560052400 session 0x564560031e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132530176 unmapped: 43474944 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:10.795930+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 172 ms_handle_reset con 0x564560052400 session 0x56455ff7da40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132505600 unmapped: 43499520 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561684400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 172 ms_handle_reset con 0x564561684400 session 0x564562c241e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:11.796048+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 173 ms_handle_reset con 0x56455f4a2000 session 0x564566a65e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e9400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 173 heartbeat osd_stat(store_statfs(0x1b48e3000/0x0/0x1bfc00000, data 0x56def7b/0x580a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132513792 unmapped: 43491328 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562865800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 173 ms_handle_reset con 0x5645611e9400 session 0x56455ff72f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:12.796196+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 173 ms_handle_reset con 0x564562865800 session 0x56455ff76000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 173 ms_handle_reset con 0x56455f4a2000 session 0x5645648394a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132521984 unmapped: 43483136 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:13.796372+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 174 handle_osd_map epochs [174,174], i have 174, src has [1,174]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 174 handle_osd_map epochs [174,174], i have 174, src has [1,174]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 174 ms_handle_reset con 0x56455ef90400 session 0x5645637550e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 174 heartbeat osd_stat(store_statfs(0x1b48d4000/0x0/0x1bfc00000, data 0x56ee17c/0x5818000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132530176 unmapped: 43474944 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:14.796554+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1753899 data_alloc: 301989888 data_used: 15802368
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132530176 unmapped: 43474944 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:15.796713+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132620288 unmapped: 43384832 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:16.796863+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 175 ms_handle_reset con 0x564560052400 session 0x5645649bf4a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a5c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 175 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 175 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 175 ms_handle_reset con 0x5645629a5c00 session 0x5645649be3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132669440 unmapped: 43335680 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:17.797037+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 175 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132759552 unmapped: 43245568 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:18.797256+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564566e9dc00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 175 ms_handle_reset con 0x564566e9dc00 session 0x56455ff7f860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132972544 unmapped: 43032576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:19.797424+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 175 heartbeat osd_stat(store_statfs(0x1b48af000/0x0/0x1bfc00000, data 0x571542a/0x583f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1761017 data_alloc: 301989888 data_used: 15818752
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.601909637s of 10.052778244s, submitted: 117
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 132980736 unmapped: 43024384 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:20.797587+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 176 handle_osd_map epochs [176,176], i have 176, src has [1,176]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 176 handle_osd_map epochs [176,176], i have 176, src has [1,176]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 176 heartbeat osd_stat(store_statfs(0x1b48a1000/0x0/0x1bfc00000, data 0x5724796/0x584d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134094848 unmapped: 41910272 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:21.797814+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561582800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 177 ms_handle_reset con 0x564561582800 session 0x56455fbc50e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 41893888 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:22.798063+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561684c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 178 ms_handle_reset con 0x564561684c00 session 0x564562704780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 178 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 178 ms_handle_reset con 0x56455ef90400 session 0x56455fa9d680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134160384 unmapped: 41844736 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:23.798256+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134160384 unmapped: 41844736 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:24.798428+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 178 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 179 handle_osd_map epochs [179,179], i have 179, src has [1,179]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561583800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 179 ms_handle_reset con 0x564561583800 session 0x56455ec8eb40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 179 heartbeat osd_stat(store_statfs(0x1b486d000/0x0/0x1bfc00000, data 0x574ec8e/0x5880000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1780534 data_alloc: 301989888 data_used: 15847424
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134193152 unmapped: 41811968 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:25.798609+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134217728 unmapped: 41787392 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:26.798748+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb92400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134217728 unmapped: 41787392 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:27.798874+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 181 ms_handle_reset con 0x56455fb92400 session 0x5645629092c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb92400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134250496 unmapped: 41754624 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 181 handle_osd_map epochs [181,181], i have 181, src has [1,181]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:28.799033+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 181 ms_handle_reset con 0x56455fb92400 session 0x564562704d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134258688 unmapped: 41746432 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:29.799231+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1787497 data_alloc: 301989888 data_used: 15855616
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134258688 unmapped: 41746432 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:30.799399+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 181 heartbeat osd_stat(store_statfs(0x1b485a000/0x0/0x1bfc00000, data 0x5760e32/0x5893000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 181 handle_osd_map epochs [182,182], i have 182, src has [1,182]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.756709099s of 10.233570099s, submitted: 176
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 181 handle_osd_map epochs [182,182], i have 182, src has [1,182]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135307264 unmapped: 40697856 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:31.799558+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135307264 unmapped: 40697856 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:32.799684+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135307264 unmapped: 40697856 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:33.799880+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135307264 unmapped: 40697856 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:34.800038+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1789419 data_alloc: 301989888 data_used: 15855616
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135307264 unmapped: 40697856 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:35.800221+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 183 heartbeat osd_stat(store_statfs(0x1b4839000/0x0/0x1bfc00000, data 0x577dd77/0x58b4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135315456 unmapped: 40689664 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:36.800411+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135315456 unmapped: 40689664 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:37.800563+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:38.800727+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135315456 unmapped: 40689664 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 183 heartbeat osd_stat(store_statfs(0x1b4837000/0x0/0x1bfc00000, data 0x577fa46/0x58b6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:39.800891+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135315456 unmapped: 40689664 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1793765 data_alloc: 301989888 data_used: 15867904
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:40.801129+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135315456 unmapped: 40689664 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.306642532s of 10.432556152s, submitted: 59
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:41.801302+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135323648 unmapped: 40681472 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:42.801471+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135323648 unmapped: 40681472 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:43.801618+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135323648 unmapped: 40681472 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 183 heartbeat osd_stat(store_statfs(0x1b4832000/0x0/0x1bfc00000, data 0x5785593/0x58bc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:44.801752+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135331840 unmapped: 40673280 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 183 ms_handle_reset con 0x56455f1d6400 session 0x564565e77e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1797267 data_alloc: 301989888 data_used: 15867904
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:45.801951+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135389184 unmapped: 40615936 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb93000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 183 ms_handle_reset con 0x56455fb93000 session 0x564565e774a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:46.802212+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135397376 unmapped: 40607744 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:47.802375+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135397376 unmapped: 40607744 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:48.802561+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135397376 unmapped: 40607744 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 183 ms_handle_reset con 0x56455e95a000 session 0x5645639623c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 183 heartbeat osd_stat(store_statfs(0x1b4811000/0x0/0x1bfc00000, data 0x57a53ca/0x58dd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:49.802714+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135495680 unmapped: 40509440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1802275 data_alloc: 301989888 data_used: 15867904
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:50.802867+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 183 heartbeat osd_stat(store_statfs(0x1b4807000/0x0/0x1bfc00000, data 0x57ae602/0x58e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135495680 unmapped: 40509440 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d7800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 183 ms_handle_reset con 0x56455f1d7800 session 0x56455ec8c3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.722873688s of 10.000324249s, submitted: 28
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:51.803061+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134799360 unmapped: 41205760 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 183 handle_osd_map epochs [183,184], i have 183, src has [1,184]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 handle_osd_map epochs [184,184], i have 184, src has [1,184]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 heartbeat osd_stat(store_statfs(0x1b47f3000/0x0/0x1bfc00000, data 0x57c03a5/0x58fa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:52.804479+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134897664 unmapped: 41107456 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560c9a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x564560c9a000 session 0x56455ec8f2c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x56455ef90000 session 0x564562705680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561685800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:53.804659+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 134938624 unmapped: 41066496 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x56455e95b800 session 0x564565e93a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x564561685800 session 0x56455ff785a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562864400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x564562864400 session 0x564565e770e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:54.804856+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135020544 unmapped: 40984576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x56455e95b800 session 0x564566a652c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:55.805031+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1805159 data_alloc: 301989888 data_used: 15880192
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560c9a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x56455ef90000 session 0x56455ff7c000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135020544 unmapped: 40984576 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x564560c9a000 session 0x564560f85680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561685800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x564561685800 session 0x564560f845a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 heartbeat osd_stat(store_statfs(0x1b47e2000/0x0/0x1bfc00000, data 0x57d52ee/0x590c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:56.805206+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135045120 unmapped: 40960000 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:57.805372+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135045120 unmapped: 40960000 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 heartbeat osd_stat(store_statfs(0x1b43da000/0x0/0x1bfc00000, data 0x57de1d3/0x5914000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a4c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x5645629a4c00 session 0x56455ec8b4a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:58.805556+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135061504 unmapped: 40943616 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a4c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 ms_handle_reset con 0x5645629a4c00 session 0x564564332960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:59.805754+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 135069696 unmapped: 40935424 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:00.806008+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1806854 data_alloc: 301989888 data_used: 15880192
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 136159232 unmapped: 39845888 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 185 handle_osd_map epochs [185,185], i have 185, src has [1,185]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.200223923s of 10.001218796s, submitted: 145
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:01.806135+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137207808 unmapped: 38797312 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:02.806249+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137207808 unmapped: 38797312 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 185 heartbeat osd_stat(store_statfs(0x1b3203000/0x0/0x1bfc00000, data 0x5811a82/0x594a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:03.806437+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137322496 unmapped: 38682624 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:04.806666+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137322496 unmapped: 38682624 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:05.806846+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1809864 data_alloc: 301989888 data_used: 15892480
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137322496 unmapped: 38682624 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:06.807030+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137322496 unmapped: 38682624 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 186 heartbeat osd_stat(store_statfs(0x1b3204000/0x0/0x1bfc00000, data 0x5811a82/0x594a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:07.807177+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137322496 unmapped: 38682624 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:08.807359+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137322496 unmapped: 38682624 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:09.807532+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137322496 unmapped: 38682624 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:10.807670+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1815274 data_alloc: 301989888 data_used: 15904768
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137322496 unmapped: 38682624 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.910212517s of 10.004390717s, submitted: 48
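The _kv_sync_thread utilization line is BlueStore's periodic self-report for its RocksDB commit thread: over the last ~10 s window it was idle 9.91 s of 10.00 s while committing 48 transactions, i.e. busy under 1% of the time. A quick way to turn such lines into a busy fraction (a sketch; field meanings are read directly off the message text):

```python
import re

# Compute the kv_sync_thread busy fraction from a utilization line.
UTIL_RE = re.compile(r"idle (?P<idle>[\d.]+)s of (?P<total>[\d.]+)s, submitted: (?P<n>\d+)")

def kv_sync_busy(line: str):
    m = UTIL_RE.search(line)
    if m is None:
        raise ValueError("not a _kv_sync_thread utilization line")
    idle, total, n = float(m["idle"]), float(m["total"]), int(m["n"])
    return 1.0 - idle / total, n

line = "_kv_sync_thread utilization: idle 9.910212517s of 10.004390717s, submitted: 48"
busy, n = kv_sync_busy(line)
print(f"busy {busy:.2%} of the interval, {n} transactions submitted")  # busy ~0.94%
```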
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:11.807818+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 38666240 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d00800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 186 ms_handle_reset con 0x564560d00800 session 0x5645629145a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:12.807978+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 38666240 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 186 heartbeat osd_stat(store_statfs(0x1b31ec000/0x0/0x1bfc00000, data 0x5828553/0x5962000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 186 heartbeat osd_stat(store_statfs(0x1b31ec000/0x0/0x1bfc00000, data 0x5828553/0x5962000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
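osd.4 publishes these heartbeat osd_stat(...) lines several times within this second (the two above are back-to-back and identical). The store_statfs() triple is hex byte counts; reading the first triple as available / internally reserved / total (an interpretation suggested by the magnitudes, not documented in the log itself), the values here work out to roughly 6.80 GiB available of a 7.00 GiB store. A small converter under that assumption:

```python
import re

# Sketch: convert the leading hex triple of store_statfs(...) into GiB.
# Field order (available / internally reserved / total) is an assumption
# based on how the values in this log read.
STATFS_RE = re.compile(r"store_statfs\(0x([0-9a-f]+)/0x([0-9a-f]+)/0x([0-9a-f]+)")

def statfs_gib(line: str):
    avail, reserved, total = (int(x, 16) for x in STATFS_RE.search(line).groups())
    gib = 2**30
    return avail / gib, reserved / gib, total / gib

line = "osd.4 186 heartbeat osd_stat(store_statfs(0x1b31ec000/0x0/0x1bfc00000, ..."
print("avail %.2f GiB, reserved %.2f GiB, total %.2f GiB" % statfs_gib(line))
# avail 6.80 GiB, reserved 0.00 GiB, total 7.00 GiB
```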
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:13.808170+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 38666240 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:14.808320+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 38666240 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 186 heartbeat osd_stat(store_statfs(0x1b31ec000/0x0/0x1bfc00000, data 0x5828553/0x5962000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:15.808495+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1817198 data_alloc: 301989888 data_used: 15904768
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 38666240 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
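Each of these bursts follows a handle_osd_map line: while reacting to the new epoch, osd.4 tries to deregister PGs from its scrub queue and logs one removal failure per PG that, per the message itself, was "not registered w/ OSD". The same pool-6 PG ids (6.2, 6.3, 6.7, 6.a, ...) recur on every epoch, so the bursts are noisy but uniform. To condense a capture like this into one line per PG, a throwaway filter (the journalctl unit name is a guess for this containerized deployment; any way of feeding the text on stdin works):

```python
import re
import sys
from collections import Counter

# Throwaway filter: count the repeated scrub-queue removal failures per PG.
# Usage (unit name is an assumption):
#   journalctl -u ceph-osd@4 --no-pager | python3 scrub_noise.py
PG_RE = re.compile(r"remove_from_osd_queue removing pg\[([0-9a-f.]+)\] failed")

counts = Counter(m.group(1) for m in map(PG_RE.search, sys.stdin) if m)
for pg, n in counts.most_common():
    print(f"pg {pg}: {n}x 'not registered w/ OSD'")
```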
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 ms_handle_reset con 0x56455fc46400 session 0x564565e92780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a5c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 handle_osd_map epochs [187,187], i have 187, src has [1,187]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 handle_osd_map epochs [187,187], i have 187, src has [1,187]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 ms_handle_reset con 0x5645629a5c00 session 0x56455ff763c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:16.808724+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137347072 unmapped: 38658048 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:17.808919+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137347072 unmapped: 38658048 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560055400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 handle_osd_map epochs [187,187], i have 187, src has [1,187]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 ms_handle_reset con 0x564560055400 session 0x56456383c960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:18.809188+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137347072 unmapped: 38658048 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:19.809328+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137355264 unmapped: 38649856 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 ms_handle_reset con 0x56455fc46400 session 0x5645649bed20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d00800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 ms_handle_reset con 0x564560d00800 session 0x56455ff7d680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:20.809516+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1821717 data_alloc: 301989888 data_used: 15917056
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137379840 unmapped: 38625280 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 heartbeat osd_stat(store_statfs(0x1b31db000/0x0/0x1bfc00000, data 0x5835e74/0x5972000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.824007988s of 10.000343323s, submitted: 67
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:21.809671+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137388032 unmapped: 38617088 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:22.809834+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137396224 unmapped: 38608896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 heartbeat osd_stat(store_statfs(0x1b31d1000/0x0/0x1bfc00000, data 0x584179a/0x597d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 ms_handle_reset con 0x56456153ac00 session 0x564565e76000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:23.810035+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137396224 unmapped: 38608896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:24.810182+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137396224 unmapped: 38608896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 187 handle_osd_map epochs [187,188], i have 187, src has [1,188]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 188 ms_handle_reset con 0x56455e95a000 session 0x564563962000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:25.810323+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1823875 data_alloc: 301989888 data_used: 15929344
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137404416 unmapped: 38600704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 188 handle_osd_map epochs [188,188], i have 188, src has [1,188]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:26.810467+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d7400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137404416 unmapped: 38600704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 189 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 189 ms_handle_reset con 0x56455f1d7400 session 0x564563754b40
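The handle_osd_map lines trace the OSD catching up on the cluster map: within this one second osd.4 walks from epoch 185 ("i have 185") up through 189 here, and on to 197 later in the same capture, each message naming the offered range and the epoch it currently holds. The identical back-to-back lines (e.g. the repeated [187,187] pairs above) are the same map arriving over different connections. Pulling the epochs out makes the progression easy to eyeball (sketch, pattern copied from the log):

```python
import re
import sys

# Sketch: print the epoch the OSD holds vs. the range each message offers.
# Pipe the journal text in on stdin.
MAP_RE = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)")

for line in sys.stdin:
    m = MAP_RE.search(line)
    if m:
        first, last, have = map(int, m.groups())
        print(f"offered [{first},{last}], have {have}, behind by {max(0, last - have)}")
```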
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:27.810623+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137404416 unmapped: 38600704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d7400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 189 ms_handle_reset con 0x56455f1d7400 session 0x564566a64b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 189 heartbeat osd_stat(store_statfs(0x1b31b9000/0x0/0x1bfc00000, data 0x5853928/0x5994000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:28.810817+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137404416 unmapped: 38600704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 189 ms_handle_reset con 0x56455e95a000 session 0x564560c985a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:29.811025+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 190 ms_handle_reset con 0x56455fc46400 session 0x564563754d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d00800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137404416 unmapped: 38600704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 191 ms_handle_reset con 0x564560d00800 session 0x564564839680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:30.811219+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 191 handle_osd_map epochs [190,191], i have 191, src has [1,191]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 191 handle_osd_map epochs [190,191], i have 191, src has [1,191]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1838153 data_alloc: 301989888 data_used: 15929344
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137396224 unmapped: 38608896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 191 ms_handle_reset con 0x56456153ac00 session 0x564563755c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.811610222s of 10.026434898s, submitted: 61
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 191 handle_osd_map epochs [190,191], i have 191, src has [1,191]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 191 heartbeat osd_stat(store_statfs(0x1b319c000/0x0/0x1bfc00000, data 0x586d88b/0x59b1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 191 ms_handle_reset con 0x56456153ac00 session 0x5645634841e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:31.811438+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137396224 unmapped: 38608896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 191 heartbeat osd_stat(store_statfs(0x1b319c000/0x0/0x1bfc00000, data 0x586d8df/0x59b1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:32.811717+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137396224 unmapped: 38608896 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456004ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 191 ms_handle_reset con 0x56456004ac00 session 0x564562c25a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:33.812461+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137404416 unmapped: 38600704 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb92400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 192 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 192 ms_handle_reset con 0x56455fb92400 session 0x564562c25c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:34.812615+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137437184 unmapped: 38567936 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:35.812763+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1851278 data_alloc: 301989888 data_used: 15941632
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 137469952 unmapped: 38535168 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 194 heartbeat osd_stat(store_statfs(0x1b3172000/0x0/0x1bfc00000, data 0x5890e8a/0x59da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564844000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561685800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 194 ms_handle_reset con 0x564561685800 session 0x564560d1f680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 194 ms_handle_reset con 0x564564844000 session 0x564560f37860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a4400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 194 ms_handle_reset con 0x5645629a4400 session 0x564560020780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:36.812931+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138543104 unmapped: 37462016 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564844000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 194 ms_handle_reset con 0x564564844000 session 0x56455fc4eb40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb92400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 194 ms_handle_reset con 0x56455fb92400 session 0x564564333a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:37.813741+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456004ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138543104 unmapped: 37462016 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 194 ms_handle_reset con 0x56456004ac00 session 0x56455fbc5e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:38.813934+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138543104 unmapped: 37462016 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:39.814119+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138665984 unmapped: 37339136 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 194 heartbeat osd_stat(store_statfs(0x1b314b000/0x0/0x1bfc00000, data 0x58b6c9c/0x5a03000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:40.814312+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1854586 data_alloc: 301989888 data_used: 15949824
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138674176 unmapped: 37330944 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.724265099s of 10.007371902s, submitted: 105
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138674176 unmapped: 37330944 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:42.243813+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 196 heartbeat osd_stat(store_statfs(0x1b3147000/0x0/0x1bfc00000, data 0x58b92a7/0x5a06000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138690560 unmapped: 37314560 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d7400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 196 ms_handle_reset con 0x56455f1d7400 session 0x564560d1e1e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:43.244043+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb92400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138706944 unmapped: 37298176 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:44.244218+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 197 ms_handle_reset con 0x56455fb92400 session 0x564562c24000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 197 heartbeat osd_stat(store_statfs(0x1b3130000/0x0/0x1bfc00000, data 0x58caf46/0x5a1c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456004ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 197 ms_handle_reset con 0x56456004ac00 session 0x56455ff725a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a4400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138723328 unmapped: 37281792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:45.244363+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 197 ms_handle_reset con 0x5645629a4400 session 0x5645639621e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1865124 data_alloc: 301989888 data_used: 15962112
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 197 heartbeat osd_stat(store_statfs(0x1b3133000/0x0/0x1bfc00000, data 0x58caee4/0x5a1b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138723328 unmapped: 37281792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:46.244537+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562864000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 197 ms_handle_reset con 0x564562864000 session 0x564565e772c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456139e400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138723328 unmapped: 37281792 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:47.244731+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 197 ms_handle_reset con 0x56456139e400 session 0x56455ff781e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138739712 unmapped: 37265408 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456139e400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 197 ms_handle_reset con 0x56456139e400 session 0x564565e76d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:48.245051+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138911744 unmapped: 37093376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:49.245300+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138911744 unmapped: 37093376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 197 heartbeat osd_stat(store_statfs(0x1b40e7000/0x0/0x1bfc00000, data 0x58f7f73/0x5a47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:50.245471+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1872958 data_alloc: 301989888 data_used: 15974400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138911744 unmapped: 37093376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:51.245659+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138911744 unmapped: 37093376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:52.245810+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138911744 unmapped: 37093376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:53.246043+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138911744 unmapped: 37093376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 heartbeat osd_stat(store_statfs(0x1b40dc000/0x0/0x1bfc00000, data 0x5901bc2/0x5a52000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:54.246208+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138911744 unmapped: 37093376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:55.246448+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1873072 data_alloc: 301989888 data_used: 15974400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 138911744 unmapped: 37093376 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:56.246632+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 14.547604561s of 14.961909294s, submitted: 136
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e9400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 ms_handle_reset con 0x5645611e9400 session 0x564565e77a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 139051008 unmapped: 36954112 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 heartbeat osd_stat(store_statfs(0x1b40a4000/0x0/0x1bfc00000, data 0x5938cb4/0x5a8a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:57.246788+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 heartbeat osd_stat(store_statfs(0x1b40a4000/0x0/0x1bfc00000, data 0x5938cb4/0x5a8a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 ms_handle_reset con 0x564562907c00 session 0x56455ff79860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 139288576 unmapped: 36716544 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 heartbeat osd_stat(store_statfs(0x1b40a4000/0x0/0x1bfc00000, data 0x5938cb4/0x5a8a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:58.247055+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc44c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 ms_handle_reset con 0x56455fc44c00 session 0x564562c24d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 139304960 unmapped: 36700160 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:59.247228+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560055000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 ms_handle_reset con 0x564560055000 session 0x5645649be1e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560055000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 ms_handle_reset con 0x564560055000 session 0x564563754000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc44c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140369920 unmapped: 35635200 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:00.247372+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 ms_handle_reset con 0x56455fc44c00 session 0x564563484960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1884751 data_alloc: 301989888 data_used: 15974400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140378112 unmapped: 35627008 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645611e9400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:01.247473+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 ms_handle_reset con 0x56455f4a2000 session 0x564564332b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb93c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148824064 unmapped: 27181056 heap: 176005120 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561684400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560089400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 199 ms_handle_reset con 0x564561684400 session 0x564560ea2780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:02.247660+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 199 ms_handle_reset con 0x56455fb93c00 session 0x5645649bf2c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140746752 unmapped: 43655168 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:03.247808+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 200 ms_handle_reset con 0x56455f4a2000 session 0x56455ff73680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 200 heartbeat osd_stat(store_statfs(0x1b287c000/0x0/0x1bfc00000, data 0x715bbd7/0x72b1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 200 ms_handle_reset con 0x5645611e9400 session 0x56455ff7da40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140771328 unmapped: 43630592 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:04.248033+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140771328 unmapped: 43630592 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc44c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:05.248203+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2176237 data_alloc: 301989888 data_used: 15994880
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140771328 unmapped: 43630592 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:06.248355+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.248637199s of 10.000167847s, submitted: 152
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 201 ms_handle_reset con 0x56455fc44c00 session 0x56455ff76780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150388736 unmapped: 34013184 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:07.248517+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 201 heartbeat osd_stat(store_statfs(0x1b184b000/0x0/0x1bfc00000, data 0x81874bf/0x82e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140967936 unmapped: 43433984 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560c9a400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:08.248653+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc44000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 201 ms_handle_reset con 0x56455fc44000 session 0x5645629141e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 201 heartbeat osd_stat(store_statfs(0x1b1031000/0x0/0x1bfc00000, data 0x89a29d4/0x8afc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 141000704 unmapped: 43401216 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:09.248821+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 202 ms_handle_reset con 0x564560c9a400 session 0x56455ff79c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140443648 unmapped: 43958272 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 202 ms_handle_reset con 0x56455f4a2000 session 0x56455ec8e960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:10.249020+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2344382 data_alloc: 301989888 data_used: 16015360
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140476416 unmapped: 43925504 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:11.249192+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140476416 unmapped: 43925504 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:12.249339+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 140500992 unmapped: 43900928 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:13.249494+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 202 heartbeat osd_stat(store_statfs(0x1af829000/0x0/0x1bfc00000, data 0xa1a9e7d/0xa305000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562612800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 202 ms_handle_reset con 0x564562612800 session 0x564562908000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc49400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 141647872 unmapped: 42754048 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:14.249642+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 202 heartbeat osd_stat(store_statfs(0x1ae829000/0x0/0x1bfc00000, data 0xb1a9e7d/0xb305000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 202 ms_handle_reset con 0x56455fc49400 session 0x56455fbc4780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 141852672 unmapped: 42549248 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:15.249827+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562613c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 202 ms_handle_reset con 0x564562613c00 session 0x56455ff732c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2679735 data_alloc: 301989888 data_used: 16027648
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150536192 unmapped: 33865728 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:16.250062+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.973635674s of 10.000379562s, submitted: 189
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56456153f800 session 0x56455fbc41e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 143310848 unmapped: 41091072 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:17.250228+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455f4a2000 session 0x56455fbc5860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc49400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 143368192 unmapped: 41033728 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:18.250374+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455fc49400 session 0x564560d1fe00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56456153f800 session 0x56455fc4e3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 151822336 unmapped: 32579584 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:19.250566+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562612800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x564562612800 session 0x56456383d860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 143572992 unmapped: 40828928 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:20.250749+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 heartbeat osd_stat(store_statfs(0x1aa7ee000/0x0/0x1bfc00000, data 0xf1e607d/0xf340000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562613c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x564562613c00 session 0x56455ff72000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3007535 data_alloc: 301989888 data_used: 16027648
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 152002560 unmapped: 32399360 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:21.250940+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb93000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455fb93000 session 0x5645644ec960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 143753216 unmapped: 40648704 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:22.251142+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564844000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x564564844000 session 0x5645643330e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560088000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 143867904 unmapped: 40534016 heap: 184401920 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x564560088000 session 0x5645644ec1e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:23.251308+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645646d0000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564845400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x564564845400 session 0x5645649be960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x5645646d0000 session 0x564560ea32c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f13f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 144457728 unmapped: 43098112 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:24.251535+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 144482304 unmapped: 43073536 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:25.251864+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3424746 data_alloc: 301989888 data_used: 16031744
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 144482304 unmapped: 43073536 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 heartbeat osd_stat(store_statfs(0x1a64d1000/0x0/0x1bfc00000, data 0x135031c7/0x1365d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:26.252023+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 heartbeat osd_stat(store_statfs(0x1a64d1000/0x0/0x1bfc00000, data 0x135031c7/0x1365d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d7c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455f13f800 session 0x564566a654a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.807360649s of 10.547606468s, submitted: 81
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455f1d7c00 session 0x5645649be780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 144490496 unmapped: 43065344 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:27.252166+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455e95a000 session 0x564560f83e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562612000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x564562612000 session 0x564560f85860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560055800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 144515072 unmapped: 43040768 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:28.252311+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x564560055800 session 0x5645643325a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 heartbeat osd_stat(store_statfs(0x1a57a1000/0x0/0x1bfc00000, data 0x142331c7/0x1438d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 144572416 unmapped: 42983424 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:29.252497+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 144654336 unmapped: 42901504 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:30.252657+0000)
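[editor's note] One sanity check these monclient lines allow: the rotating-secret expiry reported by _check_auth_rotating advances by almost exactly one second per tick. A short sketch, with expiry strings copied from the lines above:

from datetime import datetime

# Successive expiry timestamps from the _check_auth_rotating lines above.
ts = ["2025-12-05T10:14:25.251864+0000",
      "2025-12-05T10:14:26.252023+0000",
      "2025-12-05T10:14:27.252166+0000"]
parsed = [datetime.strptime(t, "%Y-%m-%dT%H:%M:%S.%f%z") for t in ts]
for a, b in zip(parsed, parsed[1:]):
    print((b - a).total_seconds())  # 1.000159, 1.000143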
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455e95a000 session 0x56455fa970e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f13f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3690383 data_alloc: 301989888 data_used: 16027648
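[editor's note] The _resize_shards lines enumerate per-pool byte figures against an overall cache_size. Read as per-pool budgets (an assumption — the log itself only gives raw numbers), the split looks like this:

# Numbers copied from the _resize_shards line above; treating each *_alloc
# field as that pool's share of the cache budget is an interpretation, not
# something the log states.
cache_size = 4_047_415_775
alloc = {"kv": 1_744_830_464, "kv_onode": 285_212_672,
         "meta": 1_677_721_600, "data": 301_989_888}
for name, n in alloc.items():
    print(f"{name:9s} {n / cache_size:6.1%}")
print(f"allocated {sum(alloc.values()) / cache_size:6.1%} of cache_size")
# kv ~43.1%, kv_onode ~7.0%, meta ~41.5%, data ~7.5%; ~99.1% of cache_size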
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 144728064 unmapped: 42827776 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:31.252797+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455f13f800 session 0x56455ff78d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef91400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 heartbeat osd_stat(store_statfs(0x1a3fa1000/0x0/0x1bfc00000, data 0x15a331c7/0x15b8d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455ef91400 session 0x564560ea2d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:32.252972+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153272320 unmapped: 34283520 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56456153f800 session 0x56455ec8cd20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560d00c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:33.253129+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153378816 unmapped: 34177024 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x564560d00c00 session 0x5645600205a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:34.253348+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 145104896 unmapped: 42450944 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455e95a000 session 0x5645643321e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:35.253501+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 146317312 unmapped: 41238528 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef91400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455ef91400 session 0x564560f82b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f13f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 heartbeat osd_stat(store_statfs(0x1a02ae000/0x0/0x1bfc00000, data 0x1972423a/0x19880000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 ms_handle_reset con 0x56455f13f800 session 0x56456383c5a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 203 handle_osd_map epochs [203,204], i have 203, src has [1,204]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4071254 data_alloc: 301989888 data_used: 16039936
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:36.253646+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 146366464 unmapped: 41189376 heap: 187555840 old mem: 4047415775 new mem: 4047415775
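[editor's note] Across every tune_memory line in this window, mapped + unmapped sums exactly to heap, which suggests the tuner is reporting a partition of a fixed heap (mapped vs. returned-to-allocator) rather than three independent gauges. A check with three samples taken from lines in this section:

# (mapped, unmapped) pairs from tune_memory lines in this section; heap is
# constant at 187555840 throughout.
samples = [(144482304, 43073536), (146366464, 41189376), (148725760, 38830080)]
heap = 187555840
assert all(m + u == heap for m, u in samples)
print([f"{m / heap:.1%} mapped" for m, u in samples])
# ['77.0% mapped', '78.0% mapped', '79.3% mapped']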
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 204 ms_handle_reset con 0x56456153f800 session 0x564562915a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645629a5c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 204 ms_handle_reset con 0x5645629a5c00 session 0x5645637543c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.990674019s of 10.328284264s, submitted: 134
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:37.253839+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 154828800 unmapped: 32727040 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:38.253979+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 146481152 unmapped: 41074688 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 204 ms_handle_reset con 0x56455e95a000 session 0x5645637541e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef91400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:39.254178+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 146563072 unmapped: 40992768 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 205 ms_handle_reset con 0x56455ef91400 session 0x56455ff774a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:40.254329+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 147644416 unmapped: 39911424 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f13f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 205 ms_handle_reset con 0x56455f13f800 session 0x56455ff7e000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:41.254458+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4372482 data_alloc: 301989888 data_used: 16060416
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 156123136 unmapped: 31432704 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 205 heartbeat osd_stat(store_statfs(0x19e7d5000/0x0/0x1bfc00000, data 0x1b1f88f2/0x1b358000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 205 ms_handle_reset con 0x56456153f800 session 0x564565e932c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:42.254595+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 162078720 unmapped: 25477120 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645646d0400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 205 ms_handle_reset con 0x5645646d0400 session 0x564560030b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:43.254790+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 147931136 unmapped: 39624704 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 205 heartbeat osd_stat(store_statfs(0x19cfca000/0x0/0x1bfc00000, data 0x1ca05d5a/0x1cb64000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 205 heartbeat osd_stat(store_statfs(0x19c7ca000/0x0/0x1bfc00000, data 0x1d205d5a/0x1d364000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 205 handle_osd_map epochs [205,206], i have 205, src has [1,206]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 206 ms_handle_reset con 0x564560089400 session 0x56455ec8f4a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:44.254944+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148029440 unmapped: 39526400 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 206 ms_handle_reset con 0x56455e95a000 session 0x564563754f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:45.255124+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148029440 unmapped: 39526400 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef91400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
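[editor's note] The handle_osd_map lines encode three ranges: the epochs carried by the incoming message, the OSD's current epoch, and the span the sender holds. A hypothetical parse that turns one of the lines above into an epoch lag:

import re

# Line copied from the log above; the regex names are illustrative only.
LOG = "osd.4 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]"
m = re.search(r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]",
              LOG)
first, last, have, src_lo, src_hi = map(int, m.groups())
print(f"carrying maps {first}..{last}, local epoch {have}, "
      f"{src_hi - have} epoch(s) behind the sender")
# carrying maps 205..205, local epoch 204, 1 epoch(s) behind the sender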
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:46.255296+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4522402 data_alloc: 301989888 data_used: 16076800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f13f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148135936 unmapped: 39419904 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 207 ms_handle_reset con 0x56455f13f800 session 0x564566a64960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 207 handle_osd_map epochs [207,208], i have 207, src has [1,208]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 handle_osd_map epochs [208,208], i have 208, src has [1,208]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.273447037s of 10.014431000s, submitted: 110
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:47.255442+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 handle_osd_map epochs [206,208], i have 208, src has [1,208]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148201472 unmapped: 39354368 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x56455ef91400 session 0x5645627045a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153a400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x56456153a400 session 0x564563754960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 heartbeat osd_stat(store_statfs(0x1b1fa7000/0x0/0x1bfc00000, data 0x5a20907/0x5b84000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [0,0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:48.255585+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x56455e95a000 session 0x56455fa9d4a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148275200 unmapped: 39280640 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef91400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:49.255742+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x56455ef91400 session 0x5645644ed2c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f13f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148291584 unmapped: 39264256 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x56455f13f800 session 0x5645634850e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:50.255891+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148299776 unmapped: 39256064 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560089400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x564560089400 session 0x564565e761e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb93800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560054400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x564560054400 session 0x564564333c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x56455fb93800 session 0x56456383cd20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:51.256037+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2015386 data_alloc: 301989888 data_used: 16080896
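[editor's note] The two commit_cache_size ratios repeated throughout this section are close to the simple fractions 4/17 and 1/26; a two-line check:

from fractions import Fraction

# 0.235294 ~= 4/17 and 0.0384615 ~= 1/26, as logged above.
for r in (0.235294, 0.0384615):
    print(r, "~=", Fraction(r).limit_denominator(100))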
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148283392 unmapped: 39272448 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:52.256120+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148283392 unmapped: 39272448 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561684400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x564561684400 session 0x5645644ed0e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x56455f1d6400 session 0x56455ff76b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562865000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x564562907c00 session 0x56455fa923c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 ms_handle_reset con 0x564562865000 session 0x564566a64780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:53.256297+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148463616 unmapped: 39092224 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 heartbeat osd_stat(store_statfs(0x1b3f6b000/0x0/0x1bfc00000, data 0x5a60a89/0x5bc3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:54.256441+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148463616 unmapped: 39092224 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:55.256657+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148463616 unmapped: 39092224 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:56.256796+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2021089 data_alloc: 301989888 data_used: 16093184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148463616 unmapped: 39092224 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 heartbeat osd_stat(store_statfs(0x1b3f3b000/0x0/0x1bfc00000, data 0x5a8d031/0x5bf2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:57.256913+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148480000 unmapped: 39075840 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562864000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.967191696s of 10.697873116s, submitted: 204
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 ms_handle_reset con 0x564562864000 session 0x5645639630e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:58.257201+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148586496 unmapped: 38969344 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 ms_handle_reset con 0x56456153f800 session 0x564566a641e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:59.257419+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148275200 unmapped: 39280640 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 heartbeat osd_stat(store_statfs(0x1b3f3b000/0x0/0x1bfc00000, data 0x5a8cff8/0x5bf2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:00.257590+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 149331968 unmapped: 38223872 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:01.257920+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2025589 data_alloc: 301989888 data_used: 16093184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 149372928 unmapped: 38182912 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:02.258043+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 149389312 unmapped: 38166528 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 heartbeat osd_stat(store_statfs(0x1b3b09000/0x0/0x1bfc00000, data 0x5abe6e7/0x5c25000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:03.258209+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 149389312 unmapped: 38166528 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 ms_handle_reset con 0x56455fc46800 session 0x5645639632c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564844c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:04.258315+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 ms_handle_reset con 0x564564844c00 session 0x564560f82780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565494400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 ms_handle_reset con 0x564565494400 session 0x564562442960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148627456 unmapped: 38928384 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 ms_handle_reset con 0x56455fc46800 session 0x564562443e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:05.258876+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148635648 unmapped: 38920192 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:06.259143+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 heartbeat osd_stat(store_statfs(0x1b3b01000/0x0/0x1bfc00000, data 0x5ac9c8f/0x5c2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2025196 data_alloc: 301989888 data_used: 16093184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148635648 unmapped: 38920192 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 heartbeat osd_stat(store_statfs(0x1b3b01000/0x0/0x1bfc00000, data 0x5ac9c8f/0x5c2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:07.259478+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148635648 unmapped: 38920192 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 heartbeat osd_stat(store_statfs(0x1b3af8000/0x0/0x1bfc00000, data 0x5ad2496/0x5c36000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:08.259818+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148635648 unmapped: 38920192 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.811724663s of 11.070095062s, submitted: 69
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:09.260284+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148635648 unmapped: 38920192 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 heartbeat osd_stat(store_statfs(0x1b3ae5000/0x0/0x1bfc00000, data 0x5ae5b82/0x5c49000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:10.260567+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148692992 unmapped: 38862848 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:11.260751+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2028896 data_alloc: 301989888 data_used: 16093184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148774912 unmapped: 38780928 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:12.261066+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148774912 unmapped: 38780928 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:13.261302+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148774912 unmapped: 38780928 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:14.261450+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148602880 unmapped: 38952960 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:15.261655+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148611072 unmapped: 38944768 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 heartbeat osd_stat(store_statfs(0x1b3aa9000/0x0/0x1bfc00000, data 0x5b203d8/0x5c85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:16.261812+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2031676 data_alloc: 301989888 data_used: 16093184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148717568 unmapped: 38838272 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:17.262032+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148725760 unmapped: 38830080 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 210 heartbeat osd_stat(store_statfs(0x1b3a9e000/0x0/0x1bfc00000, data 0x5b28adf/0x5c8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:18.262262+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148725760 unmapped: 38830080 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:19.263314+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148725760 unmapped: 38830080 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:20.264205+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148725760 unmapped: 38830080 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.873185158s of 12.114329338s, submitted: 62
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:21.264656+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 210 heartbeat osd_stat(store_statfs(0x1b3a9e000/0x0/0x1bfc00000, data 0x5b28adf/0x5c8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2039406 data_alloc: 301989888 data_used: 16105472
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148725760 unmapped: 38830080 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:22.265088+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148848640 unmapped: 38707200 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 210 ms_handle_reset con 0x564562907400 session 0x564560f82000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:23.265253+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148848640 unmapped: 38707200 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 210 heartbeat osd_stat(store_statfs(0x1b3a72000/0x0/0x1bfc00000, data 0x5b55c0f/0x5cbc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:24.265457+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148848640 unmapped: 38707200 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:25.265653+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564566e9c000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 148848640 unmapped: 38707200 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 211 ms_handle_reset con 0x564566e9c000 session 0x564562c24b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:26.265883+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2043479 data_alloc: 301989888 data_used: 16121856
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645612ed400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 149905408 unmapped: 37650432 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 211 ms_handle_reset con 0x5645612ed400 session 0x5645649be780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:27.266184+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645658c8400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 149929984 unmapped: 37625856 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 212 ms_handle_reset con 0x5645658c8400 session 0x5645649be1e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:28.266485+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 212 heartbeat osd_stat(store_statfs(0x1b3a63000/0x0/0x1bfc00000, data 0x5b5cbe5/0x5cc9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 149929984 unmapped: 37625856 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645658c8400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 212 ms_handle_reset con 0x56455fc46800 session 0x5645649bf0e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 212 ms_handle_reset con 0x5645658c8400 session 0x5645649be960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 212 heartbeat osd_stat(store_statfs(0x1b3a62000/0x0/0x1bfc00000, data 0x5b5cc47/0x5cca000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:29.266709+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645612ed400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 149946368 unmapped: 37609472 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 213 ms_handle_reset con 0x5645612ed400 session 0x5645649bed20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 213 ms_handle_reset con 0x564562907400 session 0x56455fc4eb40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:30.266909+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564566e9c000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 213 ms_handle_reset con 0x564562907800 session 0x564563484000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 149970944 unmapped: 37584896 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 213 ms_handle_reset con 0x564566e9c000 session 0x564560ea2b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:31.267122+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.042254448s of 10.231460571s, submitted: 63
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645612ed400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2053364 data_alloc: 301989888 data_used: 16138240
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 213 ms_handle_reset con 0x5645612ed400 session 0x564560ea2780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565cb6400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150052864 unmapped: 37502976 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 214 ms_handle_reset con 0x56455fc46800 session 0x564560d1ef00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:32.267416+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150142976 unmapped: 37412864 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 215 ms_handle_reset con 0x564565cb6400 session 0x564560ea32c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565494000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:33.267680+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150192128 unmapped: 37363712 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 216 handle_osd_map epochs [215,216], i have 216, src has [1,216]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645658c9c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 216 ms_handle_reset con 0x5645658c9c00 session 0x564563962f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 216 ms_handle_reset con 0x564565494000 session 0x5645644ed2c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 216 ms_handle_reset con 0x56455fc46800 session 0x56455ff72b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:34.267867+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150249472 unmapped: 37306368 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 216 heartbeat osd_stat(store_statfs(0x1b3a40000/0x0/0x1bfc00000, data 0x5b7945a/0x5ced000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645612ed400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 216 ms_handle_reset con 0x5645612ed400 session 0x5645644ec1e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565cb6400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:35.268120+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150249472 unmapped: 37306368 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 217 ms_handle_reset con 0x564565cb6400 session 0x564562914f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:36.268409+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564566e9c000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 217 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2067463 data_alloc: 301989888 data_used: 16146432
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562612400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 217 ms_handle_reset con 0x564562612400 session 0x5645644ed4a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150323200 unmapped: 37232640 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 218 heartbeat osd_stat(store_statfs(0x1b3a2f000/0x0/0x1bfc00000, data 0x5b87b4a/0x5cfc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 218 ms_handle_reset con 0x564566e9c000 session 0x5645629141e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:37.268579+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 218 ms_handle_reset con 0x56455fc46800 session 0x5645637543c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150339584 unmapped: 37216256 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645612ed400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565494000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 218 ms_handle_reset con 0x564565494000 session 0x56455ff77680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:38.270057+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150347776 unmapped: 37208064 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 219 ms_handle_reset con 0x5645612ed400 session 0x564563754000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:39.270579+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150421504 unmapped: 37134336 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565cb6400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:40.271439+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150446080 unmapped: 37109760 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 220 ms_handle_reset con 0x564565cb6400 session 0x56455ec8c780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 220 heartbeat osd_stat(store_statfs(0x1b3a02000/0x0/0x1bfc00000, data 0x5babe0b/0x5d2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:41.272283+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.470438004s of 10.004314423s, submitted: 129
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 220 handle_osd_map epochs [220,221], i have 220, src has [1,221]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2101254 data_alloc: 301989888 data_used: 16158720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150560768 unmapped: 36995072 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Got map version 51
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 221 ms_handle_reset con 0x56456153ac00 session 0x56455ec8c000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153bc00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:42.272447+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150274048 unmapped: 37281792 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 222 ms_handle_reset con 0x56456153bc00 session 0x56455ff7e000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:43.272651+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 222 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565f75800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 222 ms_handle_reset con 0x564565f75800 session 0x5645600205a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565f75c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150478848 unmapped: 37076992 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 223 ms_handle_reset con 0x564565f75c00 session 0x56455fa970e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:44.272800+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456004a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 223 ms_handle_reset con 0x56456004a000 session 0x56455fc4ef00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 150511616 unmapped: 37044224 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 223 handle_osd_map epochs [223,224], i have 223, src has [1,224]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 224 ms_handle_reset con 0x56456153ac00 session 0x56455ff7cf00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 224 ms_handle_reset con 0x564562907400 session 0x56455ff72b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153bc00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:45.273013+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 151699456 unmapped: 35856384 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 224 heartbeat osd_stat(store_statfs(0x1b399e000/0x0/0x1bfc00000, data 0x5c0a001/0x5d8d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 225 ms_handle_reset con 0x56456153bc00 session 0x56455ff73680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:46.273189+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565f74c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2113673 data_alloc: 301989888 data_used: 16158720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 225 ms_handle_reset con 0x564565f74c00 session 0x564562c24d20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560053800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 151625728 unmapped: 35930112 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 226 ms_handle_reset con 0x564560053800 session 0x564562c25a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 226 ms_handle_reset con 0x56456153ac00 session 0x564562c24000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153bc00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:47.273339+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 151674880 unmapped: 35880960 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 227 ms_handle_reset con 0x56456153bc00 session 0x564560d1e1e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 227 handle_osd_map epochs [226,227], i have 227, src has [1,227]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:48.273592+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 151805952 unmapped: 35749888 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 227 ms_handle_reset con 0x564562907400 session 0x56455fa92b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565f74c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:49.273893+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 227 ms_handle_reset con 0x564565f74c00 session 0x5645649be1e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645646d0000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 227 ms_handle_reset con 0x5645646d0000 session 0x564565e76780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 151879680 unmapped: 35676160 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 227 ms_handle_reset con 0x56456153ac00 session 0x564560ea2f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:50.274040+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153bc00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 227 ms_handle_reset con 0x56456153bc00 session 0x564566a65e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 151904256 unmapped: 35651584 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 228 heartbeat osd_stat(store_statfs(0x1b396d000/0x0/0x1bfc00000, data 0x5c39f45/0x5dc1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:51.274228+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.158398628s of 10.000335693s, submitted: 245
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2126283 data_alloc: 301989888 data_used: 16166912
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 152027136 unmapped: 35528704 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562612000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 228 ms_handle_reset con 0x564562612000 session 0x564562909860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:52.274456+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 152035328 unmapped: 35520512 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645658c9400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 228 ms_handle_reset con 0x5645658c9400 session 0x564562c25860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565622000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562612800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 229 ms_handle_reset con 0x564562612800 session 0x564563962b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 229 ms_handle_reset con 0x564565622000 session 0x5645639634a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:53.274669+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 152100864 unmapped: 35454976 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ac00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 229 ms_handle_reset con 0x56456153ac00 session 0x564560031e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:54.274855+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153bc00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 152117248 unmapped: 35438592 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 230 ms_handle_reset con 0x56456153bc00 session 0x564562909e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:55.275002+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 230 handle_osd_map epochs [229,230], i have 230, src has [1,230]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 152166400 unmapped: 35389440 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:56.275203+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2145440 data_alloc: 301989888 data_used: 16179200
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 231 ms_handle_reset con 0x56455e95a800 session 0x5645629150e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153223168 unmapped: 34332672 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 231 ms_handle_reset con 0x56455ef90800 session 0x56455ff723c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 231 heartbeat osd_stat(store_statfs(0x1b38f0000/0x0/0x1bfc00000, data 0x5cae77a/0x5e3c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:57.275390+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153239552 unmapped: 34316288 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:58.275688+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153247744 unmapped: 34308096 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:59.275947+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 154394624 unmapped: 33161216 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 232 heartbeat osd_stat(store_statfs(0x1b38df000/0x0/0x1bfc00000, data 0x5cc0ddf/0x5e4f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:00.276380+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 232 heartbeat osd_stat(store_statfs(0x1b38df000/0x0/0x1bfc00000, data 0x5cc0ddf/0x5e4f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 232 ms_handle_reset con 0x56455ef90800 session 0x56455ff77e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 154402816 unmapped: 33153024 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:01.276573+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.345770836s of 10.000824928s, submitted: 204
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 233 heartbeat osd_stat(store_statfs(0x1b38df000/0x0/0x1bfc00000, data 0x5cc0ddf/0x5e4f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2152242 data_alloc: 301989888 data_used: 16191488
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 233 ms_handle_reset con 0x56455f1d6c00 session 0x564560ea3a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153ec00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153575424 unmapped: 33980416 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 233 ms_handle_reset con 0x56456153ec00 session 0x564564332f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:02.276789+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153657344 unmapped: 33898496 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:03.277104+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153657344 unmapped: 33898496 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:04.277219+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153665536 unmapped: 33890304 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561684400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 233 ms_handle_reset con 0x564561684400 session 0x56455ff7fc20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:05.277445+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 233 heartbeat osd_stat(store_statfs(0x1b38a6000/0x0/0x1bfc00000, data 0x5cf8896/0x5e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153681920 unmapped: 33873920 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 234 ms_handle_reset con 0x56455f4a2000 session 0x56455ec8e000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:06.277572+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 234 ms_handle_reset con 0x56455f4a2000 session 0x5645649be5a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2159553 data_alloc: 301989888 data_used: 16203776
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153698304 unmapped: 33857536 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:07.277771+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153755648 unmapped: 33800192 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:08.277990+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153755648 unmapped: 33800192 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:09.278255+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153763840 unmapped: 33792000 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:10.278443+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153763840 unmapped: 33792000 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 234 heartbeat osd_stat(store_statfs(0x1b388c000/0x0/0x1bfc00000, data 0x5d0f7de/0x5ea1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:11.278641+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.777141571s of 10.000526428s, submitted: 68
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2161405 data_alloc: 301989888 data_used: 16203776
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153772032 unmapped: 33783808 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:12.278839+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153772032 unmapped: 33783808 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:13.279042+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153870336 unmapped: 33685504 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 234 heartbeat osd_stat(store_statfs(0x1b385c000/0x0/0x1bfc00000, data 0x5d4088f/0x5ed2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:14.279193+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153870336 unmapped: 33685504 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:15.279380+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 234 ms_handle_reset con 0x56455f1d6800 session 0x564560c98780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153960448 unmapped: 33595392 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 234 heartbeat osd_stat(store_statfs(0x1b384e000/0x0/0x1bfc00000, data 0x5d4e529/0x5ee0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:16.279591+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564844000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2164263 data_alloc: 301989888 data_used: 16203776
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153960448 unmapped: 33595392 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:17.279722+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 235 ms_handle_reset con 0x564564844000 session 0x564560ea3a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 153976832 unmapped: 33579008 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:18.279894+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 154075136 unmapped: 33480704 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560088000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 235 ms_handle_reset con 0x564560088000 session 0x564560d1e960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153b400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:19.280071+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 236 ms_handle_reset con 0x56456153b400 session 0x564566a654a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 154091520 unmapped: 33464320 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 236 handle_osd_map epochs [236,236], i have 236, src has [1,236]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:20.280185+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153b400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 236 ms_handle_reset con 0x56456153b400 session 0x564565e76780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 236 handle_osd_map epochs [236,236], i have 236, src has [1,236]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 236 ms_handle_reset con 0x56455f1d6800 session 0x564565e770e0
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 05 10:29:20 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1156651427' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 154099712 unmapped: 33456128 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:21.280344+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.761653900s of 10.000305176s, submitted: 55
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 236 heartbeat osd_stat(store_statfs(0x1b3817000/0x0/0x1bfc00000, data 0x5d8015d/0x5f16000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2176955 data_alloc: 301989888 data_used: 16216064
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 155213824 unmapped: 32342016 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 236 heartbeat osd_stat(store_statfs(0x1b37f9000/0x0/0x1bfc00000, data 0x5d9d96f/0x5f34000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:22.280486+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 155344896 unmapped: 32210944 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:23.280636+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 155344896 unmapped: 32210944 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 236 heartbeat osd_stat(store_statfs(0x1b37eb000/0x0/0x1bfc00000, data 0x5dad31c/0x5f43000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:24.280824+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 155344896 unmapped: 32210944 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:25.281025+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565cb6800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 236 ms_handle_reset con 0x564565cb6800 session 0x56455ff723c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 155377664 unmapped: 32178176 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:26.281125+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 handle_osd_map epochs [237,237], i have 237, src has [1,237]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560088000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2187435 data_alloc: 301989888 data_used: 16232448
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 ms_handle_reset con 0x564560088000 session 0x5645629150e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 155492352 unmapped: 32063488 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:27.281242+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 heartbeat osd_stat(store_statfs(0x1b37a9000/0x0/0x1bfc00000, data 0x5deae01/0x5f84000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 ms_handle_reset con 0x56455fc46800 session 0x564562909a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 155516928 unmapped: 32038912 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:28.281439+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 ms_handle_reset con 0x56455fc46800 session 0x564562909e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 heartbeat osd_stat(store_statfs(0x1b37a7000/0x0/0x1bfc00000, data 0x5deae74/0x5f86000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 155574272 unmapped: 31981568 heap: 187555840 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 ms_handle_reset con 0x56455f1d6800 session 0x564563962b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560088000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:29.281566+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153b400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 ms_handle_reset con 0x56456153b400 session 0x564560031e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 ms_handle_reset con 0x564560088000 session 0x56455ec8cb40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565cb6800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561684c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 156598272 unmapped: 36249600 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 ms_handle_reset con 0x564561684c00 session 0x564560ea2f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 ms_handle_reset con 0x564565cb6800 session 0x564562705c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:30.281801+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561684c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 156606464 unmapped: 36241408 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 ms_handle_reset con 0x564561684c00 session 0x56455fa92b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 ms_handle_reset con 0x56455fc46800 session 0x5645627043c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:31.282002+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.528427124s of 10.004014969s, submitted: 111
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2309319 data_alloc: 301989888 data_used: 16236544
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 156663808 unmapped: 36184064 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb93000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 238 ms_handle_reset con 0x56455fb93000 session 0x5645627045a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153a800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:32.282151+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc49000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 238 ms_handle_reset con 0x56455f4a2c00 session 0x56456383c3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 238 ms_handle_reset con 0x56456153a800 session 0x564562704780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 157949952 unmapped: 34897920 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 239 ms_handle_reset con 0x56455fc49000 session 0x564562c241e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:33.282316+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 239 ms_handle_reset con 0x56455f1d6800 session 0x564562c250e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 157949952 unmapped: 34897920 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 239 heartbeat osd_stat(store_statfs(0x1b1d00000/0x0/0x1bfc00000, data 0x788c5fa/0x7a2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:34.282502+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb93000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 239 ms_handle_reset con 0x56455fb93000 session 0x56455ff7f680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 158072832 unmapped: 34775040 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:35.282603+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 158269440 unmapped: 34578432 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:36.282771+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2414062 data_alloc: 301989888 data_used: 16248832
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562906400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 158277632 unmapped: 34570240 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564844800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565494800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:37.282915+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 239 ms_handle_reset con 0x564565494800 session 0x5645637543c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 239 heartbeat osd_stat(store_statfs(0x1b18dd000/0x0/0x1bfc00000, data 0x7cb0813/0x7e51000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 157433856 unmapped: 35414016 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 239 handle_osd_map epochs [240,240], i have 240, src has [1,240]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 240 ms_handle_reset con 0x564564844800 session 0x5645639625a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:38.283056+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 240 ms_handle_reset con 0x564562906400 session 0x56455ff7e000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 240 handle_osd_map epochs [240,240], i have 240, src has [1,240]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 157466624 unmapped: 35381248 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:39.283274+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 241 ms_handle_reset con 0x56455f1d6800 session 0x5645644ed0e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb93000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 241 heartbeat osd_stat(store_statfs(0x1b18a4000/0x0/0x1bfc00000, data 0x7ce5efa/0x7e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [0,0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 157483008 unmapped: 35364864 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 241 ms_handle_reset con 0x56455fb93000 session 0x5645649bf680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:40.283467+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 158556160 unmapped: 34291712 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:41.283649+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.069855690s of 10.144371033s, submitted: 173
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fbc0800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 241 heartbeat osd_stat(store_statfs(0x1b1ca3000/0x0/0x1bfc00000, data 0x78e8294/0x7a8a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2430531 data_alloc: 301989888 data_used: 16261120
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 159637504 unmapped: 33210368 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:42.283784+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 242 ms_handle_reset con 0x56455fbc0800 session 0x56455fa923c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 159670272 unmapped: 33177600 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:43.284020+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fbc0800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 159858688 unmapped: 32989184 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:44.284506+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 243 ms_handle_reset con 0x56455fbc0800 session 0x5645649bf0e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 243 handle_osd_map epochs [243,243], i have 243, src has [1,243]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 243 heartbeat osd_stat(store_statfs(0x1b2764000/0x0/0x1bfc00000, data 0x6da255a/0x6f44000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 243 ms_handle_reset con 0x56455f1d6800 session 0x5645627052c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 159973376 unmapped: 32874496 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:45.284696+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fb93000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 243 ms_handle_reset con 0x56455fb93000 session 0x56455ec8c1e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 159973376 unmapped: 32874496 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:46.284845+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 243 handle_osd_map epochs [244,244], i have 244, src has [1,244]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 244 handle_osd_map epochs [244,244], i have 244, src has [1,244]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2247894 data_alloc: 301989888 data_used: 16281600
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 160432128 unmapped: 32415744 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:47.285086+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Got map version 52
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 160579584 unmapped: 32268288 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 244 handle_osd_map epochs [244,244], i have 244, src has [1,244]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:48.285248+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 160579584 unmapped: 32268288 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:49.285436+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 160768000 unmapped: 32079872 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:50.285591+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 244 heartbeat osd_stat(store_statfs(0x1b3224000/0x0/0x1bfc00000, data 0x5f621ae/0x6108000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 160768000 unmapped: 32079872 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 244 handle_osd_map epochs [244,245], i have 244, src has [1,245]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:51.285775+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2258584 data_alloc: 301989888 data_used: 16293888
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 160792576 unmapped: 32055296 heap: 192847872 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:52.285928+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153f800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.632995605s of 11.197392464s, submitted: 454
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b31f2000/0x0/0x1bfc00000, data 0x5f958e9/0x613b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 174284800 unmapped: 22241280 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 ms_handle_reset con 0x56456153f800 session 0x5645644ec5a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:53.286032+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 162922496 unmapped: 33603584 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:54.286373+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 163004416 unmapped: 33521664 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:55.286525+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b235b000/0x0/0x1bfc00000, data 0x6e2dfa6/0x6fd3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 163045376 unmapped: 33480704 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b235b000/0x0/0x1bfc00000, data 0x6e2dfa6/0x6fd3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562613c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:56.286659+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 ms_handle_reset con 0x564562613c00 session 0x564564333860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b2315000/0x0/0x1bfc00000, data 0x6e71f8d/0x7019000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc48c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2378104 data_alloc: 301989888 data_used: 16293888
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 163127296 unmapped: 33398784 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:57.286821+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153a000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 163127296 unmapped: 33398784 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:58.287056+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 164773888 unmapped: 31752192 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:59.287302+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 168919040 unmapped: 27607040 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:00.287468+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 168919040 unmapped: 27607040 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:01.287632+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2454816 data_alloc: 301989888 data_used: 25776128
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 170434560 unmapped: 26091520 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:02.287783+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b22a1000/0x0/0x1bfc00000, data 0x6ee450a/0x708d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 170516480 unmapped: 26009600 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.939274788s of 10.383980751s, submitted: 85
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:03.287949+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 170713088 unmapped: 25812992 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:04.288140+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 170737664 unmapped: 25788416 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:05.288313+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 170737664 unmapped: 25788416 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:06.288481+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2470498 data_alloc: 301989888 data_used: 25776128
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 170958848 unmapped: 25567232 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:07.288632+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b106e000/0x0/0x1bfc00000, data 0x6f7642e/0x711f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 171327488 unmapped: 25198592 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:08.288844+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 170590208 unmapped: 25935872 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:09.289047+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 171712512 unmapped: 24813568 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:10.289208+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176513024 unmapped: 20013056 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:11.289355+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2599046 data_alloc: 301989888 data_used: 26664960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 175448064 unmapped: 21078016 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:12.289480+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176259072 unmapped: 20267008 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.260718346s of 10.007199287s, submitted: 194
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:13.289616+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b0051000/0x0/0x1bfc00000, data 0x7f9327d/0x813c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 174080000 unmapped: 22446080 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:14.289745+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176185344 unmapped: 20340736 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:15.289851+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176185344 unmapped: 20340736 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:16.290010+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2607314 data_alloc: 301989888 data_used: 26914816
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176300032 unmapped: 20226048 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:17.290189+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176300032 unmapped: 20226048 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:18.290331+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1aed56000/0x0/0x1bfc00000, data 0x80efbff/0x8298000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176308224 unmapped: 20217856 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:19.290526+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176316416 unmapped: 20209664 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:20.290688+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176300032 unmapped: 20226048 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:21.290804+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2613400 data_alloc: 301989888 data_used: 26914816
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176300032 unmapped: 20226048 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:22.290920+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176308224 unmapped: 20217856 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1aed16000/0x0/0x1bfc00000, data 0x8130cca/0x82d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:23.291069+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1aed16000/0x0/0x1bfc00000, data 0x8130cca/0x82d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176357376 unmapped: 20168704 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:24.291288+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.852371216s of 11.272130966s, submitted: 100
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 heartbeat osd_stat(store_statfs(0x1aecf5000/0x0/0x1bfc00000, data 0x815160b/0x82f9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176373760 unmapped: 20152320 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:25.291444+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 177487872 unmapped: 19038208 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:26.291634+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2625584 data_alloc: 301989888 data_used: 26914816
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 177364992 unmapped: 19161088 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:27.291755+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 177364992 unmapped: 19161088 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:28.291940+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 177741824 unmapped: 18784256 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:29.292213+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 246 heartbeat osd_stat(store_statfs(0x1aec8a000/0x0/0x1bfc00000, data 0x81b9922/0x8364000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 177750016 unmapped: 18776064 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:30.292358+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 246 heartbeat osd_stat(store_statfs(0x1aec65000/0x0/0x1bfc00000, data 0x81de730/0x8389000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 177741824 unmapped: 18784256 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:31.292541+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2625502 data_alloc: 301989888 data_used: 26927104
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 177741824 unmapped: 18784256 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:32.292705+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178790400 unmapped: 17735680 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:33.292877+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178790400 unmapped: 17735680 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:34.293045+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560055000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178913280 unmapped: 17612800 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:35.293197+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 246 heartbeat osd_stat(store_statfs(0x1aec1d000/0x0/0x1bfc00000, data 0x82258e4/0x83cf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178913280 unmapped: 17612800 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:36.293392+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.615652084s of 11.978124619s, submitted: 107
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 247 ms_handle_reset con 0x56455fc46800 session 0x564563963a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 247 heartbeat osd_stat(store_statfs(0x1aec1d000/0x0/0x1bfc00000, data 0x82258e4/0x83cf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2640898 data_alloc: 301989888 data_used: 26939392
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179019776 unmapped: 17506304 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:37.293556+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 248 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562613c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645612ed400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 248 ms_handle_reset con 0x5645612ed400 session 0x56455ff7c000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179044352 unmapped: 17481728 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:38.293705+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 249 ms_handle_reset con 0x564562613c00 session 0x564563963c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 249 ms_handle_reset con 0x56455f4a2000 session 0x564563755c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179044352 unmapped: 17481728 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:39.293990+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179159040 unmapped: 17367040 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:40.294143+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 249 heartbeat osd_stat(store_statfs(0x1aeba8000/0x0/0x1bfc00000, data 0x8290ade/0x8443000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179175424 unmapped: 17350656 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:41.294279+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645658c9000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560052c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 249 ms_handle_reset con 0x564560052c00 session 0x564563963e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2649447 data_alloc: 301989888 data_used: 26947584
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179175424 unmapped: 17350656 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:42.294436+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 250 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 250 ms_handle_reset con 0x56455fc46800 session 0x564562443680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179200000 unmapped: 17326080 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:43.294569+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 251 ms_handle_reset con 0x56455f4a2000 session 0x56455ec8d2c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 251 ms_handle_reset con 0x5645658c9000 session 0x56455ff7e960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 251 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179224576 unmapped: 17301504 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:44.294709+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645612ed400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179232768 unmapped: 17293312 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:45.294860+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 252 ms_handle_reset con 0x5645612ed400 session 0x56455ff794a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180297728 unmapped: 16228352 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:46.295015+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.501951218s of 10.003910065s, submitted: 143
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 252 heartbeat osd_stat(store_statfs(0x1aeb5f000/0x0/0x1bfc00000, data 0x82d3e55/0x848c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [0,1,0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2671581 data_alloc: 301989888 data_used: 26980352
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 181387264 unmapped: 15138816 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:47.295121+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560088800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 252 heartbeat osd_stat(store_statfs(0x1aeb28000/0x0/0x1bfc00000, data 0x830f100/0x84c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565f75c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 252 ms_handle_reset con 0x564565f75c00 session 0x564563754b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 181387264 unmapped: 15138816 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:48.295254+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455fc46800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 253 ms_handle_reset con 0x56455f4a2000 session 0x564562c24b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 181387264 unmapped: 15138816 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:49.295419+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 253 handle_osd_map epochs [254,254], i have 254, src has [1,254]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 254 ms_handle_reset con 0x56455fc46800 session 0x5645649beb40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 254 heartbeat osd_stat(store_statfs(0x1aeb0d000/0x0/0x1bfc00000, data 0x8325dd0/0x84e0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 254 ms_handle_reset con 0x564560088800 session 0x56455ff79a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 181616640 unmapped: 14909440 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:50.295588+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645612ed400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 181616640 unmapped: 14909440 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:51.295805+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2689737 data_alloc: 301989888 data_used: 26992640
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 181641216 unmapped: 14884864 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 256 ms_handle_reset con 0x5645612ed400 session 0x56455ff78960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:52.295942+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 181657600 unmapped: 14868480 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:53.296159+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 256 handle_osd_map epochs [255,256], i have 256, src has [1,256]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 181665792 unmapped: 14860288 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 257 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:54.296286+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 257 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 257 ms_handle_reset con 0x564562907400 session 0x56455ff7da40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 181665792 unmapped: 14860288 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:55.296458+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 257 heartbeat osd_stat(store_statfs(0x1aea96000/0x0/0x1bfc00000, data 0x83979ab/0x8557000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 257 ms_handle_reset con 0x564560055000 session 0x56455ff72000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 181665792 unmapped: 14860288 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:56.296595+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.500565529s of 10.000756264s, submitted: 145
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 257 ms_handle_reset con 0x56456153a000 session 0x56455ec8f2c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 257 ms_handle_reset con 0x56455fc48c00 session 0x564562914f00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2694445 data_alloc: 301989888 data_used: 27004928
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645658c9000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 182730752 unmapped: 13795328 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:57.296719+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 257 ms_handle_reset con 0x5645658c9000 session 0x56455ec8b4a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178782208 unmapped: 17743872 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:58.296865+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178880512 unmapped: 17645568 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:59.297024+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178012160 unmapped: 18513920 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:00.297131+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b0932000/0x0/0x1bfc00000, data 0x64fc162/0x66bb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178012160 unmapped: 18513920 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:01.297246+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2403368 data_alloc: 301989888 data_used: 16408576
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 176979968 unmapped: 19546112 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:02.297375+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 259 handle_osd_map epochs [259,259], i have 259, src has [1,259]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 259 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 177012736 unmapped: 19513344 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:03.297507+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178135040 unmapped: 18391040 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:04.297661+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 261 ms_handle_reset con 0x56455e95b800 session 0x564562442b40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 261 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 261 ms_handle_reset con 0x56455e95b800 session 0x564562443a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 261 heartbeat osd_stat(store_statfs(0x1b08d9000/0x0/0x1bfc00000, data 0x654f59d/0x6712000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178126848 unmapped: 18399232 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:05.297802+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178126848 unmapped: 18399232 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:06.297991+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.352023125s of 10.004399300s, submitted: 194
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2428992 data_alloc: 301989888 data_used: 16408576
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178708480 unmapped: 17817600 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:07.298128+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178913280 unmapped: 17612800 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:08.298256+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 261 heartbeat osd_stat(store_statfs(0x1b0863000/0x0/0x1bfc00000, data 0x65c65b4/0x6789000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178913280 unmapped: 17612800 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:09.298417+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 178987008 unmapped: 17539072 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:10.298588+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179077120 unmapped: 17448960 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:11.298671+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2431748 data_alloc: 301989888 data_used: 16420864
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179077120 unmapped: 17448960 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:12.298909+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180248576 unmapped: 16277504 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:13.299108+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:14.299287+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180256768 unmapped: 16269312 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 262 heartbeat osd_stat(store_statfs(0x1b07e8000/0x0/0x1bfc00000, data 0x6640f07/0x6806000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:15.299563+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180256768 unmapped: 16269312 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:16.299702+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180297728 unmapped: 16228352 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.738206863s of 10.000194550s, submitted: 71
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2439104 data_alloc: 301989888 data_used: 16424960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:17.299871+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179347456 unmapped: 17178624 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:18.300048+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179470336 unmapped: 17055744 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:19.300446+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179478528 unmapped: 17047552 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:20.300650+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179462144 unmapped: 17063936 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 262 heartbeat osd_stat(store_statfs(0x1b075d000/0x0/0x1bfc00000, data 0x66c8e9a/0x6890000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:21.300890+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179462144 unmapped: 17063936 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2449130 data_alloc: 301989888 data_used: 16424960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:22.301927+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 179462144 unmapped: 17063936 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:23.302191+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180559872 unmapped: 15966208 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 262 ms_handle_reset con 0x564562907400 session 0x5645637552c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:24.302643+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180568064 unmapped: 15958016 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561583400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:25.302899+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180584448 unmapped: 15941632 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 handle_osd_map epochs [263,263], i have 263, src has [1,263]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 handle_osd_map epochs [263,263], i have 263, src has [1,263]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 ms_handle_reset con 0x564561583400 session 0x564563755e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 heartbeat osd_stat(store_statfs(0x1b0705000/0x0/0x1bfc00000, data 0x6722fdc/0x68e9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:26.303073+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.684124947s of 10.001629829s, submitted: 66
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180592640 unmapped: 15933440 heap: 196526080 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2463777 data_alloc: 301989888 data_used: 16437248
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d7800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565f74400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:27.303307+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 ms_handle_reset con 0x564565f74400 session 0x564564333c20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 ms_handle_reset con 0x56455f1d7800 session 0x564565e930e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180609024 unmapped: 16965632 heap: 197574656 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d7800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:28.303550+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 180707328 unmapped: 16867328 heap: 197574656 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:29.303749+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 186155008 unmapped: 32415744 heap: 218570752 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:30.303947+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 heartbeat osd_stat(store_statfs(0x1ad28f000/0x0/0x1bfc00000, data 0x9b952ae/0x9d5f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 184066048 unmapped: 34504704 heap: 218570752 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:31.304092+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 183902208 unmapped: 34668544 heap: 218570752 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3318806 data_alloc: 301989888 data_used: 16437248
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:32.304297+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192405504 unmapped: 30367744 heap: 222773248 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:33.304430+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 201146368 unmapped: 21626880 heap: 222773248 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:34.304657+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193724416 unmapped: 29048832 heap: 222773248 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:35.304816+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 185352192 unmapped: 37421056 heap: 222773248 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 heartbeat osd_stat(store_statfs(0x1a2e3f000/0x0/0x1bfc00000, data 0x13be6047/0x13daf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:36.304991+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 7.252369404s of 10.003318787s, submitted: 281
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 189628416 unmapped: 33144832 heap: 222773248 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4385304 data_alloc: 301989888 data_used: 16437248
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:37.305164+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 197099520 unmapped: 29876224 heap: 226975744 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:38.305335+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194478080 unmapped: 32497664 heap: 226975744 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:39.305662+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187375616 unmapped: 39600128 heap: 226975744 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:40.305820+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192061440 unmapped: 39116800 heap: 231178240 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:41.306014+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 200802304 unmapped: 30375936 heap: 231178240 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 heartbeat osd_stat(store_statfs(0x1935b6000/0x0/0x1bfc00000, data 0x2346d392/0x23638000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [1,0,0,0,0,0,1,0,4,3])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 6019244 data_alloc: 301989888 data_used: 16437248
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:42.306168+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 188776448 unmapped: 46604288 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 ms_handle_reset con 0x56455e95b800 session 0x5645643321e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 ms_handle_reset con 0x56455f1d7800 session 0x564560f834a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x5645658c8000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 ms_handle_reset con 0x5645658c8000 session 0x564560f84780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455ef90800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:43.306310+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 ms_handle_reset con 0x56455ef90800 session 0x5645639621e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 189882368 unmapped: 45498368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562612000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:44.306537+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190152704 unmapped: 45228032 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 ms_handle_reset con 0x564562612000 session 0x56455fc4e960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 heartbeat osd_stat(store_statfs(0x19095a000/0x0/0x1bfc00000, data 0x260c85c0/0x26294000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:45.306703+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 188637184 unmapped: 46743552 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565622000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 ms_handle_reset con 0x564565622000 session 0x564560d1fe00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:46.306848+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 6.118512154s of 10.000954628s, submitted: 435
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 188678144 unmapped: 46702592 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 264 ms_handle_reset con 0x56455e95b800 session 0x564560c981e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:47.307001+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2637915 data_alloc: 301989888 data_used: 16449536
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 188735488 unmapped: 46645248 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:48.307125+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187678720 unmapped: 47702016 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 264 heartbeat osd_stat(store_statfs(0x1b00f1000/0x0/0x1bfc00000, data 0x6931c2a/0x6afd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:49.307299+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 265 ms_handle_reset con 0x56455e95a400 session 0x5645644ed0e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187621376 unmapped: 47759360 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:50.307723+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187629568 unmapped: 47751168 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560089400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 266 ms_handle_reset con 0x564560089400 session 0x564563754960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d7c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:51.307905+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 266 handle_osd_map epochs [266,266], i have 266, src has [1,266]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187629568 unmapped: 47751168 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 266 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 267 ms_handle_reset con 0x56455f1d7c00 session 0x564560f36960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:52.308117+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2639775 data_alloc: 301989888 data_used: 16474112
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187654144 unmapped: 47726592 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d7c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 267 ms_handle_reset con 0x56455f1d7c00 session 0x56455ec8e000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:53.308325+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187654144 unmapped: 47726592 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b00e6000/0x0/0x1bfc00000, data 0x693851d/0x6b07000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:54.308469+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95a400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187654144 unmapped: 47726592 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:55.308583+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 268 ms_handle_reset con 0x56455e95a400 session 0x564562c25680
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 268 heartbeat osd_stat(store_statfs(0x1b00e5000/0x0/0x1bfc00000, data 0x693861a/0x6b09000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187662336 unmapped: 47718400 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455e95b800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560089400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 268 ms_handle_reset con 0x564560089400 session 0x5645644ec960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 268 ms_handle_reset con 0x56455e95b800 session 0x564562c24960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565622000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:56.308725+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.378430367s of 10.001433372s, submitted: 174
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 269 ms_handle_reset con 0x564565622000 session 0x564562c252c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187695104 unmapped: 47685632 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565286c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 269 ms_handle_reset con 0x564565286c00 session 0x564562705860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565288800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 269 heartbeat osd_stat(store_statfs(0x1b00db000/0x0/0x1bfc00000, data 0x693ccfd/0x6b13000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:57.308865+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2653949 data_alloc: 301989888 data_used: 16486400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 269 ms_handle_reset con 0x564565288800 session 0x564562705a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187727872 unmapped: 47652864 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565286000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 269 ms_handle_reset con 0x564565286000 session 0x564560031e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565287400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:58.309013+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187727872 unmapped: 47652864 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 270 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 270 ms_handle_reset con 0x564565287400 session 0x5645629090e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:59.309185+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153a800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 270 ms_handle_reset con 0x56456153a800 session 0x564562909a40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153a800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 270 heartbeat osd_stat(store_statfs(0x1b00d7000/0x0/0x1bfc00000, data 0x693efd1/0x6b16000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [0,0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 187768832 unmapped: 47611904 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 271 ms_handle_reset con 0x56456153a800 session 0x564562909e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:00.309337+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 188850176 unmapped: 46530560 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565286000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 271 ms_handle_reset con 0x564565286000 session 0x5645629152c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:01.309487+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 188858368 unmapped: 46522368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 272 handle_osd_map epochs [272,272], i have 272, src has [1,272]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 272 heartbeat osd_stat(store_statfs(0x1b00d3000/0x0/0x1bfc00000, data 0x6941326/0x6b1a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:02.309747+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2669623 data_alloc: 301989888 data_used: 16510976
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 188858368 unmapped: 46522368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 273 ms_handle_reset con 0x564562907400 session 0x564562915860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f4a2000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565f74400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:03.309914+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 274 ms_handle_reset con 0x564565f74400 session 0x564562c24780
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 274 ms_handle_reset con 0x56455f4a2000 session 0x5645629150e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 274 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 274 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 274 handle_osd_map epochs [273,274], i have 274, src has [1,274]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 189939712 unmapped: 45441024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56456153a800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:04.310094+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 274 ms_handle_reset con 0x56456153a800 session 0x56455ec8cd20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562907400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 189939712 unmapped: 45441024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 274 ms_handle_reset con 0x564562907400 session 0x564562c241e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:05.310258+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565286000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 274 heartbeat osd_stat(store_statfs(0x1b00c5000/0x0/0x1bfc00000, data 0x6947d02/0x6b27000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 274 ms_handle_reset con 0x564565286000 session 0x564562c25e00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564565f74400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190005248 unmapped: 45375488 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:06.310432+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 275 ms_handle_reset con 0x564565f74400 session 0x56455fc4fe00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 275 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.286320686s of 10.007546425s, submitted: 261
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562613c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190021632 unmapped: 45359104 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 275 ms_handle_reset con 0x564562613c00 session 0x5645649be3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561582000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 275 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 275 ms_handle_reset con 0x564561582000 session 0x56455ec8c3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:07.310560+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2677312 data_alloc: 301989888 data_used: 16510976
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190070784 unmapped: 45309952 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b00c2000/0x0/0x1bfc00000, data 0x694a121/0x6b2a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564560054800
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 276 ms_handle_reset con 0x564560054800 session 0x5645629083c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564562613c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 276 ms_handle_reset con 0x564562613c00 session 0x56455fc4e3c0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:08.310693+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190136320 unmapped: 45244416 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 276 heartbeat osd_stat(store_statfs(0x1b00be000/0x0/0x1bfc00000, data 0x694c4c8/0x6b2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:09.310894+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190136320 unmapped: 45244416 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:10.311082+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190136320 unmapped: 45244416 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:11.311242+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190152704 unmapped: 45228032 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:12.311386+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2681184 data_alloc: 301989888 data_used: 16523264
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190152704 unmapped: 45228032 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:13.311536+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b00b9000/0x0/0x1bfc00000, data 0x6950a5e/0x6b34000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190177280 unmapped: 45203456 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:14.311687+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190193664 unmapped: 45187072 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:15.311851+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190193664 unmapped: 45187072 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:16.312112+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 279 handle_osd_map epochs [279,279], i have 279, src has [1,279]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 279 handle_osd_map epochs [279,279], i have 279, src has [1,279]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.483242989s of 10.876601219s, submitted: 189
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:17.312308+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2691578 data_alloc: 301989888 data_used: 16535552
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:18.312636+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 279 heartbeat osd_stat(store_statfs(0x1b00b4000/0x0/0x1bfc00000, data 0x6952d3a/0x6b39000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:19.312827+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:20.313020+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 279 heartbeat osd_stat(store_statfs(0x1b00b6000/0x0/0x1bfc00000, data 0x6952c73/0x6b38000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 279 handle_osd_map epochs [279,280], i have 279, src has [1,280]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:21.313206+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 280 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190234624 unmapped: 45146112 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:22.313387+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2695450 data_alloc: 301989888 data_used: 16547840
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:23.313720+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:24.314416+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 281 heartbeat osd_stat(store_statfs(0x1b00af000/0x0/0x1bfc00000, data 0x695714b/0x6b3f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:25.314592+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:26.314854+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:27.315196+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2696688 data_alloc: 301989888 data_used: 16547840
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.065249443s of 10.265866280s, submitted: 73
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 281 handle_osd_map epochs [282,282], i have 282, src has [1,282]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190300160 unmapped: 45080576 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:28.315757+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190300160 unmapped: 45080576 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 282 heartbeat osd_stat(store_statfs(0x1b00ab000/0x0/0x1bfc00000, data 0x6959489/0x6b42000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:29.316090+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190300160 unmapped: 45080576 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:30.316477+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190308352 unmapped: 45072384 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:31.316870+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190349312 unmapped: 45031424 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:32.317023+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2702560 data_alloc: 301989888 data_used: 16572416
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190349312 unmapped: 45031424 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 283 heartbeat osd_stat(store_statfs(0x1b00a7000/0x0/0x1bfc00000, data 0x695b6ab/0x6b46000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:33.317159+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190349312 unmapped: 45031424 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 283 heartbeat osd_stat(store_statfs(0x1b00a7000/0x0/0x1bfc00000, data 0x695b6ab/0x6b46000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:34.317284+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190349312 unmapped: 45031424 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:35.317503+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190349312 unmapped: 45031424 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:36.317761+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 handle_osd_map epochs [284,284], i have 284, src has [1,284]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190357504 unmapped: 45023232 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:37.318017+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2705016 data_alloc: 301989888 data_used: 16584704
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190357504 unmapped: 45023232 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:38.318157+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190365696 unmapped: 45015040 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:39.318861+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a4000/0x0/0x1bfc00000, data 0x695d7fc/0x6b49000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190365696 unmapped: 45015040 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:40.319038+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190365696 unmapped: 45015040 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:41.319312+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 14.113472939s of 14.264537811s, submitted: 80
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190373888 unmapped: 45006848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:42.319552+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2704488 data_alloc: 301989888 data_used: 16584704
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190373888 unmapped: 45006848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a5000/0x0/0x1bfc00000, data 0x695d7fc/0x6b49000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:43.319738+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190373888 unmapped: 45006848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:44.319916+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190373888 unmapped: 45006848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:45.320084+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190373888 unmapped: 45006848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:46.320243+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a4000/0x0/0x1bfc00000, data 0x695d897/0x6b4a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:47.320391+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2706080 data_alloc: 301989888 data_used: 16584704
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:48.320518+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:49.320686+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 ms_handle_reset con 0x56455fc48000 session 0x564560f84000
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 ms_handle_reset con 0x56455fc47400 session 0x56455fbc54a0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564561684c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:50.320831+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a4000/0x0/0x1bfc00000, data 0x695d897/0x6b4a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:51.320995+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:52.321180+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2705728 data_alloc: 301989888 data_used: 16584704
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:53.321398+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a4000/0x0/0x1bfc00000, data 0x695d897/0x6b4a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a4000/0x0/0x1bfc00000, data 0x695d897/0x6b4a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:54.321592+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:55.321758+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:56.321880+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 14.591362953s of 14.624481201s, submitted: 6
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:57.322077+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2705904 data_alloc: 301989888 data_used: 16584704
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:58.322237+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:59.322453+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a4000/0x0/0x1bfc00000, data 0x695d897/0x6b4a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:00.322614+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:01.322813+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:02.322990+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2705214 data_alloc: 301989888 data_used: 16584704
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190398464 unmapped: 44982272 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a5000/0x0/0x1bfc00000, data 0x695d7fc/0x6b49000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:03.323226+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190398464 unmapped: 44982272 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:04.323334+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190398464 unmapped: 44982272 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:05.323572+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190398464 unmapped: 44982272 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:06.323800+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.955564499s of 10.000794411s, submitted: 9
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190128128 unmapped: 45252608 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:07.324047+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2704524 data_alloc: 301989888 data_used: 16584704
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190128128 unmapped: 45252608 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:08.324210+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a6000/0x0/0x1bfc00000, data 0x695d761/0x6b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190128128 unmapped: 45252608 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:09.324432+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190128128 unmapped: 45252608 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:10.324633+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a6000/0x0/0x1bfc00000, data 0x695d761/0x6b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190136320 unmapped: 45244416 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:11.324801+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a6000/0x0/0x1bfc00000, data 0x695d761/0x6b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190136320 unmapped: 45244416 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:12.324925+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2704524 data_alloc: 301989888 data_used: 16584704
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190136320 unmapped: 45244416 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:13.325034+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190136320 unmapped: 45244416 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:14.325171+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190136320 unmapped: 45244416 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:15.325349+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a6000/0x0/0x1bfc00000, data 0x695d761/0x6b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190160896 unmapped: 45219840 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:16.325516+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.976320267s of 10.000534058s, submitted: 5
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190160896 unmapped: 45219840 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a6000/0x0/0x1bfc00000, data 0x695d761/0x6b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:17.325754+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2705324 data_alloc: 301989888 data_used: 16605184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190160896 unmapped: 45219840 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:18.325912+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190169088 unmapped: 45211648 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:19.326147+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190169088 unmapped: 45211648 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:20.326335+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190169088 unmapped: 45211648 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a6000/0x0/0x1bfc00000, data 0x695d761/0x6b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:21.326509+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190169088 unmapped: 45211648 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:22.326618+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2705324 data_alloc: 301989888 data_used: 16605184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190169088 unmapped: 45211648 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:23.326823+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190169088 unmapped: 45211648 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:24.327023+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190169088 unmapped: 45211648 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:25.327192+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190169088 unmapped: 45211648 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:26.327383+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a6000/0x0/0x1bfc00000, data 0x695d761/0x6b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.984609604s of 10.000021935s, submitted: 3
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190177280 unmapped: 45203456 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:27.327632+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2705324 data_alloc: 301989888 data_used: 16605184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190177280 unmapped: 45203456 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:28.328165+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190177280 unmapped: 45203456 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a6000/0x0/0x1bfc00000, data 0x695d761/0x6b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:29.328614+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190177280 unmapped: 45203456 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:30.329014+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a6000/0x0/0x1bfc00000, data 0x695d761/0x6b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190177280 unmapped: 45203456 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:31.329358+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190177280 unmapped: 45203456 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:32.329920+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a5000/0x0/0x1bfc00000, data 0x695d7fc/0x6b49000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2706916 data_alloc: 301989888 data_used: 16605184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190177280 unmapped: 45203456 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a5000/0x0/0x1bfc00000, data 0x695d7fc/0x6b49000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:33.330029+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190177280 unmapped: 45203456 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:34.330329+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:35.330553+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a4000/0x0/0x1bfc00000, data 0x695d897/0x6b4a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:36.330711+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.969836235s of 10.000657082s, submitted: 6
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:37.331028+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2708508 data_alloc: 301989888 data_used: 16605184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:38.331308+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:39.331595+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:40.332003+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a3000/0x0/0x1bfc00000, data 0x695d932/0x6b4b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:41.343676+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:42.343913+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a2000/0x0/0x1bfc00000, data 0x695d9cd/0x6b4c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2711868 data_alloc: 301989888 data_used: 16605184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:43.344830+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a2000/0x0/0x1bfc00000, data 0x695d9cd/0x6b4c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:44.345403+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:45.345740+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:46.346120+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.975413322s of 10.000435829s, submitted: 5
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190185472 unmapped: 45195264 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:47.346349+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a2000/0x0/0x1bfc00000, data 0x695d9cd/0x6b4c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2711868 data_alloc: 301989888 data_used: 16605184
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190193664 unmapped: 45187072 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:48.346553+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190193664 unmapped: 45187072 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:49.346797+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190193664 unmapped: 45187072 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:50.347039+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190201856 unmapped: 45178880 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:51.347194+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b00a2000/0x0/0x1bfc00000, data 0x695d9cd/0x6b4c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190201856 unmapped: 45178880 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:52.347331+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2714514 data_alloc: 301989888 data_used: 16617472
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190210048 unmapped: 45170688 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:53.347539+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190210048 unmapped: 45170688 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:54.347776+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190210048 unmapped: 45170688 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:55.348068+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190210048 unmapped: 45170688 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:56.348263+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.816808701s of 10.001707077s, submitted: 47
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190210048 unmapped: 45170688 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:57.348453+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 285 heartbeat osd_stat(store_statfs(0x1afca1000/0x0/0x1bfc00000, data 0x695fb77/0x6b4d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2713136 data_alloc: 301989888 data_used: 16617472
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190210048 unmapped: 45170688 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:58.348622+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190210048 unmapped: 45170688 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:59.348856+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190210048 unmapped: 45170688 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:00.349187+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 285 heartbeat osd_stat(store_statfs(0x1afca2000/0x0/0x1bfc00000, data 0x695fadc/0x6b4c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190210048 unmapped: 45170688 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:01.349672+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 handle_osd_map epochs [286,286], i have 286, src has [1,286]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afca2000/0x0/0x1bfc00000, data 0x695fadc/0x6b4c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:02.349849+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2716280 data_alloc: 301989888 data_used: 16629760
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:03.350213+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:04.350680+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:05.350995+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:06.351251+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190234624 unmapped: 45146112 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:07.351563+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2716280 data_alloc: 301989888 data_used: 16629760
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190234624 unmapped: 45146112 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:08.351721+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190234624 unmapped: 45146112 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:09.351942+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190234624 unmapped: 45146112 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:10.352506+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190234624 unmapped: 45146112 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:11.352656+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190234624 unmapped: 45146112 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:12.352919+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2716280 data_alloc: 301989888 data_used: 16629760
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190234624 unmapped: 45146112 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:13.353148+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190234624 unmapped: 45146112 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:14.353798+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:15.354028+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:16.354754+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:17.354947+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2716280 data_alloc: 301989888 data_used: 16629760
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:18.355381+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:19.355635+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:20.355869+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:21.356086+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 25.403402328s of 25.452810287s, submitted: 24
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:22.356556+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2716456 data_alloc: 301989888 data_used: 16629760
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:23.356804+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:24.360433+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:25.360634+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:26.360845+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9d000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:27.361005+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2715576 data_alloc: 301989888 data_used: 16629760
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9e000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:28.361191+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:29.364075+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:30.364284+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9c000/0x0/0x1bfc00000, data 0x6961e0a/0x6b52000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:31.364488+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190275584 unmapped: 45105152 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:32.364659+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190275584 unmapped: 45105152 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2717184 data_alloc: 301989888 data_used: 16629760
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:33.364817+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190275584 unmapped: 45105152 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:34.365023+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190275584 unmapped: 45105152 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.709640503s of 12.753151894s, submitted: 8
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:35.365183+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190275584 unmapped: 45105152 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:36.365392+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190275584 unmapped: 45105152 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 heartbeat osd_stat(store_statfs(0x1afc9e000/0x0/0x1bfc00000, data 0x6961cd4/0x6b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:37.365569+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190300160 unmapped: 45080576 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2720504 data_alloc: 301989888 data_used: 16642048
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:38.365757+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190300160 unmapped: 45080576 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 287 handle_osd_map epochs [287,287], i have 287, src has [1,287]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:39.365925+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190308352 unmapped: 45072384 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:40.366103+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 287 heartbeat osd_stat(store_statfs(0x1afc99000/0x0/0x1bfc00000, data 0x696404f/0x6b54000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190308352 unmapped: 45072384 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.2 total, 600.0 interval
                                                          Cumulative writes: 21K writes, 84K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.01 MB/s
                                                          Cumulative WAL: 21K writes, 7932 syncs, 2.77 writes per sync, written: 0.08 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 41.02 MB, 0.07 MB/s
                                                          Interval WAL: 11K writes, 4859 syncs, 2.36 writes per sync, written: 0.04 GB, 0.07 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:41.366255+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190308352 unmapped: 45072384 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 287 heartbeat osd_stat(store_statfs(0x1afc9a000/0x0/0x1bfc00000, data 0x696404f/0x6b54000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:42.366396+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190308352 unmapped: 45072384 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2721568 data_alloc: 301989888 data_used: 16642048
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:43.366552+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190308352 unmapped: 45072384 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:44.366679+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190308352 unmapped: 45072384 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:45.366900+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190308352 unmapped: 45072384 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 287 heartbeat osd_stat(store_statfs(0x1afc99000/0x0/0x1bfc00000, data 0x69640ea/0x6b55000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:46.367043+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.626863480s of 11.805592537s, submitted: 44
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190324736 unmapped: 45056000 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:47.367217+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190324736 unmapped: 45056000 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2725418 data_alloc: 301989888 data_used: 16654336
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:48.367350+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190324736 unmapped: 45056000 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:49.367575+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190332928 unmapped: 45047808 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Got map version 53
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc93000/0x0/0x1bfc00000, data 0x69664e9/0x6b5b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:50.367758+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190201856 unmapped: 45178880 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:51.367912+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190210048 unmapped: 45170688 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:52.368050+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2733984 data_alloc: 301989888 data_used: 16654336
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:53.368206+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190226432 unmapped: 45154304 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:54.368324+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:55.368507+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc92000/0x0/0x1bfc00000, data 0x6966526/0x6b5c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:56.368673+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.909127235s of 10.004127502s, submitted: 34
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:57.369063+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2732766 data_alloc: 301989888 data_used: 16654336
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:58.369256+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:59.369414+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc92000/0x0/0x1bfc00000, data 0x6966526/0x6b5c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:00.369580+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190242816 unmapped: 45137920 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:01.369773+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:02.369943+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2732076 data_alloc: 301989888 data_used: 16654336
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:03.370138+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:04.370370+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc93000/0x0/0x1bfc00000, data 0x6966414/0x6b5b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:05.370762+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:06.370928+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:07.371100+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2732076 data_alloc: 301989888 data_used: 16654336
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:08.371330+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190251008 unmapped: 45129728 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc93000/0x0/0x1bfc00000, data 0x6966414/0x6b5b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:09.371567+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:10.371732+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc93000/0x0/0x1bfc00000, data 0x6966414/0x6b5b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:11.371892+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 14.972538948s of 14.987371445s, submitted: 3
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc93000/0x0/0x1bfc00000, data 0x6966414/0x6b5b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:12.373700+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2732252 data_alloc: 301989888 data_used: 16654336
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:13.374010+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:14.374450+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:15.374735+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc93000/0x0/0x1bfc00000, data 0x6966414/0x6b5b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:16.375183+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190259200 unmapped: 45121536 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:17.375484+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc93000/0x0/0x1bfc00000, data 0x6966414/0x6b5b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2732412 data_alloc: 301989888 data_used: 16658432
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:18.375874+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:19.376260+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:20.376658+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:21.377101+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.994676590s of 10.004303932s, submitted: 2
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc93000/0x0/0x1bfc00000, data 0x6966414/0x6b5b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:22.377356+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2732236 data_alloc: 301989888 data_used: 16658432
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:23.377592+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:24.378025+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190267392 unmapped: 45113344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:25.378268+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190275584 unmapped: 45105152 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:26.378431+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc94000/0x0/0x1bfc00000, data 0x696637d/0x6b5a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190283776 unmapped: 45096960 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:27.385217+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190291968 unmapped: 45088768 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2735082 data_alloc: 301989888 data_used: 16658432
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:28.385529+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190291968 unmapped: 45088768 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:29.385802+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190291968 unmapped: 45088768 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:30.386036+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190291968 unmapped: 45088768 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:31.386258+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.954007149s of 10.000628471s, submitted: 10
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190291968 unmapped: 45088768 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 heartbeat osd_stat(store_statfs(0x1afc94000/0x0/0x1bfc00000, data 0x696637d/0x6b5a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:32.386538+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 289 handle_osd_map epochs [289,289], i have 289, src has [1,289]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 289 handle_osd_map epochs [289,289], i have 289, src has [1,289]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190316544 unmapped: 45064192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2734804 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:33.386693+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 289 heartbeat osd_stat(store_statfs(0x1afc91000/0x0/0x1bfc00000, data 0x69685c2/0x6b5c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190316544 unmapped: 45064192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:34.386869+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190316544 unmapped: 45064192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:35.387163+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190316544 unmapped: 45064192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:36.387370+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190316544 unmapped: 45064192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 289 heartbeat osd_stat(store_statfs(0x1afc91000/0x0/0x1bfc00000, data 0x69685c2/0x6b5c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:37.387510+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190316544 unmapped: 45064192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2734804 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:38.387687+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190316544 unmapped: 45064192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:39.387796+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190324736 unmapped: 45056000 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:40.387941+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190324736 unmapped: 45056000 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:41.388130+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.884110451s of 10.013845444s, submitted: 39
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc91000/0x0/0x1bfc00000, data 0x69685c2/0x6b5c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190332928 unmapped: 45047808 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:42.388278+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8d000/0x0/0x1bfc00000, data 0x696a7ba/0x6b60000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190332928 unmapped: 45047808 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:43.388475+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2737630 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190332928 unmapped: 45047808 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8d000/0x0/0x1bfc00000, data 0x696a7ba/0x6b60000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:44.388630+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190332928 unmapped: 45047808 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:45.388778+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191381504 unmapped: 43999232 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:46.389017+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190332928 unmapped: 45047808 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:47.389228+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190332928 unmapped: 45047808 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:48.389483+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2736926 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8e000/0x0/0x1bfc00000, data 0x696a7ba/0x6b60000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190332928 unmapped: 45047808 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:49.389688+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190341120 unmapped: 45039616 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:50.390133+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190341120 unmapped: 45039616 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:51.390302+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.973407745s of 10.031680107s, submitted: 23
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190341120 unmapped: 45039616 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:52.390477+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190341120 unmapped: 45039616 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:53.390785+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742054 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190357504 unmapped: 45023232 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8b000/0x0/0x1bfc00000, data 0x696aad6/0x6b63000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:54.391075+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190357504 unmapped: 45023232 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:55.391248+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190357504 unmapped: 45023232 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:56.391385+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc89000/0x0/0x1bfc00000, data 0x696ac09/0x6b65000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190357504 unmapped: 45023232 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:57.391626+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190357504 unmapped: 45023232 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:58.391777+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2745238 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190365696 unmapped: 45015040 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc89000/0x0/0x1bfc00000, data 0x696ac09/0x6b65000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:59.392028+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190365696 unmapped: 45015040 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:00.392209+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8b000/0x0/0x1bfc00000, data 0x696a9fe/0x6b63000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190373888 unmapped: 45006848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:01.392365+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190373888 unmapped: 45006848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:02.392521+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190373888 unmapped: 45006848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:03.392696+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2743506 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190373888 unmapped: 45006848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:04.392871+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190373888 unmapped: 45006848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8b000/0x0/0x1bfc00000, data 0x696a9fe/0x6b63000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:05.393065+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:06.393198+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 14.886683464s of 14.954931259s, submitted: 14
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8b000/0x0/0x1bfc00000, data 0x696a9fe/0x6b63000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:07.393366+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:08.393554+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742816 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:09.394249+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:10.394892+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:11.395411+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:12.395751+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190382080 unmapped: 44998656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:13.396154+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742816 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:14.396551+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:15.396924+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:16.397234+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:17.397483+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:18.397771+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742816 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:19.398079+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:20.398342+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190390272 unmapped: 44990464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:21.398626+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 14.984094620s of 14.997872353s, submitted: 3
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190398464 unmapped: 44982272 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:22.399125+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190406656 unmapped: 44974080 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:23.399368+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742992 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190406656 unmapped: 44974080 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:24.399556+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190406656 unmapped: 44974080 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:25.399822+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190406656 unmapped: 44974080 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:26.400054+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190406656 unmapped: 44974080 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:27.400252+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190406656 unmapped: 44974080 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:28.400486+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742992 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190406656 unmapped: 44974080 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:29.400676+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190406656 unmapped: 44974080 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:30.400872+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190414848 unmapped: 44965888 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:31.401101+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.991374016s of 10.053936005s, submitted: 3
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190423040 unmapped: 44957696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:32.401398+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190423040 unmapped: 44957696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:33.401639+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742816 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190423040 unmapped: 44957696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:34.401780+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190423040 unmapped: 44957696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:35.401942+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190423040 unmapped: 44957696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:36.402218+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190423040 unmapped: 44957696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:37.402473+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190423040 unmapped: 44957696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:38.402710+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742832 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:39.402916+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:40.403132+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:41.403370+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:42.403596+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:43.403824+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742832 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:44.404078+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:45.404284+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:46.404447+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 14.916477203s of 14.946298599s, submitted: 6
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:47.404580+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:48.404796+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742640 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190431232 unmapped: 44949504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:49.405040+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190439424 unmapped: 44941312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:50.405215+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190439424 unmapped: 44941312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:51.405383+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190447616 unmapped: 44933120 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:52.405565+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190447616 unmapped: 44933120 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:53.405735+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742992 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190447616 unmapped: 44933120 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:54.405881+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190455808 unmapped: 44924928 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:55.406091+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190455808 unmapped: 44924928 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:56.406270+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.980244637s of 10.000483513s, submitted: 5
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190455808 unmapped: 44924928 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:57.406473+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190455808 unmapped: 44924928 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:58.406699+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2742816 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8ec/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190455808 unmapped: 44924928 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:59.406919+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190455808 unmapped: 44924928 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:00.407141+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190455808 unmapped: 44924928 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:01.407303+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8c000/0x0/0x1bfc00000, data 0x696a8f0/0x6b62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190455808 unmapped: 44924928 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:02.407509+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190464000 unmapped: 44916736 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:03.407682+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2743718 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190464000 unmapped: 44916736 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:04.407905+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8d000/0x0/0x1bfc00000, data 0x696a855/0x6b61000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190464000 unmapped: 44916736 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:05.408186+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190464000 unmapped: 44916736 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:06.408303+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.957878113s of 10.003526688s, submitted: 10
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190472192 unmapped: 44908544 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:07.408447+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8d000/0x0/0x1bfc00000, data 0x696a855/0x6b61000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190472192 unmapped: 44908544 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:08.408604+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2743028 data_alloc: 301989888 data_used: 16670720
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190472192 unmapped: 44908544 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:09.408821+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190472192 unmapped: 44908544 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:10.409016+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8d000/0x0/0x1bfc00000, data 0x696a855/0x6b61000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8d000/0x0/0x1bfc00000, data 0x696a855/0x6b61000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190480384 unmapped: 44900352 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:11.409221+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 heartbeat osd_stat(store_statfs(0x1afc8d000/0x0/0x1bfc00000, data 0x696a855/0x6b61000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
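handle_osd_map is the OSD catching up with the cluster map: a peer offered epochs [291,291], this OSD holds 290, and the source retains the full history [1,291], so the OSD applies the one-epoch increment and everything keyed to the old map gets re-evaluated (the scrub-queue burst below is part of that). A toy model of the epoch bookkeeping only, not Ceph's implementation:

    def handle_osd_map(my_epoch: int, first: int, last: int) -> int:
        """Advance our epoch given an offered inclusive range [first, last].

        Toy model: take anything newer than what we have; a stale or
        duplicate offer (last <= my_epoch) is a no-op, which is exactly
        what happens later when epoch 292 is delivered twice."""
        if last <= my_epoch:
            return my_epoch                # already have it: nothing to do
        start = max(first, my_epoch + 1)   # epochs we still need to apply
        assert start <= last
        return last

    print(handle_osd_map(290, 291, 291))   # -> 291
    print(handle_osd_map(292, 292, 292))   # -> 292 (duplicate delivery, no-op)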
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190496768 unmapped: 44883968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
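The burst above is benign cleanup after the new map: for each pool-6 PG the OSD asks the scrub scheduler to deregister the PG, and "failed. State was: not registered w/ OSD" only means the PG had no scrub-queue entry to remove. Fifteen distinct PGs appear, each logged exactly twice. A small triage filter (mine, purely for log reading) that makes such repetition obvious:

    import collections, re, sys

    # Pipe the scrub-queue lines through this to count occurrences per PG id.
    counts = collections.Counter(
        m.group(1)
        for line in sys.stdin
        if (m := re.search(r"removing pg\[(\d+\.[0-9a-f]+)\]", line)))
    for pg, n in counts.most_common():
        print(pg, n)   # on the burst above: 15 pool-6 PGs, each exactly 2x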
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:12.409473+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 291 heartbeat osd_stat(store_statfs(0x1afc8e000/0x0/0x1bfc00000, data 0x696a7ba/0x6b60000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190496768 unmapped: 44883968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:13.409667+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2746540 data_alloc: 301989888 data_used: 16683008
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190496768 unmapped: 44883968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:14.409868+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 291 heartbeat osd_stat(store_statfs(0x1afc89000/0x0/0x1bfc00000, data 0x696cb35/0x6b64000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 190496768 unmapped: 44883968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:15.410001+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 291 ms_handle_reset con 0x56455f1d6c00 session 0x56455fa974a0
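ms_handle_reset is the messenger telling the OSD that a peer dropped an established connection; the usual response is to discard the session state hanging off that connection (the con/session pointers in the line) and let the peer re-handshake. A toy sketch of that pattern; the dictionary and function are illustrative stand-ins, not Ceph's types:

    sessions = {}  # connection id -> session state bound to that connection

    def ms_handle_reset(con_id: int) -> bool:
        """Drop whatever session was bound to a reset connection.
        Returns True if there was one to drop (toy model)."""
        return sessions.pop(con_id, None) is not None

    sessions[0x56455f1d6c00] = {"session": 0x56455fa974a0}  # ids echo the log line
    print(ms_handle_reset(0x56455f1d6c00))  # True: session state discarded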
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191815680 unmapped: 43565056 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:16.410175+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.883187294s of 10.002672195s, submitted: 317
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 291 heartbeat osd_stat(store_statfs(0x1afc89000/0x0/0x1bfc00000, data 0x696cb35/0x6b64000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 291 heartbeat osd_stat(store_statfs(0x1afc8a000/0x0/0x1bfc00000, data 0x696cb35/0x6b64000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191815680 unmapped: 43565056 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:17.410359+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Got map version 54
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
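handle_mgr_map tells the OSD which ceph-mgr is now active and where to report; the address is an addrvec carrying both the msgr2 (v2, port 6810) and legacy (v1, port 6811) endpoints of the same daemon, distinguished by the trailing nonce. A small parser for the bracketed form as printed here (helper name is mine):

    import re

    ADDRVEC = "[v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]"

    def parse_addrvec(s: str) -> list[tuple[str, str, int, int]]:
        """Split '[v2:ip:port/nonce,v1:ip:port/nonce]' into typed endpoints."""
        return [(proto, ip, int(port), int(nonce))
                for proto, ip, port, nonce
                in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", s)]

    for endpoint in parse_addrvec(ADDRVEC):
        print(endpoint)   # ('v2', '172.18.0.106', 6810, 1193881100), then v1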
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191963136 unmapped: 43417600 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:18.410539+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2745484 data_alloc: 301989888 data_used: 16683008
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191963136 unmapped: 43417600 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:19.428725+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 291 heartbeat osd_stat(store_statfs(0x1afc8a000/0x0/0x1bfc00000, data 0x696cb35/0x6b64000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191963136 unmapped: 43417600 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:20.429028+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191963136 unmapped: 43417600 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:21.429160+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 291 heartbeat osd_stat(store_statfs(0x1afc8a000/0x0/0x1bfc00000, data 0x696cb35/0x6b64000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191987712 unmapped: 43393024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:22.429351+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191987712 unmapped: 43393024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:23.429548+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2749686 data_alloc: 301989888 data_used: 16695296
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191987712 unmapped: 43393024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:24.429801+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191987712 unmapped: 43393024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:25.430030+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191987712 unmapped: 43393024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:26.430184+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.955340385s of 10.001447678s, submitted: 19
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1b0c85000/0x0/0x1bfc00000, data 0x696ed2d/0x6b68000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191987712 unmapped: 43393024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:27.430414+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191987712 unmapped: 43393024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:28.430664+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1b0c85000/0x0/0x1bfc00000, data 0x696ed2d/0x6b68000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2749862 data_alloc: 301989888 data_used: 16695296
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191987712 unmapped: 43393024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:29.430937+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191987712 unmapped: 43393024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1b0c84000/0x0/0x1bfc00000, data 0x696fd81/0x6b6a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:30.431171+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 191987712 unmapped: 43393024 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:31.431310+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192143360 unmapped: 43237376 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:32.431482+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192151552 unmapped: 43229184 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:33.431649+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2755286 data_alloc: 301989888 data_used: 16695296
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192151552 unmapped: 43229184 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:34.431883+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192151552 unmapped: 43229184 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:35.432069+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1b0c60000/0x0/0x1bfc00000, data 0x699392e/0x6b8e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1b0c60000/0x0/0x1bfc00000, data 0x699392e/0x6b8e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [0,1])
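Every heartbeat so far carried op hist [] and the one above carries [0,1]: the first sign of ops actually reaching osd.4 (the statfs fields in the surrounding heartbeats start moving at the same time). I read op hist as a power-of-two histogram of queued-op ages, in the style of Ceph's pow2_hist_t; under that assumption [0,1] means a single op in the second bucket. A decoder for that assumed bucketing:

    def pow2_hist_decode(buckets: list[int]) -> list[tuple[range, int]]:
        """Assumption: bucket 0 counts the value 0, and bucket i > 0 counts
        values in [2**(i-1), 2**i), matching pow2_hist_t-style binning."""
        return [(range(0, 1) if i == 0 else range(2 ** (i - 1), 2 ** i), n)
                for i, n in enumerate(buckets)]

    for r, n in pow2_hist_decode([0, 1]):
        print(f"[{r.start},{r.stop}): {n}")   # [0,1): 0 then [1,2): 1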
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192241664 unmapped: 43139072 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:36.432207+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.919886589s of 10.004167557s, submitted: 13
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193396736 unmapped: 41984000 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:37.432363+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193495040 unmapped: 41885696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:38.432554+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2762974 data_alloc: 301989888 data_used: 16695296
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193495040 unmapped: 41885696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:39.432780+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1b0c15000/0x0/0x1bfc00000, data 0x69de8e3/0x6bd9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193708032 unmapped: 41672704 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:40.433018+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193708032 unmapped: 41672704 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:41.433171+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193708032 unmapped: 41672704 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:42.433422+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193716224 unmapped: 41664512 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:43.433734+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2763666 data_alloc: 301989888 data_used: 16695296
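
The _resize_shards lines show how that ~4.05 GB budget is carved up: ~1.74 GB for the RocksDB block cache (kv), ~285 MB for onodes, ~1.68 GB for other metadata and ~302 MB for data buffers, with actual usage tiny in every pool (kv_used is 2144 bytes against a 1.7 GB allocation). A sketch computing per-pool utilization from one such line (helper name illustrative):

    import re

    def shard_utilization(line):
        """Map each *_alloc/*_used pair in a _resize_shards line to used/alloc."""
        pairs = re.findall(r"(\w+)_alloc: (\d+) \1_used: (\d+)", line)
        return {name: int(used) / int(alloc) for name, alloc, used in pairs}

    line = ("bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: "
            "4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 "
            "kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2762974 "
            "data_alloc: 301989888 data_used: 16695296")
    for name, frac in shard_utilization(line).items():
        print(f"{name}: {frac:.2%}")   # data is the busiest pool at ~5.5%
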
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193716224 unmapped: 41664512 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1b0bc1000/0x0/0x1bfc00000, data 0x6a31e09/0x6c2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:44.433942+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193716224 unmapped: 41664512 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:45.434208+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193716224 unmapped: 41664512 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:46.434385+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.896431923s of 10.003762245s, submitted: 23
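
The _kv_sync_thread utilization lines quantify commit-path load: idle 9.896 s of a 10.004 s reporting window with 23 transactions submitted, i.e. the sync thread is ~99% idle and committing at roughly 2 txn/s, consistent with a near-idle cluster. A quick way to turn these into numbers:

    import re

    UTIL_RE = re.compile(r"idle (?P<idle>[\d.]+)s of (?P<span>[\d.]+)s, submitted: (?P<n>\d+)")

    def kv_sync_load(line):
        """Busy fraction and transaction rate from a _kv_sync_thread utilization line."""
        m = UTIL_RE.search(line)
        if m is None:
            return None
        idle, span, n = float(m["idle"]), float(m["span"]), int(m["n"])
        return 1 - idle / span, n / span

    busy, rate = kv_sync_load("idle 9.896431923s of 10.003762245s, submitted: 23")
    print(f"busy {busy:.1%}, {rate:.1f} txn/s")   # busy 1.1%, 2.3 txn/s
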
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1af9f1000/0x0/0x1bfc00000, data 0x6a61246/0x6c5d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193961984 unmapped: 41418752 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:47.434536+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193961984 unmapped: 41418752 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:48.434690+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2769268 data_alloc: 301989888 data_used: 16695296
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193961984 unmapped: 41418752 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:49.434872+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1af9cc000/0x0/0x1bfc00000, data 0x6a873de/0x6c82000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194068480 unmapped: 41312256 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:50.435057+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1af9b1000/0x0/0x1bfc00000, data 0x6aa2399/0x6c9d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194068480 unmapped: 41312256 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:51.435199+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194068480 unmapped: 41312256 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:52.435394+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192733184 unmapped: 42647552 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:53.435577+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2769632 data_alloc: 301989888 data_used: 16695296
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192733184 unmapped: 42647552 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:54.435728+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1af973000/0x0/0x1bfc00000, data 0x6ae0d6c/0x6cdb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1af973000/0x0/0x1bfc00000, data 0x6ae0d6c/0x6cdb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
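(The heartbeat line above is journaled twice verbatim; whether that is a genuine double emission or journal duplication is not decidable from this excerpt.)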
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192733184 unmapped: 42647552 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:55.435864+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192733184 unmapped: 42647552 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:56.436004+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.862298965s of 10.004086494s, submitted: 29
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192815104 unmapped: 42565632 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:57.436167+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 192815104 unmapped: 42565632 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:58.469192+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2777092 data_alloc: 301989888 data_used: 16695296
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1af92f000/0x0/0x1bfc00000, data 0x6b24482/0x6d1f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193019904 unmapped: 42360832 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:59.469344+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193159168 unmapped: 42221568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:00.469515+0000)
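
Note the clocks: every journal line here is stamped 10:29:20, while the embedded expiry timestamps advance one second per monclient tick from 10:24:38 onward. Taken at face value, and assuming the journal prefix is in UTC like the embedded ISO times plus a year of 2025 (the syslog-style prefix omits it), these messages describe moments several minutes before they were journaled, which looks like buffered container output flushed in a burst rather than a clock problem. A sketch that measures the skew under those stated assumptions:

    import re
    from datetime import datetime

    YEAR = 2025   # assumption: the syslog-style prefix carries no year

    LINE_RE = re.compile(
        r"^(?P<stamp>\w{3} +\d+ \d\d:\d\d:\d\d).*expire after "
        r"(?P<expiry>\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d\.\d+)\+0000")

    def journal_expiry_skew(line):
        """Seconds between the journal timestamp and the embedded expiry time."""
        m = LINE_RE.search(line)
        if m is None:
            return None
        stamped = datetime.strptime(f"{YEAR} {m['stamp']}", "%Y %b %d %H:%M:%S")
        expiry = datetime.strptime(m["expiry"], "%Y-%m-%dT%H:%M:%S.%f")
        return (stamped - expiry).total_seconds()   # both treated as UTC

    line = ("Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: "
            "_check_auth_rotating have uptodate secrets "
            "(they expire after 2025-12-05T10:25:00.469515+0000)")
    print(journal_expiry_skew(line))   # ~259.5 s of apparent buffering
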
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193159168 unmapped: 42221568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:01.469725+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193159168 unmapped: 42221568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:02.469884+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 193093632 unmapped: 42287104 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1af902000/0x0/0x1bfc00000, data 0x6b51397/0x6d4c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:03.470034+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2779012 data_alloc: 301989888 data_used: 16695296
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194224128 unmapped: 41156608 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:04.470226+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194224128 unmapped: 41156608 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:05.470397+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1af8e6000/0x0/0x1bfc00000, data 0x6b6c3ed/0x6d68000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194224128 unmapped: 41156608 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.866591454s of 10.001554489s, submitted: 24
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:06.470522+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194224128 unmapped: 41156608 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:07.470677+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194363392 unmapped: 41017344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:08.470766+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2787084 data_alloc: 301989888 data_used: 16695296
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194363392 unmapped: 41017344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:09.471022+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194363392 unmapped: 41017344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1af885000/0x0/0x1bfc00000, data 0x6bcd6f6/0x6dc9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [0,0,0,1])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:10.471136+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194363392 unmapped: 41017344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:11.471275+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 heartbeat osd_stat(store_statfs(0x1af861000/0x0/0x1bfc00000, data 0x6bf0c12/0x6ded000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194363392 unmapped: 41017344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:12.471457+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194363392 unmapped: 41017344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:13.471661+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2786870 data_alloc: 301989888 data_used: 16695296
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194363392 unmapped: 41017344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:14.471897+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194363392 unmapped: 41017344 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:15.472047+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.856023788s of 10.000191689s, submitted: 32
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194469888 unmapped: 40910848 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:16.472171+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
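
handle_osd_map marks the moment osd.4 consumes a new OSDMap: the source advertises epochs [293,293] (and holds the full history [1,293]) while the OSD still has 292, and the heartbeat lines that follow switch from "osd.4 292" to "osd.4 293" once the map is applied. Extracting the lag is a one-regex job:

    import re

    MAP_RE = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)")

    def epoch_lag(line):
        """Epochs between what the peer advertises and what this OSD holds."""
        m = MAP_RE.search(line)
        if m is None:
            return None
        first, last, have = map(int, m.groups())
        return max(0, last - have)

    print(epoch_lag("osd.4 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]"))  # 1
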
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
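
The burst of scrub-queue::remove_from_osd_queue warnings coincides with the map change: as each pool-6 PG is reset for the new epoch, the OSD tries to pull it out of the scrub queue, and "not registered w/ OSD" means it was not queued in the first place, so each failure reads as a harmless no-op rather than an error (in this burst every PG appears exactly twice, suggesting two call sites). A tally like the following makes the pattern obvious; the helper is illustrative:

    import re
    from collections import Counter

    PG_RE = re.compile(r"remove_from_osd_queue removing pg\[([0-9a-f.]+)\] failed")

    def count_scrub_removals(lines):
        """Count 'not registered' removal attempts per PG id."""
        return Counter(m.group(1) for line in lines if (m := PG_RE.search(line)))

    burst = [
        "osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD",
        "osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD",
        "osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD",
    ]
    print(count_scrub_removals(burst))   # Counter({'6.1d': 2, '6.15': 1})
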
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194478080 unmapped: 40902656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 293 heartbeat osd_stat(store_statfs(0x1af7f2000/0x0/0x1bfc00000, data 0x6c61b5a/0x6e5c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:17.472408+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194478080 unmapped: 40902656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:18.472565+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2795474 data_alloc: 301989888 data_used: 16707584
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 293 heartbeat osd_stat(store_statfs(0x1af7ee000/0x0/0x1bfc00000, data 0x6c63e3a/0x6e5f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194478080 unmapped: 40902656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:19.472747+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194478080 unmapped: 40902656 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:20.472878+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194412544 unmapped: 40968192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:21.472995+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194412544 unmapped: 40968192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:22.473102+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194412544 unmapped: 40968192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:23.473233+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2792826 data_alloc: 301989888 data_used: 16707584
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:24.473413+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194412544 unmapped: 40968192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 293 heartbeat osd_stat(store_statfs(0x1af7ee000/0x0/0x1bfc00000, data 0x6c64358/0x6e60000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:25.473586+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194412544 unmapped: 40968192 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 293 heartbeat osd_stat(store_statfs(0x1af7ee000/0x0/0x1bfc00000, data 0x6c64358/0x6e60000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:26.473738+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194486272 unmapped: 40894464 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 293 heartbeat osd_stat(store_statfs(0x1af7ee000/0x0/0x1bfc00000, data 0x6c64358/0x6e60000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 293 handle_osd_map epochs [294,294], i have 294, src has [1,294]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.092788696s of 10.216505051s, submitted: 51
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:27.473878+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194502656 unmapped: 40878080 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:28.473996+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194502656 unmapped: 40878080 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2800724 data_alloc: 301989888 data_used: 16719872
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:29.474161+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194576384 unmapped: 40804352 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 294 heartbeat osd_stat(store_statfs(0x1af7c1000/0x0/0x1bfc00000, data 0x6c8e754/0x6e8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:30.474448+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194576384 unmapped: 40804352 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 294 heartbeat osd_stat(store_statfs(0x1af7b0000/0x0/0x1bfc00000, data 0x6ca0b01/0x6e9e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:31.474625+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194707456 unmapped: 40673280 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:32.474816+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 194707456 unmapped: 40673280 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:33.474989+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195837952 unmapped: 39542784 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2808224 data_alloc: 301989888 data_used: 16719872
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:34.475181+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195837952 unmapped: 39542784 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:35.475382+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195837952 unmapped: 39542784 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:36.475609+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196067328 unmapped: 39313408 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 294 heartbeat osd_stat(store_statfs(0x1af76c000/0x0/0x1bfc00000, data 0x6ce403f/0x6ee2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.917705536s of 10.135392189s, submitted: 40
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:37.475771+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196075520 unmapped: 39305216 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:38.476114+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195264512 unmapped: 40116224 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2808580 data_alloc: 301989888 data_used: 16736256
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:39.476394+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195264512 unmapped: 40116224 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 295 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:40.476556+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195272704 unmapped: 40108032 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 295 heartbeat osd_stat(store_statfs(0x1af70e000/0x0/0x1bfc00000, data 0x6d406b9/0x6f40000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:41.476852+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195272704 unmapped: 40108032 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 295 heartbeat osd_stat(store_statfs(0x1af6da000/0x0/0x1bfc00000, data 0x6d7561b/0x6f74000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:42.477042+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195272704 unmapped: 40108032 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:43.477239+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195272704 unmapped: 40108032 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2815200 data_alloc: 301989888 data_used: 16736256
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:44.477367+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195469312 unmapped: 39911424 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:45.477529+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195469312 unmapped: 39911424 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 295 heartbeat osd_stat(store_statfs(0x1af6da000/0x0/0x1bfc00000, data 0x6d7561b/0x6f74000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:46.477645+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195534848 unmapped: 39845888 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.857649803s of 10.099657059s, submitted: 60
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:47.477851+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195543040 unmapped: 39837696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:48.478014+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195543040 unmapped: 39837696 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816490 data_alloc: 301989888 data_used: 16748544
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 296 heartbeat osd_stat(store_statfs(0x1af6d5000/0x0/0x1bfc00000, data 0x6d77813/0x6f78000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:49.478215+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195551232 unmapped: 39829504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:50.478447+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195551232 unmapped: 39829504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:51.478776+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195551232 unmapped: 39829504 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:52.479039+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195559424 unmapped: 39821312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 297 heartbeat osd_stat(store_statfs(0x1af6d2000/0x0/0x1bfc00000, data 0x6d79b8e/0x6f7c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:53.479371+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195559424 unmapped: 39821312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2818612 data_alloc: 301989888 data_used: 16748544
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:54.479575+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195559424 unmapped: 39821312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:55.480046+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195559424 unmapped: 39821312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:56.480511+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195559424 unmapped: 39821312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:57.481097+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195559424 unmapped: 39821312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:58.481561+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195559424 unmapped: 39821312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2818612 data_alloc: 301989888 data_used: 16748544
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 297 heartbeat osd_stat(store_statfs(0x1af6d2000/0x0/0x1bfc00000, data 0x6d79b8e/0x6f7c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:59.482059+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195559424 unmapped: 39821312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:00.482423+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195559424 unmapped: 39821312 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:01.482740+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195567616 unmapped: 39813120 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 14.739822388s of 14.911304474s, submitted: 51
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:02.483031+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195584000 unmapped: 39796736 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:03.483244+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195584000 unmapped: 39796736 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2822638 data_alloc: 301989888 data_used: 16760832
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:04.483427+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195600384 unmapped: 39780352 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:05.483700+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195600384 unmapped: 39780352 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:06.483886+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195600384 unmapped: 39780352 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:07.484073+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195600384 unmapped: 39780352 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:08.484211+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195600384 unmapped: 39780352 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2822638 data_alloc: 301989888 data_used: 16760832
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:09.484381+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195608576 unmapped: 39772160 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:10.484551+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195608576 unmapped: 39772160 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:11.484682+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195616768 unmapped: 39763968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:12.484875+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195616768 unmapped: 39763968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:13.485149+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195616768 unmapped: 39763968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2822638 data_alloc: 301989888 data_used: 16760832
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:14.485353+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195616768 unmapped: 39763968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:15.485495+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195616768 unmapped: 39763968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:16.485804+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195616768 unmapped: 39763968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:17.486103+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195616768 unmapped: 39763968 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:18.486421+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195624960 unmapped: 39755776 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2822798 data_alloc: 301989888 data_used: 16764928
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:19.486751+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195624960 unmapped: 39755776 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:20.486900+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195624960 unmapped: 39755776 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:21.487069+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195624960 unmapped: 39755776 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:22.487220+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195624960 unmapped: 39755776 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:23.487386+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195624960 unmapped: 39755776 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2822798 data_alloc: 301989888 data_used: 16764928
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:24.487618+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195624960 unmapped: 39755776 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:25.487790+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195624960 unmapped: 39755776 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:26.488005+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195624960 unmapped: 39755776 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:27.488202+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195633152 unmapped: 39747584 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:28.488388+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195633152 unmapped: 39747584 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2822798 data_alloc: 301989888 data_used: 16764928
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:29.488608+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195633152 unmapped: 39747584 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:30.488763+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195633152 unmapped: 39747584 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:31.488990+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195633152 unmapped: 39747584 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:32.489134+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195633152 unmapped: 39747584 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 heartbeat osd_stat(store_statfs(0x1af6cd000/0x0/0x1bfc00000, data 0x6d7bd86/0x6f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:33.489292+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x564564845400
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 31.900314331s of 31.918132782s, submitted: 28
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 204038144 unmapped: 31342592 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2931614 data_alloc: 301989888 data_used: 16764928
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:34.489451+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195977216 unmapped: 39403520 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 ms_handle_reset con 0x564564845400 session 0x564564838960
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: handle_auth_request added challenge on 0x56455f1d6c00
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _renew_subs
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:35.489630+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195977216 unmapped: 39403520 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:36.491048+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 300 ms_handle_reset con 0x56455f1d6c00 session 0x564560ea21e0
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196001792 unmapped: 39378944 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:37.491187+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 300 heartbeat osd_stat(store_statfs(0x1af6c4000/0x0/0x1bfc00000, data 0x6d803fe/0x6f88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196001792 unmapped: 39378944 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:38.491328+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196001792 unmapped: 39378944 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2832338 data_alloc: 301989888 data_used: 16777216
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:39.491551+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 300 heartbeat osd_stat(store_statfs(0x1af6c4000/0x0/0x1bfc00000, data 0x6d803fe/0x6f88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196001792 unmapped: 39378944 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:40.491648+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196001792 unmapped: 39378944 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:41.491797+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196001792 unmapped: 39378944 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:42.491943+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196009984 unmapped: 39370752 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:43.492127+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196009984 unmapped: 39370752 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2832338 data_alloc: 301989888 data_used: 16777216
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:44.492360+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196009984 unmapped: 39370752 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 300 heartbeat osd_stat(store_statfs(0x1af6c4000/0x0/0x1bfc00000, data 0x6d803fe/0x6f88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:45.492570+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196009984 unmapped: 39370752 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:46.492801+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.715038300s of 12.906694412s, submitted: 20
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196018176 unmapped: 39362560 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c1000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:47.492998+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196018176 unmapped: 39362560 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:48.493193+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196018176 unmapped: 39362560 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834668 data_alloc: 301989888 data_used: 16777216
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:49.493447+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196018176 unmapped: 39362560 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:50.493630+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196026368 unmapped: 39354368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:51.493798+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196026368 unmapped: 39354368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c1000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:52.493981+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196026368 unmapped: 39354368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:53.494188+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c1000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196026368 unmapped: 39354368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834668 data_alloc: 301989888 data_used: 16777216
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:54.494427+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196026368 unmapped: 39354368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:55.494683+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196026368 unmapped: 39354368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:56.494888+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196026368 unmapped: 39354368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:57.495225+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196026368 unmapped: 39354368 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:58.495481+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834668 data_alloc: 301989888 data_used: 16777216
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:59.495772+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c1000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:00.495987+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:01.496156+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:02.496341+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:03.496605+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:04.496848+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834668 data_alloc: 301989888 data_used: 16777216
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:05.498457+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c1000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:06.498643+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:07.498852+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:08.499114+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:09.499304+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834668 data_alloc: 301989888 data_used: 16777216
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c1000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:10.499585+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196034560 unmapped: 39346176 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:11.499854+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196042752 unmapped: 39337984 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:12.500083+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196042752 unmapped: 39337984 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c1000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:13.500265+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196042752 unmapped: 39337984 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:14.500464+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834668 data_alloc: 301989888 data_used: 16777216
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196042752 unmapped: 39337984 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 28.499053955s of 28.547760010s, submitted: 14
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 ms_handle_reset con 0x564560d01c00 session 0x564564839860
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Got map version 55
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:15.500650+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:16.500828+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:17.501020+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:18.501330+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:19.501622+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:20.501843+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:21.502080+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:22.502284+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:23.502446+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:24.502706+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:25.502920+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:26.503148+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:27.503306+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196157440 unmapped: 39223296 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:28.503509+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196165632 unmapped: 39215104 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:29.503767+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196165632 unmapped: 39215104 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:30.504003+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196165632 unmapped: 39215104 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:31.504201+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196165632 unmapped: 39215104 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:32.504402+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196165632 unmapped: 39215104 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:33.504611+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196165632 unmapped: 39215104 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:34.504840+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196165632 unmapped: 39215104 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:35.505047+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196165632 unmapped: 39215104 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:36.505243+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196173824 unmapped: 39206912 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:37.505542+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196173824 unmapped: 39206912 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:38.505693+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196173824 unmapped: 39206912 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:39.506041+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196173824 unmapped: 39206912 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:40.506333+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196173824 unmapped: 39206912 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:41.506506+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196173824 unmapped: 39206912 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:42.506703+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196173824 unmapped: 39206912 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:43.506860+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196173824 unmapped: 39206912 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:44.507044+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196182016 unmapped: 39198720 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:45.507279+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196182016 unmapped: 39198720 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:46.507553+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196182016 unmapped: 39198720 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:47.507758+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196182016 unmapped: 39198720 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:48.508011+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196182016 unmapped: 39198720 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:49.508329+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196182016 unmapped: 39198720 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:50.508562+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196182016 unmapped: 39198720 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:51.508760+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196182016 unmapped: 39198720 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:52.509105+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196190208 unmapped: 39190528 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:53.509316+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196190208 unmapped: 39190528 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:54.509489+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196190208 unmapped: 39190528 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:55.509715+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196190208 unmapped: 39190528 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:56.510006+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196190208 unmapped: 39190528 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:57.510203+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196190208 unmapped: 39190528 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:58.510359+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196190208 unmapped: 39190528 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:59.510935+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196190208 unmapped: 39190528 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:00.511785+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196198400 unmapped: 39182336 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:01.512386+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196198400 unmapped: 39182336 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:02.513134+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196198400 unmapped: 39182336 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:03.513685+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196198400 unmapped: 39182336 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:04.514008+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196198400 unmapped: 39182336 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:05.514271+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196198400 unmapped: 39182336 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:06.514430+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196198400 unmapped: 39182336 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:07.514588+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196198400 unmapped: 39182336 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:08.514861+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196206592 unmapped: 39174144 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:09.515054+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196206592 unmapped: 39174144 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:10.515308+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196206592 unmapped: 39174144 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:11.515642+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196206592 unmapped: 39174144 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:12.516014+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196206592 unmapped: 39174144 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:13.516344+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196206592 unmapped: 39174144 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:14.519500+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196206592 unmapped: 39174144 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:15.519821+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196206592 unmapped: 39174144 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:16.520115+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196214784 unmapped: 39165952 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:17.520413+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196214784 unmapped: 39165952 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:18.520782+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196214784 unmapped: 39165952 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:19.521032+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196214784 unmapped: 39165952 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:20.521271+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196214784 unmapped: 39165952 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:21.521513+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196214784 unmapped: 39165952 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:22.521676+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196214784 unmapped: 39165952 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:23.521855+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196214784 unmapped: 39165952 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:24.522048+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 301989888 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196222976 unmapped: 39157760 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:25.522188+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196222976 unmapped: 39157760 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:26.522380+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:27.522618+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:28.522865+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:29.523036+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 285212672 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:30.523268+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:31.523798+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:32.524156+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:33.524405+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:34.524542+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 285212672 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:35.525109+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:36.525336+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:37.525511+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:38.525886+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:39.526015+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 285212672 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:40.526260+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196231168 unmapped: 39149568 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:41.526500+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196239360 unmapped: 39141376 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:42.526652+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196239360 unmapped: 39141376 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:43.526770+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196239360 unmapped: 39141376 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:44.526881+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196239360 unmapped: 39141376 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 285212672 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:45.527025+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196239360 unmapped: 39141376 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:46.527145+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196239360 unmapped: 39141376 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:47.527263+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 196337664 unmapped: 39043072 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: do_command 'config diff' '{prefix=config diff}'
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: do_command 'config show' '{prefix=config show}'
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:48.527374+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195772416 unmapped: 39608320 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:49.527527+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: prioritycache tune_memory target: 5709084876 mapped: 195534848 unmapped: 39845888 heap: 235380736 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: bluestore.MempoolThread(0x56455d913b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2834124 data_alloc: 285212672 data_used: 16781312
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: osd.4 301 heartbeat osd_stat(store_statfs(0x1af6c2000/0x0/0x1bfc00000, data 0x6d825f6/0x6f8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x95af9b7), peers [0,1,2,3,5] op hist [])
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: tick
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_tickets
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:50.527657+0000)
Dec 05 10:29:20 np0005546420.localdomain ceph-osd[32907]: do_command 'log dump' '{prefix=log dump}'
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1875169573' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 05 10:29:21 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:21.147 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.59419 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.49740 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2538132355' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1065335361' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.69623 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2449392201' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.59437 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.49746 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1156651427' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3438603613' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.69638 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2495029872' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1875169573' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 05 10:29:21 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/655590175' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 05 10:29:21 np0005546420.localdomain sshd[337545]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3473533840' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.49755 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.49761 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3958671327' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.69656 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3592232353' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.59464 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.49773 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/655590175' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.69671 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2979823156' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: pgmap v1041: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.59479 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.69680 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/144106489' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3473533840' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/760792570' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 05 10:29:22 np0005546420.localdomain crontab[337671]: (root) LIST (root)
Dec 05 10:29:22 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:22.465 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 05 10:29:22 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/994206404' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/176700523' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.69686 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.49794 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.59497 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3849043050' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.69698 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/994206404' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1991560768' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.59512 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3861348710' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.69710 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/176700523' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3884987977' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 05 10:29:23 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/596828585' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain sshd[337545]: Connection reset by authenticating user root 45.140.17.124 port 20740 [preauth]
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2340551787' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2460320278' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain sshd[337904]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.49812 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3884987977' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.59542 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/596828585' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1897303923' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: pgmap v1042: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.69746 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3298743897' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3055037170' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2340551787' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2460320278' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/722084721' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2804643855' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3850272669' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2904466438' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 05 10:29:24 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2095697973' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1228995446' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1451638519' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2939213288' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2804643855' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3850272669' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2426293403' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/508782084' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/904263691' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2977406459' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2904466438' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2095697973' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1077270412' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2564239001' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:53.003670+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56235400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee562e4000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 117 ms_handle_reset con 0x55ee562e4000 session 0x55ee5632ad20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57063400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:54.003887+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 126877696 unmapped: 32980992 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 118 ms_handle_reset con 0x55ee57063400 session 0x55ee562aa960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:55.004024+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118562816 unmapped: 41295872 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:56.004157+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116367360 unmapped: 43491328 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 119 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee55d4fc20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 119 heartbeat osd_stat(store_statfs(0x1b659e000/0x0/0x1bfc00000, data 0x503258e/0x50ef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1496880 data_alloc: 285212672 data_used: 3825664
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:57.004277+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116391936 unmapped: 43466752 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 119 heartbeat osd_stat(store_statfs(0x1b5d9f000/0x0/0x1bfc00000, data 0x5832076/0x58ed000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 120 ms_handle_reset con 0x55ee56235400 session 0x55ee5632a5a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:58.004424+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116432896 unmapped: 43425792 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53cd3800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:05:59.004558+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 121 ms_handle_reset con 0x55ee53cd3800 session 0x55ee55b3ad20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116285440 unmapped: 43573248 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 121 heartbeat osd_stat(store_statfs(0x1b4d99000/0x0/0x1bfc00000, data 0x683670a/0x68f4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.642805099s of 10.076757431s, submitted: 391
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee562e4000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:00.004746+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116318208 unmapped: 43540480 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 122 ms_handle_reset con 0x55ee562e4000 session 0x55ee55e54000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 122 heartbeat osd_stat(store_statfs(0x1b8d95000/0x0/0x1bfc00000, data 0x2838a47/0x28f6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:01.005159+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116359168 unmapped: 43499520 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1186945 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:02.005318+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116432896 unmapped: 43425792 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:03.005434+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116432896 unmapped: 43425792 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 123 heartbeat osd_stat(store_statfs(0x1b8d91000/0x0/0x1bfc00000, data 0x283a86b/0x28f9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:04.005592+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116432896 unmapped: 43425792 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 123 heartbeat osd_stat(store_statfs(0x1b8d91000/0x0/0x1bfc00000, data 0x283a86b/0x28f9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:05.005784+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116432896 unmapped: 43425792 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:06.006018+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116301824 unmapped: 43556864 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:07.006192+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116301824 unmapped: 43556864 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:08.006362+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116301824 unmapped: 43556864 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:09.006525+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116301824 unmapped: 43556864 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:10.006685+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116310016 unmapped: 43548672 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:11.006892+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116310016 unmapped: 43548672 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:12.007042+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116310016 unmapped: 43548672 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:13.007214+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116310016 unmapped: 43548672 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:14.007460+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116310016 unmapped: 43548672 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:15.007683+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116310016 unmapped: 43548672 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:16.007855+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116310016 unmapped: 43548672 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:17.008058+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116326400 unmapped: 43532288 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:18.008225+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116326400 unmapped: 43532288 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:19.008454+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116334592 unmapped: 43524096 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:20.008641+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116334592 unmapped: 43524096 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:21.008841+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116334592 unmapped: 43524096 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:22.009044+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116334592 unmapped: 43524096 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:23.009243+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116334592 unmapped: 43524096 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:24.009468+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116334592 unmapped: 43524096 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:25.009614+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116334592 unmapped: 43524096 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:26.009776+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116334592 unmapped: 43524096 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:27.009921+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116334592 unmapped: 43524096 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:28.010048+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116342784 unmapped: 43515904 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:29.010236+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116342784 unmapped: 43515904 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:30.010474+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116342784 unmapped: 43515904 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:31.010643+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116342784 unmapped: 43515904 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:32.010808+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116342784 unmapped: 43515904 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:33.011030+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116359168 unmapped: 43499520 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:34.011231+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116367360 unmapped: 43491328 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:35.011406+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116383744 unmapped: 43474944 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:36.011574+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116383744 unmapped: 43474944 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:37.011842+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116383744 unmapped: 43474944 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets getting new tickets!
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:38.012148+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _finish_auth 0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:38.013713+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116383744 unmapped: 43474944 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:39.012353+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116383744 unmapped: 43474944 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:40.012583+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116383744 unmapped: 43474944 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:41.012757+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116383744 unmapped: 43474944 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:42.012946+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116383744 unmapped: 43474944 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:43.013187+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:44.013484+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:45.013824+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:46.013984+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:47.014161+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:48.014355+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:49.014537+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:50.014709+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:51.014854+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:52.015025+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:53.015146+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:54.015356+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:55.015497+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:56.015660+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116408320 unmapped: 43450368 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:57.015859+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 43433984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:58.016060+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 43433984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:06:59.016212+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 43433984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:00.016391+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 43433984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:01.016560+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 43433984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:02.016750+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 43433984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:03.016900+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 43433984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:04.017148+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 43433984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:05.017345+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 43433984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:06.017496+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 43433984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:07.017655+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116449280 unmapped: 43409408 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:08.017905+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116449280 unmapped: 43409408 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:09.018054+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116449280 unmapped: 43409408 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:10.018239+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116449280 unmapped: 43409408 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:11.018402+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116449280 unmapped: 43409408 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:12.018597+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116449280 unmapped: 43409408 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:13.018750+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116457472 unmapped: 43401216 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:14.018931+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116457472 unmapped: 43401216 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:15.019049+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116465664 unmapped: 43393024 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:16.019217+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116465664 unmapped: 43393024 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:17.019384+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116465664 unmapped: 43393024 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:18.019536+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116465664 unmapped: 43393024 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:19.019740+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116465664 unmapped: 43393024 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:20.019920+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116465664 unmapped: 43393024 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:21.020104+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116465664 unmapped: 43393024 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188411 data_alloc: 285212672 data_used: 3837952
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:22.020299+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116473856 unmapped: 43384832 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:23.020465+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee562fac00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 83.791275024s of 84.024917603s, submitted: 89
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b8d90000/0x0/0x1bfc00000, data 0x283ca83/0x28fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115802112 unmapped: 44056576 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 125 ms_handle_reset con 0x55ee562fac00 session 0x55ee55e55680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:24.020686+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115851264 unmapped: 44007424 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 125 heartbeat osd_stat(store_statfs(0x1b8d8a000/0x0/0x1bfc00000, data 0x283f5b8/0x2903000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 126 heartbeat osd_stat(store_statfs(0x1b8d8a000/0x0/0x1bfc00000, data 0x283f5b8/0x2903000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:25.020860+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 126 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee55e545a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 43950080 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 126 heartbeat osd_stat(store_statfs(0x1b8d85000/0x0/0x1bfc00000, data 0x28410fb/0x2905000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:26.021059+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 43950080 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1197509 data_alloc: 285212672 data_used: 3850240
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:27.021300+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 43950080 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:28.021487+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 43950080 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:29.021691+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 43950080 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:30.021850+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 126 heartbeat osd_stat(store_statfs(0x1b8d85000/0x0/0x1bfc00000, data 0x28410fb/0x2905000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 43950080 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 126 heartbeat osd_stat(store_statfs(0x1b8d85000/0x0/0x1bfc00000, data 0x28410fb/0x2905000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 126 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 126 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:31.022059+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 127 heartbeat osd_stat(store_statfs(0x1b8d85000/0x0/0x1bfc00000, data 0x28410fb/0x2905000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1197737 data_alloc: 285212672 data_used: 3850240
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:32.022224+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 127 heartbeat osd_stat(store_statfs(0x1b8d84000/0x0/0x1bfc00000, data 0x28432f3/0x2909000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:33.022371+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:34.022599+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:35.022797+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:36.023110+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 127 heartbeat osd_stat(store_statfs(0x1b8d84000/0x0/0x1bfc00000, data 0x28432f3/0x2909000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1197737 data_alloc: 285212672 data_used: 3850240
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:37.024782+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:38.024983+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:39.025393+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:40.025782+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:41.025924+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1197737 data_alloc: 285212672 data_used: 3850240
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:42.026119+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 127 heartbeat osd_stat(store_statfs(0x1b8d84000/0x0/0x1bfc00000, data 0x28432f3/0x2909000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:43.026282+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:44.026885+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:45.027025+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115916800 unmapped: 43941888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 127 heartbeat osd_stat(store_statfs(0x1b8d84000/0x0/0x1bfc00000, data 0x28432f3/0x2909000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:46.027190+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115933184 unmapped: 43925504 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1197737 data_alloc: 285212672 data_used: 3850240
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:47.027343+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.932479858s of 24.078664780s, submitted: 53
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115957760 unmapped: 43900928 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:48.027590+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53cd3800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 129 ms_handle_reset con 0x55ee53cd3800 session 0x55ee55e541e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115982336 unmapped: 43876352 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 129 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x28479dd/0x2913000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:49.027897+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115982336 unmapped: 43876352 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56235400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:50.028057+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 129 ms_handle_reset con 0x55ee56235400 session 0x55ee55e550e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115982336 unmapped: 43876352 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:51.028306+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 115982336 unmapped: 43876352 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1207630 data_alloc: 285212672 data_used: 3850240
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:52.028454+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116015104 unmapped: 43843584 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:53.028599+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116031488 unmapped: 43827200 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:54.028839+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 129 heartbeat osd_stat(store_statfs(0x1b8d7c000/0x0/0x1bfc00000, data 0x284797b/0x2912000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116047872 unmapped: 43810816 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:55.029039+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116047872 unmapped: 43810816 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:56.029186+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116129792 unmapped: 43728896 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1212168 data_alloc: 285212672 data_used: 3862528
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:57.029322+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116129792 unmapped: 43728896 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:58.029505+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee562e4000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.485318184s of 10.754067421s, submitted: 91
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 ms_handle_reset con 0x55ee562e4000 session 0x55ee53d4f4a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116129792 unmapped: 43728896 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:07:59.029668+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d78000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
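
The heartbeat line embeds the OSD's store statistics as hex byte counts. Reading the first triple as available / internally reserved / total (the field order printed by store_statfs_t; treat that ordering as an assumption if you are matching a different Ceph release), this is a ~7.0 GiB device with ~6.9 GiB free, holding ~40 MiB of object data:

    # Decode the hex byte counts from one heartbeat osd_stat line.
    GiB, MiB = 2**30, 2**20

    statfs = {
        "available":      0x1b8d78000,
        "reserved":       0x0,
        "total":          0x1bfc00000,
        "data_stored":    0x2849b63,
        "data_allocated": 0x2915000,
        "omap":           0x649,
        "meta":           0x456f9b7,
    }

    print(f"{statfs['available'] / GiB:.2f} GiB free of "
          f"{statfs['total'] / GiB:.2f} GiB; data "
          f"{statfs['data_stored'] / MiB:.1f} MiB stored in "
          f"{statfs['data_allocated'] / MiB:.1f} MiB allocated")
    # -> 6.89 GiB free of 7.00 GiB; data 40.3 MiB stored in 41.1 MiB allocated
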
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116137984 unmapped: 43720704 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:00.029852+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116137984 unmapped: 43720704 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:01.029992+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116137984 unmapped: 43720704 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1213667 data_alloc: 285212672 data_used: 3862528
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:02.030184+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116154368 unmapped: 43704320 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:03.030407+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116154368 unmapped: 43704320 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:04.030645+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116154368 unmapped: 43704320 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:05.030847+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116154368 unmapped: 43704320 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:06.031088+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116154368 unmapped: 43704320 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1213098 data_alloc: 285212672 data_used: 3862528
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:07.031349+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116154368 unmapped: 43704320 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:08.036417+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116162560 unmapped: 43696128 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:09.036573+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116162560 unmapped: 43696128 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:10.037125+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116170752 unmapped: 43687936 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:11.038014+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116170752 unmapped: 43687936 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1213098 data_alloc: 285212672 data_used: 3862528
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:12.039151+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116170752 unmapped: 43687936 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:13.039740+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116170752 unmapped: 43687936 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:14.066112+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116170752 unmapped: 43687936 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:15.066278+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116170752 unmapped: 43687936 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:16.066894+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116170752 unmapped: 43687936 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1213098 data_alloc: 285212672 data_used: 3862528
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:17.067523+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116170752 unmapped: 43687936 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:18.067837+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:19.068281+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:20.068567+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:21.068821+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1213098 data_alloc: 285212672 data_used: 3862528
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:22.069162+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:23.069449+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:24.069856+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:25.070003+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:26.070279+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1213098 data_alloc: 285212672 data_used: 3862528
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:27.070648+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:28.070878+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:29.071038+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:30.071297+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:31.071544+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1213098 data_alloc: 285212672 data_used: 3862528
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:32.071668+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116178944 unmapped: 43679744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:33.071806+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116187136 unmapped: 43671552 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:34.072133+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116187136 unmapped: 43671552 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:35.072350+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116187136 unmapped: 43671552 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:36.072533+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b8d79000/0x0/0x1bfc00000, data 0x2849b63/0x2915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116187136 unmapped: 43671552 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1213098 data_alloc: 285212672 data_used: 3862528
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:37.072786+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee562fac00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 39.483917236s of 39.643508911s, submitted: 38
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116187136 unmapped: 43671552 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:38.073021+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 131 ms_handle_reset con 0x55ee562fac00 session 0x55ee560c85a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 131 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee560c90e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53cd3800
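
"handle_auth_request added challenge on 0x..." is the server side of cephx connection authentication: the OSD attaches a one-time challenge to the half-open connection, and the peer must fold that challenge into its authorizer before the session is accepted. The hex values are Connection addresses, which is why the same pointer reappears in the matching ms_handle_reset once the short-lived connection closes. A schematic challenge/response step (not the cephx wire protocol, which uses its own ciphers and framing):

    import os, hashlib

    def add_challenge(pending: dict, conn_id: str) -> bytes:
        """Attach a one-time challenge to a half-open connection."""
        challenge = os.urandom(16)
        pending[conn_id] = challenge
        print(f"handle_auth_request added challenge on {conn_id}")
        return challenge

    def verify(pending: dict, conn_id: str, proof: bytes, key: bytes) -> bool:
        """Peer proves it saw the challenge by hashing it with the shared key."""
        expected = hashlib.sha256(key + pending.pop(conn_id)).digest()
        return proof == expected

    pending, key = {}, b"shared-secret"
    ch = add_challenge(pending, "0x55ee53cd3800")
    proof = hashlib.sha256(key + ch).digest()       # computed by the peer
    assert verify(pending, "0x55ee53cd3800", proof, key)
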
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116244480 unmapped: 43614208 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:39.073212+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 131 ms_handle_reset con 0x55ee53cd3800 session 0x55ee539bc1e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56235400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116252672 unmapped: 43606016 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:40.073355+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 132 ms_handle_reset con 0x55ee56235400 session 0x55ee539bc960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116260864 unmapped: 43597824 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:41.073488+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee562e4000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 132 heartbeat osd_stat(store_statfs(0x1b8d6a000/0x0/0x1bfc00000, data 0x284e600/0x2922000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116260864 unmapped: 43597824 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:42.073642+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1229263 data_alloc: 285212672 data_used: 3874816
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 43565056 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:43.073773+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 133 ms_handle_reset con 0x55ee566b6800 session 0x55ee565d43c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 133 ms_handle_reset con 0x55ee562e4000 session 0x55ee5223fe00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116301824 unmapped: 43556864 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:44.073988+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116375552 unmapped: 43483136 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:45.074129+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 134 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee565d52c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53cd3800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 134 ms_handle_reset con 0x55ee53cd3800 session 0x55ee565d4000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56235400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 134 ms_handle_reset con 0x55ee56235400 session 0x55ee56bd7860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 116391936 unmapped: 43466752 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:46.074279+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 134 ms_handle_reset con 0x55ee566b6800 session 0x55ee56bd9e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53b27400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 134 ms_handle_reset con 0x55ee53b27400 session 0x55ee566c34a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 117399552 unmapped: 42459136 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:47.076065+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1341544 data_alloc: 285212672 data_used: 3874816
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 134 heartbeat osd_stat(store_statfs(0x1b7c4f000/0x0/0x1bfc00000, data 0x35688c5/0x363d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:48.076221+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 117473280 unmapped: 42385408 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.406002998s of 11.074691772s, submitted: 149
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:49.076365+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 117350400 unmapped: 42508288 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 134 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee56bd72c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 134 heartbeat osd_stat(store_statfs(0x1b84e8000/0x0/0x1bfc00000, data 0x2852853/0x2925000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:50.076486+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 117350400 unmapped: 42508288 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:51.076632+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 117350400 unmapped: 42508288 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
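
"_renew_subs" refreshes this daemon's monitor subscriptions (the standing request for new OSDMaps and other cluster updates), and "_send_mon_message" shows the target: mon.np0005546420 at v2:172.18.0.104:3300/0. The v2: prefix and port 3300 mark Ceph's msgr2 protocol (legacy msgr1 monitors listen on 6789 with a v1: prefix), and the trailing /0 is the address nonce used to tell apart instances behind the same ip:port. Splitting such an entity address string:

    import re

    addr = "v2:172.18.0.104:3300/0"
    m = re.fullmatch(r"(v[12]):([\d.]+):(\d+)/(\d+)", addr)
    proto, ip, port, nonce = m[1], m[2], int(m[3]), int(m[4])
    print(f"protocol={proto} ip={ip} port={port} nonce={nonce}")
    # -> protocol=v2 ip=172.18.0.104 port=3300 nonce=0
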
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:52.076773+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 117391360 unmapped: 42467328 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1244832 data_alloc: 285212672 data_used: 3887104
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 135 heartbeat osd_stat(store_statfs(0x1b8964000/0x0/0x1bfc00000, data 0x2854a4b/0x2929000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:53.076920+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 117391360 unmapped: 42467328 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:54.077149+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118439936 unmapped: 41418752 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:55.077329+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118439936 unmapped: 41418752 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:56.077492+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118448128 unmapped: 41410560 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:57.077643+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118448128 unmapped: 41410560 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1245917 data_alloc: 285212672 data_used: 3887104
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:58.077788+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118448128 unmapped: 41410560 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 135 heartbeat osd_stat(store_statfs(0x1b8964000/0x0/0x1bfc00000, data 0x2854a4b/0x2929000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:08:59.077999+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118448128 unmapped: 41410560 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.877472878s of 11.030001640s, submitted: 61
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:00.078226+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118448128 unmapped: 41410560 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 135 heartbeat osd_stat(store_statfs(0x1b8963000/0x0/0x1bfc00000, data 0x2854abc/0x292b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53cd3800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:01.078411+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118464512 unmapped: 41394176 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 136 ms_handle_reset con 0x55ee53cd3800 session 0x55ee56bd3c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:02.078634+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118521856 unmapped: 41336832 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1259659 data_alloc: 285212672 data_used: 3899392
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 136 heartbeat osd_stat(store_statfs(0x1b895b000/0x0/0x1bfc00000, data 0x2857201/0x2932000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56235400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:03.078784+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 118530048 unmapped: 41328640 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:04.079021+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119635968 unmapped: 40222720 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 137 ms_handle_reset con 0x55ee56235400 session 0x55ee566c3a40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 138 ms_handle_reset con 0x55ee566b6800 session 0x55ee566c3860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:05.079193+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119660544 unmapped: 40198144 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:06.079359+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119660544 unmapped: 40198144 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b7800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:07.079481+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119709696 unmapped: 40148992 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 139 ms_handle_reset con 0x55ee566b7800 session 0x55ee566c2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1269632 data_alloc: 285212672 data_used: 3928064
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:08.079614+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119726080 unmapped: 40132608 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 139 heartbeat osd_stat(store_statfs(0x1b8951000/0x0/0x1bfc00000, data 0x285d7d8/0x293b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:09.079760+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119734272 unmapped: 40124416 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:10.079895+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119734272 unmapped: 40124416 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.156395912s of 10.829750061s, submitted: 176
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 140 heartbeat osd_stat(store_statfs(0x1b8955000/0x0/0x1bfc00000, data 0x285d767/0x2939000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:11.080022+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119750656 unmapped: 40108032 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:12.080245+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119750656 unmapped: 40108032 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1271346 data_alloc: 285212672 data_used: 3940352
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:13.080405+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119750656 unmapped: 40108032 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:14.080633+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119758848 unmapped: 40099840 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:15.080828+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119758848 unmapped: 40099840 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:16.081009+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119767040 unmapped: 40091648 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b894b000/0x0/0x1bfc00000, data 0x2861b73/0x2941000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:17.081144+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119767040 unmapped: 40091648 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1274348 data_alloc: 285212672 data_used: 3940352
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b894c000/0x0/0x1bfc00000, data 0x2861b73/0x2941000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:18.081295+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119767040 unmapped: 40091648 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:19.081438+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b894b000/0x0/0x1bfc00000, data 0x2861c0e/0x2942000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 119775232 unmapped: 40083456 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Got map version 49
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:20.081663+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120012800 unmapped: 39845888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b894b000/0x0/0x1bfc00000, data 0x2861c0e/0x2942000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:21.081824+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120012800 unmapped: 39845888 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:22.082041+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120020992 unmapped: 39837696 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1276116 data_alloc: 285212672 data_used: 3940352
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:23.082268+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120020992 unmapped: 39837696 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b894b000/0x0/0x1bfc00000, data 0x2861c0e/0x2942000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:24.082463+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120020992 unmapped: 39837696 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b894b000/0x0/0x1bfc00000, data 0x2861c0e/0x2942000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:25.082662+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120020992 unmapped: 39837696 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:26.082875+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120020992 unmapped: 39837696 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:27.083076+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120020992 unmapped: 39837696 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1276116 data_alloc: 285212672 data_used: 3940352
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:28.083309+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120020992 unmapped: 39837696 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.139915466s of 18.227054596s, submitted: 41
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b894b000/0x0/0x1bfc00000, data 0x2861c0e/0x2942000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:29.083474+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120029184 unmapped: 39829504 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Got map version 50
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:30.083612+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120061952 unmapped: 39796736 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:31.083806+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120061952 unmapped: 39796736 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:32.083946+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120061952 unmapped: 39796736 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1278580 data_alloc: 285212672 data_used: 3940352
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:33.084218+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120061952 unmapped: 39796736 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b894a000/0x0/0x1bfc00000, data 0x2861d73/0x2944000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:34.084398+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120061952 unmapped: 39796736 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:35.084618+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120061952 unmapped: 39796736 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b894a000/0x0/0x1bfc00000, data 0x2861d73/0x2944000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1006411252' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:36.084774+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120061952 unmapped: 39796736 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b894a000/0x0/0x1bfc00000, data 0x2861d73/0x2944000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:37.084897+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120061952 unmapped: 39796736 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1278580 data_alloc: 285212672 data_used: 3940352
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:38.085048+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120061952 unmapped: 39796736 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.979614258s of 10.007925987s, submitted: 7
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:39.085225+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120070144 unmapped: 39788544 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:40.085366+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120102912 unmapped: 39755776 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee55e303c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:41.085530+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120111104 unmapped: 39747584 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53cd3800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:42.085753+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120111104 unmapped: 39747584 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1282767 data_alloc: 285212672 data_used: 3944448
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 142 ms_handle_reset con 0x55ee53cd3800 session 0x55ee56bd2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 142 heartbeat osd_stat(store_statfs(0x1b8949000/0x0/0x1bfc00000, data 0x2861e15/0x2945000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:43.086011+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120160256 unmapped: 39698432 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:44.086345+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120160256 unmapped: 39698432 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56235400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 142 ms_handle_reset con 0x55ee56235400 session 0x55ee55e301e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 143 ms_handle_reset con 0x55ee566b6800 session 0x55ee55e31c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:45.086542+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120184832 unmapped: 39673856 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:46.086710+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 143 ms_handle_reset con 0x55ee57504800 session 0x55ee566c2d20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120193024 unmapped: 39665664 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 143 heartbeat osd_stat(store_statfs(0x1b8940000/0x0/0x1bfc00000, data 0x28664f2/0x294d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 143 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee55e30000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:47.087057+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120201216 unmapped: 39657472 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1293086 data_alloc: 285212672 data_used: 3964928
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:48.087214+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120233984 unmapped: 39624704 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 143 heartbeat osd_stat(store_statfs(0x1b8943000/0x0/0x1bfc00000, data 0x28664e4/0x294b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:49.087396+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 143 heartbeat osd_stat(store_statfs(0x1b8942000/0x0/0x1bfc00000, data 0x28664f4/0x294c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120233984 unmapped: 39624704 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:50.087538+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120233984 unmapped: 39624704 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.376147270s of 11.845266342s, submitted: 109
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 144 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 144 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:51.095865+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:52.096045+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1294810 data_alloc: 285212672 data_used: 3977216
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:53.096185+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 144 heartbeat osd_stat(store_statfs(0x1b893e000/0x0/0x1bfc00000, data 0x28687a6/0x294f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:54.096398+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:55.096524+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:56.096682+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:57.096847+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1295002 data_alloc: 285212672 data_used: 3977216
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:58.097018+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:09:59.097151+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 144 heartbeat osd_stat(store_statfs(0x1b893d000/0x0/0x1bfc00000, data 0x2868841/0x2950000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:00.097363+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:01.097516+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:02.097648+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 144 heartbeat osd_stat(store_statfs(0x1b8940000/0x0/0x1bfc00000, data 0x28687d5/0x294e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120242176 unmapped: 39616512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1294334 data_alloc: 285212672 data_used: 3977216
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:03.097836+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120250368 unmapped: 39608320 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:04.098037+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120250368 unmapped: 39608320 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:05.098208+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120250368 unmapped: 39608320 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 144 heartbeat osd_stat(store_statfs(0x1b8940000/0x0/0x1bfc00000, data 0x28687d5/0x294e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:06.098384+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120250368 unmapped: 39608320 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
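This handle_osd_map line says the monitor pushed one incremental map: the OSD holds epoch 144, receives [145,145], and advances; the heartbeats in this window track the catch-up as "osd.1 144" becomes "osd.1 151" by the end of the section. A throwaway parse of the message (format assumed from the log text) that reports how far behind the OSD is:

    # How far behind is the OSD on the map stream, per one log line?
    import re

    msg = "osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]"
    first, last, have = map(int, re.search(
        r"epochs \[(\d+),(\d+)\], i have (\d+)", msg).groups())
    print(f"behind by {last - have} epoch(s); applying {first}..{last}")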
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.128582001s of 16.202493668s, submitted: 28
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
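Each new map triggers a burst of scrub-queue dequeue attempts; "failed. State was: not registered w/ OSD" is the benign case where the PG had nothing queued. The same small set of PGs (6.0, 6.4, 6.5, 6.6, 6.8, 6.9, 6.b, 6.d, 6.10, 6.11, 6.19) appears twice per epoch, presumably once per scrub target or call path, though that pairing is an inference from the log, not from the source. A sketch that tallies the noise to make the pattern obvious, assuming the log is fed on stdin:

    # Count "removing pg[...] failed" messages per PG (Python 3.8+).
    import re
    import sys
    from collections import Counter

    counts = Counter(m.group(1)
                     for line in sys.stdin
                     if (m := re.search(r"removing pg\[(\S+)\] failed", line)))
    for pg, n in counts.most_common():
        print(pg, n)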
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:07.098547+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120258560 unmapped: 39600128 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1298344 data_alloc: 285212672 data_used: 3989504
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 145 heartbeat osd_stat(store_statfs(0x1b893b000/0x0/0x1bfc00000, data 0x286ab50/0x2952000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:08.098716+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120258560 unmapped: 39600128 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:09.098890+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120258560 unmapped: 39600128 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:10.099077+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120258560 unmapped: 39600128 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 145 heartbeat osd_stat(store_statfs(0x1b893b000/0x0/0x1bfc00000, data 0x286ab50/0x2952000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:11.099266+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120258560 unmapped: 39600128 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:12.099450+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120258560 unmapped: 39600128 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1298344 data_alloc: 285212672 data_used: 3989504
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:13.099598+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53cd3800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 145 ms_handle_reset con 0x55ee53cd3800 session 0x55ee55b3ab40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120274944 unmapped: 39583744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:14.099747+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56235400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120274944 unmapped: 39583744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 145 ms_handle_reset con 0x55ee56235400 session 0x55ee55e310e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 145 ms_handle_reset con 0x55ee566b6800 session 0x55ee55e305a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 145 heartbeat osd_stat(store_statfs(0x1b893b000/0x0/0x1bfc00000, data 0x286ab60/0x2953000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:15.099880+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120299520 unmapped: 39559168 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:16.100028+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120307712 unmapped: 39550976 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:17.100140+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1303503 data_alloc: 285212672 data_used: 4001792
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:18.100297+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:19.100458+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.427089691s of 12.592432976s, submitted: 72
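The _kv_sync_thread utilization line reports idle time within the sampled window plus the number of transactions submitted: 12.427 s idle of 12.592 s means the RocksDB sync thread was busy only about 1.3% of the time for 72 commits, i.e. an almost idle write path. The busy fraction, straight from the logged numbers:

    # Busy fraction and commit rate for the kv sync thread.
    idle, window, submitted = 12.427089691, 12.592432976, 72
    busy = 1 - idle / window
    print(f"kv_sync busy {busy:.2%} of {window:.1f}s for {submitted} commits "
          f"({submitted / window:.1f} commits/s)")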
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 146 heartbeat osd_stat(store_statfs(0x1b8937000/0x0/0x1bfc00000, data 0x286cd48/0x2956000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:20.100610+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:21.100764+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:22.100898+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1304461 data_alloc: 285212672 data_used: 4001792
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:23.101061+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 146 heartbeat osd_stat(store_statfs(0x1b8938000/0x0/0x1bfc00000, data 0x286cd48/0x2956000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57063c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 146 ms_handle_reset con 0x55ee57063c00 session 0x55ee55e31860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:24.101257+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 146 heartbeat osd_stat(store_statfs(0x1b8936000/0x0/0x1bfc00000, data 0x286cdba/0x2958000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:25.101451+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120291328 unmapped: 39567360 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:26.101619+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120291328 unmapped: 39567360 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 146 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee55e30960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:27.101824+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120299520 unmapped: 39559168 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1306302 data_alloc: 285212672 data_used: 4001792
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:28.102029+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120299520 unmapped: 39559168 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:29.102181+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120299520 unmapped: 39559168 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.297556877s of 10.430279732s, submitted: 33
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:30.102334+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 146 heartbeat osd_stat(store_statfs(0x1b8938000/0x0/0x1bfc00000, data 0x286cd48/0x2956000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120299520 unmapped: 39559168 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:31.102522+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120299520 unmapped: 39559168 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:32.102660+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120299520 unmapped: 39559168 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1306494 data_alloc: 285212672 data_used: 4001792
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:33.102811+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120299520 unmapped: 39559168 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:34.103022+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120299520 unmapped: 39559168 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 146 heartbeat osd_stat(store_statfs(0x1b8937000/0x0/0x1bfc00000, data 0x286cecd/0x2957000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:35.103169+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120307712 unmapped: 39550976 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:36.103330+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120315904 unmapped: 39542784 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:37.103535+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120315904 unmapped: 39542784 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1311598 data_alloc: 285212672 data_used: 4014080
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:38.103680+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 148 heartbeat osd_stat(store_statfs(0x1b892e000/0x0/0x1bfc00000, data 0x2871612/0x295e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:39.103846+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:40.104045+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:41.104243+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.822051048s of 11.987138748s, submitted: 81
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:42.104418+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1313720 data_alloc: 285212672 data_used: 4014080
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:43.104589+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:44.104838+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 148 heartbeat osd_stat(store_statfs(0x1b8930000/0x0/0x1bfc00000, data 0x28716dc/0x295e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:45.105021+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 148 heartbeat osd_stat(store_statfs(0x1b8930000/0x0/0x1bfc00000, data 0x28716dc/0x295e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 39534592 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:46.105205+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120332288 unmapped: 39526400 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:47.105405+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120365056 unmapped: 39493632 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1321596 data_alloc: 285212672 data_used: 4026368
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:48.105542+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 151 heartbeat osd_stat(store_statfs(0x1b8926000/0x0/0x1bfc00000, data 0x2875d49/0x2966000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
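The scrub-queue warnings above arrive in bursts, with most PGs reported twice per map epoch. When triaging a capture like this one, a small post-processing sketch (plain log analysis, nothing the OSD itself does) can fold a burst into per-PG counts:

    import re
    from collections import Counter

    PAT = re.compile(r"remove_from_osd_queue removing pg\[([0-9a-f.]+)\] failed")

    def summarize(lines):
        """Count repeated scrub-queue removal failures per PG id."""
        counts = Counter()
        for line in lines:
            m = PAT.search(line)
            if m:
                counts[m.group(1)] += 1
        return counts

    burst = [
        "osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD",
        "osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD",
        "osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD",
    ]
    print(summarize(burst))  # Counter({'6.11': 2, '6.10': 1})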
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120373248 unmapped: 39485440 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53cd3800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:49.105837+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 151 heartbeat osd_stat(store_statfs(0x1b8921000/0x0/0x1bfc00000, data 0x287822a/0x296c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
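The heartbeat lines embed store_statfs as slash-separated hex. Assuming the first triple is available/internally-reserved/total bytes (an assumption consistent with the values here, where the third field 0x1bfc00000, about 7.0 GiB, is the largest and never changes), a quick decoder:

    import re

    STATFS_RE = re.compile(
        r"store_statfs\(0x([0-9a-f]+)/0x([0-9a-f]+)/0x([0-9a-f]+), "
        r"data 0x([0-9a-f]+)/0x([0-9a-f]+)"
    )

    line = ("osd.1 151 heartbeat osd_stat(store_statfs(0x1b8921000/0x0/0x1bfc00000, "
            "data 0x287822a/0x296c000, compress 0x0/0x0/0x0, omap 0x649, "
            "meta 0x496f9b7), peers [0,2,3,4,5] op hist [])")
    avail, reserved, total, stored, alloc = (
        int(x, 16) for x in STATFS_RE.search(line).groups())
    print(f"{avail/2**30:.2f} GiB free of {total/2**30:.2f} GiB "
          f"({100*avail/total:.1f}%); data stored {stored/2**20:.1f} MiB "
          f"in {alloc/2**20:.1f} MiB allocated")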
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120381440 unmapped: 39477248 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:50.106050+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120381440 unmapped: 39477248 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56235400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:51.106188+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120397824 unmapped: 39460864 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:52.106355+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120397824 unmapped: 39460864 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1325700 data_alloc: 285212672 data_used: 4030464
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 151 heartbeat osd_stat(store_statfs(0x1b8923000/0x0/0x1bfc00000, data 0x2878259/0x296b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:53.106553+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120397824 unmapped: 39460864 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.796140671s of 12.010698318s, submitted: 87
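The _kv_sync_thread line above reports idle 11.796 s of a 12.011 s window with 87 submitted transactions, so the implied duty cycle and commit rate fall out directly (numbers copied from the line):

    idle, window, submitted = 11.796140671, 12.010698318, 87
    busy = window - idle
    print(f"busy {busy:.3f}s = {100*busy/window:.1f}% of the window; "
          f"{submitted/window:.1f} txns/s")  # busy 0.215s = 1.8%; 7.2 txns/s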
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:54.106787+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120397824 unmapped: 39460864 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:55.106948+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120397824 unmapped: 39460864 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:56.107351+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120414208 unmapped: 39444480 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:57.107482+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120438784 unmapped: 39419904 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1332022 data_alloc: 285212672 data_used: 4042752
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:58.107650+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b891b000/0x0/0x1bfc00000, data 0x287c84b/0x2972000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120438784 unmapped: 39419904 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b891b000/0x0/0x1bfc00000, data 0x287c84b/0x2972000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:10:59.107827+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120438784 unmapped: 39419904 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:00.108133+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 153 ms_handle_reset con 0x55ee566b6800 session 0x55ee566c32c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57062400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120471552 unmapped: 39387136 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 153 ms_handle_reset con 0x55ee57062400 session 0x55ee53d450e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:01.108302+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b891c000/0x0/0x1bfc00000, data 0x287c84b/0x2972000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120471552 unmapped: 39387136 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:02.108516+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120471552 unmapped: 39387136 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1331302 data_alloc: 285212672 data_used: 4046848
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b891c000/0x0/0x1bfc00000, data 0x287c84b/0x2972000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:03.108704+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b891c000/0x0/0x1bfc00000, data 0x287c84b/0x2972000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120471552 unmapped: 39387136 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:04.108932+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120471552 unmapped: 39387136 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:05.109050+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120471552 unmapped: 39387136 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.573661804s of 11.688492775s, submitted: 56
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56234c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:06.109187+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 154 ms_handle_reset con 0x55ee56234c00 session 0x55ee5223c1e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120668160 unmapped: 39190528 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:07.109366+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120668160 unmapped: 39190528 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1407018 data_alloc: 285212672 data_used: 4059136
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 154 heartbeat osd_stat(store_statfs(0x1b80d8000/0x0/0x1bfc00000, data 0x30bda63/0x31b5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:08.109537+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120684544 unmapped: 39174144 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:09.109747+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120684544 unmapped: 39174144 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:10.110000+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120619008 unmapped: 39239680 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57062800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 155 ms_handle_reset con 0x55ee57062800 session 0x55ee53c3e3c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:11.110155+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120643584 unmapped: 39215104 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 155 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee53bb2b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b80d5000/0x0/0x1bfc00000, data 0x30bfd75/0x31b9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:12.110331+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120643584 unmapped: 39215104 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1411540 data_alloc: 285212672 data_used: 4063232
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:13.110484+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120651776 unmapped: 39206912 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:14.110700+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120659968 unmapped: 39198720 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b80d3000/0x0/0x1bfc00000, data 0x30bfd95/0x31bb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56234c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:15.110886+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 ms_handle_reset con 0x55ee566b6800 session 0x55ee56bd8b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 ms_handle_reset con 0x55ee56234c00 session 0x55ee53d13a40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120676352 unmapped: 39182336 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.140418053s of 10.349626541s, submitted: 71
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:16.111285+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57062400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 ms_handle_reset con 0x55ee57062400 session 0x55ee5632ba40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:17.111456+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1420279 data_alloc: 285212672 data_used: 4075520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:18.111692+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b80cc000/0x0/0x1bfc00000, data 0x30c21d3/0x31c0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:19.111953+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b80cc000/0x0/0x1bfc00000, data 0x30c21d3/0x31c0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:20.112211+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b6f2f000/0x0/0x1bfc00000, data 0x30c21c3/0x31bf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:21.112383+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:22.112635+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1419026 data_alloc: 285212672 data_used: 4075520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b6f2e000/0x0/0x1bfc00000, data 0x30c225e/0x31c0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:23.112901+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b6f2e000/0x0/0x1bfc00000, data 0x30c225e/0x31c0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:24.113216+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:25.113410+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:26.113629+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:27.113845+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120709120 unmapped: 39149568 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1419026 data_alloc: 285212672 data_used: 4075520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.788381577s of 11.843729973s, submitted: 14
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b6f2d000/0x0/0x1bfc00000, data 0x30c22bb/0x31c1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:28.114044+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b0c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 ms_handle_reset con 0x55ee569b0c00 session 0x55ee53bb21e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120717312 unmapped: 39141376 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:29.114189+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120717312 unmapped: 39141376 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:30.114367+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120717312 unmapped: 39141376 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:31.114597+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120717312 unmapped: 39141376 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:32.114765+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120725504 unmapped: 39133184 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1422284 data_alloc: 285212672 data_used: 4075520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:33.115013+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b6f2d000/0x0/0x1bfc00000, data 0x30c22bb/0x31c1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120725504 unmapped: 39133184 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:34.115240+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b6f2d000/0x0/0x1bfc00000, data 0x30c22bb/0x31c1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120725504 unmapped: 39133184 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:35.115425+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120725504 unmapped: 39133184 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:36.115570+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.1 total, 600.0 interval
                                                          Cumulative writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s
                                                          Cumulative WAL: 12K writes, 3398 syncs, 3.54 writes per sync, written: 0.04 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 6182 writes, 19K keys, 6182 commit groups, 1.0 writes per commit group, ingest: 17.37 MB, 0.03 MB/s
                                                          Interval WAL: 6182 writes, 2614 syncs, 2.36 writes per sync, written: 0.02 GB, 0.03 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
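The writes-per-sync figures in the dump above are just writes divided by syncs; a quick consistency check against the printed numbers (the cumulative "12K" is rounded as printed, hence the approximate result):

    interval_writes, interval_syncs = 6182, 2614
    cumulative_writes, cumulative_syncs = 12_000, 3398  # "12K" as printed
    print(f"interval:   {interval_writes/interval_syncs:.2f} writes/sync")      # 2.36, matches
    print(f"cumulative: ~{cumulative_writes/cumulative_syncs:.2f} writes/sync") # ~3.53 vs printed 3.54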
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120750080 unmapped: 39108608 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:37.115727+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1425880 data_alloc: 285212672 data_used: 4075520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120750080 unmapped: 39108608 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.038515091s of 10.109594345s, submitted: 16
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b6f2a000/0x0/0x1bfc00000, data 0x30c23db/0x31c4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:38.116028+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b0c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 ms_handle_reset con 0x55ee569b0c00 session 0x55ee56bd65a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120733696 unmapped: 39124992 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:39.116218+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 120733696 unmapped: 39124992 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:40.116360+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 157 heartbeat osd_stat(store_statfs(0x1b6f25000/0x0/0x1bfc00000, data 0x30c46ed/0x31c8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 158 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 158 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121782272 unmapped: 38076416 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:41.116508+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121782272 unmapped: 38076416 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:42.116659+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 159 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee562aa3c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1441708 data_alloc: 285212672 data_used: 4087808
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121790464 unmapped: 38068224 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:43.116796+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 160 heartbeat osd_stat(store_statfs(0x1b6f1b000/0x0/0x1bfc00000, data 0x30c8dd5/0x31d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121790464 unmapped: 38068224 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56234c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 160 ms_handle_reset con 0x55ee56234c00 session 0x55ee53bb3860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:44.116982+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121790464 unmapped: 38068224 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 161 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:45.117128+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121815040 unmapped: 38043648 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:46.117324+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 161 ms_handle_reset con 0x55ee566b6800 session 0x55ee539bf0e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121815040 unmapped: 38043648 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:47.117465+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1449302 data_alloc: 285212672 data_used: 4108288
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121815040 unmapped: 38043648 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.868262291s of 10.075300217s, submitted: 76
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:48.117605+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57062400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 162 ms_handle_reset con 0x55ee57062400 session 0x55ee562aab40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121831424 unmapped: 38027264 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:49.117793+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 163 heartbeat osd_stat(store_statfs(0x1b6f12000/0x0/0x1bfc00000, data 0x30cf83f/0x31db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121839616 unmapped: 38019072 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:50.118067+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121839616 unmapped: 38019072 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:51.118215+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121864192 unmapped: 37994496 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:52.118364+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1453898 data_alloc: 285212672 data_used: 4124672
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121864192 unmapped: 37994496 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:53.118560+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121888768 unmapped: 37969920 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:54.118761+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121888768 unmapped: 37969920 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:55.118911+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 164 heartbeat osd_stat(store_statfs(0x1b6f0b000/0x0/0x1bfc00000, data 0x30d3e3e/0x31e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121896960 unmapped: 37961728 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:56.119080+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121896960 unmapped: 37961728 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:57.119264+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 165 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 165 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57062400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 165 ms_handle_reset con 0x55ee57062400 session 0x55ee560c8f00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1466476 data_alloc: 285212672 data_used: 4145152
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121905152 unmapped: 37953536 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:58.119423+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b6f04000/0x0/0x1bfc00000, data 0x30d617a/0x31ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.485180855s of 10.647356987s, submitted: 78
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56234c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121913344 unmapped: 37945344 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:11:59.119584+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 166 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee53d443c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 166 ms_handle_reset con 0x55ee566b6800 session 0x55ee5577f860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121929728 unmapped: 37928960 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:00.119720+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b0c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 167 ms_handle_reset con 0x55ee569b0c00 session 0x55ee55743860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 167 ms_handle_reset con 0x55ee56234c00 session 0x55ee56bd9680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 167 handle_osd_map epochs [166,167], i have 167, src has [1,167]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 121954304 unmapped: 37904384 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:01.119911+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 167 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee55742000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 168 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 168 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 168 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b0c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 168 ms_handle_reset con 0x55ee569b0c00 session 0x55ee560c9680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122003456 unmapped: 37855232 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:02.120060+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 169 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 169 ms_handle_reset con 0x55ee566b6800 session 0x55ee56bd7c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 169 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1488644 data_alloc: 285212672 data_used: 4169728
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122011648 unmapped: 37847040 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:03.120211+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b6eef000/0x0/0x1bfc00000, data 0x30df205/0x31fc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122011648 unmapped: 37847040 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:04.120430+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122011648 unmapped: 37847040 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:05.120582+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57062400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 169 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 170 ms_handle_reset con 0x55ee57062400 session 0x55ee55d9c5a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122011648 unmapped: 37847040 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:06.120718+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123076608 unmapped: 36782080 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:07.120913+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 170 heartbeat osd_stat(store_statfs(0x1b6eec000/0x0/0x1bfc00000, data 0x30e14a8/0x3202000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1503075 data_alloc: 285212672 data_used: 4194304
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123101184 unmapped: 36757504 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:08.121117+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57507c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 171 ms_handle_reset con 0x55ee57507c00 session 0x55ee54437a40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57507c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 171 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee55e3c5a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123142144 unmapped: 36716544 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:09.121304+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.234313011s of 10.763747215s, submitted: 164
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 171 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 172 ms_handle_reset con 0x55ee566b6800 session 0x55ee5223f860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b0c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 172 ms_handle_reset con 0x55ee569b0c00 session 0x55ee53a40b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123158528 unmapped: 36700160 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:10.121495+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 172 heartbeat osd_stat(store_statfs(0x1b6ee0000/0x0/0x1bfc00000, data 0x30e5bf2/0x320d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 172 ms_handle_reset con 0x55ee57507c00 session 0x55ee53c3f0e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123142144 unmapped: 36716544 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:11.121666+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 172 heartbeat osd_stat(store_statfs(0x1b6ee0000/0x0/0x1bfc00000, data 0x30e5bf2/0x320d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57062400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 172 ms_handle_reset con 0x55ee57062400 session 0x55ee5223da40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 173 handle_osd_map epochs [173,173], i have 173, src has [1,173]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123174912 unmapped: 36683776 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 173 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee53d44960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:12.121833+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 173 ms_handle_reset con 0x55ee566b6800 session 0x55ee557423c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b0c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1506497 data_alloc: 285212672 data_used: 4202496
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122953728 unmapped: 36904960 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 173 heartbeat osd_stat(store_statfs(0x1b6ee1000/0x0/0x1bfc00000, data 0x30e7dd0/0x320d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:13.122016+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122953728 unmapped: 36904960 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:14.122195+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 174 handle_osd_map epochs [174,174], i have 174, src has [1,174]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 174 ms_handle_reset con 0x55ee569b0c00 session 0x55ee5223c960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122994688 unmapped: 36864000 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:15.122358+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 174 heartbeat osd_stat(store_statfs(0x1b6edd000/0x0/0x1bfc00000, data 0x30ea155/0x3210000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 174 heartbeat osd_stat(store_statfs(0x1b6edd000/0x0/0x1bfc00000, data 0x30ea155/0x3210000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122994688 unmapped: 36864000 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:16.122517+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122994688 unmapped: 36864000 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 175 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:17.122633+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1511396 data_alloc: 285212672 data_used: 4222976
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122880000 unmapped: 36978688 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:18.122797+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122880000 unmapped: 36978688 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 175 ms_handle_reset con 0x55ee56c70800 session 0x55ee55ed81e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:19.122946+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122880000 unmapped: 36978688 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:20.123111+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b1800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.419086456s of 11.192325592s, submitted: 200
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122880000 unmapped: 36978688 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 176 handle_osd_map epochs [176,176], i have 176, src has [1,176]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 176 handle_osd_map epochs [176,176], i have 176, src has [1,176]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 176 ms_handle_reset con 0x55ee52fad800 session 0x55ee53d132c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 176 handle_osd_map epochs [176,176], i have 176, src has [1,176]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 176 heartbeat osd_stat(store_statfs(0x1b6add000/0x0/0x1bfc00000, data 0x30ec652/0x3211000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:21.123294+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 177 handle_osd_map epochs [177,177], i have 177, src has [1,177]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 177 handle_osd_map epochs [177,177], i have 177, src has [1,177]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122920960 unmapped: 36937728 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:22.123423+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 177 ms_handle_reset con 0x55ee52fad800 session 0x55ee53d13a40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 178 handle_osd_map epochs [177,178], i have 178, src has [1,178]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1530092 data_alloc: 285212672 data_used: 4247552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 178 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee53a410e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122978304 unmapped: 36880384 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 178 ms_handle_reset con 0x55ee569b1800 session 0x55ee55e3d860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:23.123565+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 122978304 unmapped: 36880384 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:24.123754+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 178 handle_osd_map epochs [178,179], i have 178, src has [1,179]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123052032 unmapped: 36806656 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 179 handle_osd_map epochs [179,179], i have 179, src has [1,179]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 179 handle_osd_map epochs [179,179], i have 179, src has [1,179]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:25.123889+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 179 ms_handle_reset con 0x55ee566b6800 session 0x55ee5632ab40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123109376 unmapped: 36749312 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:26.124059+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b6ac7000/0x0/0x1bfc00000, data 0x30f75bb/0x3225000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b6ac7000/0x0/0x1bfc00000, data 0x30f75bb/0x3225000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123109376 unmapped: 36749312 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:27.124278+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b0c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1529873 data_alloc: 285212672 data_used: 4259840
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123109376 unmapped: 36749312 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:28.124453+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 181 handle_osd_map epochs [181,181], i have 181, src has [1,181]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 181 ms_handle_reset con 0x55ee569b0c00 session 0x55ee56bd83c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 181 ms_handle_reset con 0x55ee52fad800 session 0x55ee5223c5a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123158528 unmapped: 36700160 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:29.124648+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123158528 unmapped: 36700160 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:30.124883+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.870095253s of 10.212316513s, submitted: 133
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 182 heartbeat osd_stat(store_statfs(0x1b6ac2000/0x0/0x1bfc00000, data 0x30fbade/0x322b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123158528 unmapped: 36700160 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:31.125086+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123158528 unmapped: 36700160 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:32.125263+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1535541 data_alloc: 285212672 data_used: 4272128
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123158528 unmapped: 36700160 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:33.125444+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123158528 unmapped: 36700160 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:34.125690+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123158528 unmapped: 36700160 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:35.125903+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:36.126046+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b6ac2000/0x0/0x1bfc00000, data 0x30fbade/0x322b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:37.126220+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1537663 data_alloc: 285212672 data_used: 4272128
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:38.126380+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:39.126522+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:40.126716+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b6abf000/0x0/0x1bfc00000, data 0x30fdcd6/0x322f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:41.126895+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:42.127096+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1537823 data_alloc: 285212672 data_used: 4276224
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:43.127221+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:44.127404+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b6abf000/0x0/0x1bfc00000, data 0x30fdcd6/0x322f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.580036163s of 13.663175583s, submitted: 55
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:45.127560+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 183 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee565d41e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:46.127766+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 183 ms_handle_reset con 0x55ee566b6800 session 0x55ee55e541e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b6abc000/0x0/0x1bfc00000, data 0x30fde1c/0x3232000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:47.127905+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1547071 data_alloc: 285212672 data_used: 4276224
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:48.128053+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:49.128349+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123166720 unmapped: 36691968 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:50.128440+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b6aba000/0x0/0x1bfc00000, data 0x30fdfaa/0x3234000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123174912 unmapped: 36683776 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:51.128582+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123174912 unmapped: 36683776 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:52.128741+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1556847 data_alloc: 285212672 data_used: 4288512
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123174912 unmapped: 36683776 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:53.128917+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 184 handle_osd_map epochs [184,184], i have 184, src has [1,184]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 184 handle_osd_map epochs [184,184], i have 184, src has [1,184]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 184 handle_osd_map epochs [184,184], i have 184, src has [1,184]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123174912 unmapped: 36683776 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:54.129121+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b1800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.790949821s of 10.141227722s, submitted: 71
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 184 ms_handle_reset con 0x55ee569b1800 session 0x55ee566c2960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 184 heartbeat osd_stat(store_statfs(0x1b6ab8000/0x0/0x1bfc00000, data 0x31003bc/0x3236000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123215872 unmapped: 36642816 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:55.129254+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 184 ms_handle_reset con 0x55ee56c70800 session 0x55ee566c1e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 184 ms_handle_reset con 0x55ee56c70800 session 0x55ee562ab2c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:56.129384+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123273216 unmapped: 36585472 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 184 ms_handle_reset con 0x55ee52fad800 session 0x55ee53c3e5a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:57.129520+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123281408 unmapped: 36577280 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1555299 data_alloc: 285212672 data_used: 4288512
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:58.129689+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123281408 unmapped: 36577280 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:12:59.129893+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123289600 unmapped: 36569088 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:00.130016+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123297792 unmapped: 36560896 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b7a98000/0x0/0x1bfc00000, data 0x3100480/0x3236000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:01.130115+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123314176 unmapped: 36544512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:02.130256+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123314176 unmapped: 36544512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1558047 data_alloc: 285212672 data_used: 4304896
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:03.130384+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123314176 unmapped: 36544512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:04.130592+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123314176 unmapped: 36544512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b7a95000/0x0/0x1bfc00000, data 0x310260c/0x3238000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:05.130728+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123314176 unmapped: 36544512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:06.130898+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123314176 unmapped: 36544512 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.650093079s of 12.236418724s, submitted: 96
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:07.131029+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123346944 unmapped: 36511744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1560679 data_alloc: 285212672 data_used: 4317184
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:08.131192+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123346944 unmapped: 36511744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:09.131349+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123346944 unmapped: 36511744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b7a92000/0x0/0x1bfc00000, data 0x31049b6/0x323b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:10.131536+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123346944 unmapped: 36511744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:11.131745+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123346944 unmapped: 36511744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:12.131927+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123346944 unmapped: 36511744 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:13.132041+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1564887 data_alloc: 285212672 data_used: 4317184
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123355136 unmapped: 36503552 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b7a91000/0x0/0x1bfc00000, data 0x3104a29/0x323d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:14.132236+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123355136 unmapped: 36503552 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:15.132427+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123355136 unmapped: 36503552 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b7a91000/0x0/0x1bfc00000, data 0x3104a29/0x323d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:16.132564+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123355136 unmapped: 36503552 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 187 handle_osd_map epochs [187,187], i have 187, src has [1,187]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 187 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee56bd8b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:17.132796+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123379712 unmapped: 36478976 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b7a8e000/0x0/0x1bfc00000, data 0x3106bae/0x323f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.851346016s of 11.088992119s, submitted: 80
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:18.133021+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1570299 data_alloc: 285212672 data_used: 4329472
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123387904 unmapped: 36470784 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:19.133268+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123387904 unmapped: 36470784 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b7a8c000/0x0/0x1bfc00000, data 0x3106cab/0x3241000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:20.133424+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123404288 unmapped: 36454400 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:21.133560+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123404288 unmapped: 36454400 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:22.133731+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123404288 unmapped: 36454400 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b7a8e000/0x0/0x1bfc00000, data 0x3106c49/0x3240000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:23.133860+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1568370 data_alloc: 285212672 data_used: 4329472
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123404288 unmapped: 36454400 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 187 ms_handle_reset con 0x55ee566b6800 session 0x55ee53d4e5a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:24.134044+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123412480 unmapped: 36446208 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b1800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:25.134230+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123428864 unmapped: 36429824 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 188 ms_handle_reset con 0x55ee569b1800 session 0x55ee5223d0e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:26.134426+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123453440 unmapped: 36405248 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:27.134589+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 123453440 unmapped: 36405248 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 189 ms_handle_reset con 0x55ee52fad800 session 0x55ee55e54b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 189 heartbeat osd_stat(store_statfs(0x1b7a86000/0x0/0x1bfc00000, data 0x310902f/0x3247000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:28.134727+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1581984 data_alloc: 285212672 data_used: 4341760
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124518400 unmapped: 35340288 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.198145866s of 10.350853920s, submitted: 49
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 189 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee55e545a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:29.134852+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124518400 unmapped: 35340288 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 190 ms_handle_reset con 0x55ee566b6800 session 0x55ee53a41680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 190 ms_handle_reset con 0x55ee56c70800 session 0x55ee56bd63c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:30.135015+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124559360 unmapped: 35299328 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 191 ms_handle_reset con 0x55ee56668000 session 0x55ee5223de00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:31.135157+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 191 ms_handle_reset con 0x55ee56668000 session 0x55ee55e55e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124559360 unmapped: 35299328 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 191 handle_osd_map epochs [190,191], i have 191, src has [1,191]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 191 ms_handle_reset con 0x55ee52fad800 session 0x55ee566bc1e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:32.135320+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124616704 unmapped: 35241984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:33.135449+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 191 handle_osd_map epochs [191,191], i have 191, src has [1,191]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1588682 data_alloc: 285212672 data_used: 4341760
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124616704 unmapped: 35241984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 191 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee5632a780
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 191 heartbeat osd_stat(store_statfs(0x1b7a7c000/0x0/0x1bfc00000, data 0x310f9d4/0x3251000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:34.135638+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124657664 unmapped: 35201024 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 192 ms_handle_reset con 0x55ee566b6800 session 0x55ee55b3a960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:35.135849+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 192 heartbeat osd_stat(store_statfs(0x1b7a76000/0x0/0x1bfc00000, data 0x3111e7e/0x3257000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124747776 unmapped: 35110912 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:36.136061+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 handle_osd_map epochs [193,194], i have 194, src has [1,194]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 handle_osd_map epochs [193,194], i have 194, src has [1,194]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124796928 unmapped: 35061760 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 handle_osd_map epochs [193,194], i have 194, src has [1,194]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 ms_handle_reset con 0x55ee56c70c00 session 0x55ee55d9c000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 ms_handle_reset con 0x55ee56c70800 session 0x55ee539be1e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 ms_handle_reset con 0x55ee52fad800 session 0x55ee560a14a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:37.136261+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124821504 unmapped: 35037184 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee53c3ed20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 ms_handle_reset con 0x55ee56668000 session 0x55ee56bd61e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 ms_handle_reset con 0x55ee566b6800 session 0x55ee539be960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:38.136448+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1602667 data_alloc: 285212672 data_used: 4358144
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124846080 unmapped: 35012608 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b7a70000/0x0/0x1bfc00000, data 0x311641f/0x325e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:39.136633+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.201289177s of 10.797846794s, submitted: 182
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124846080 unmapped: 35012608 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:40.136823+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124846080 unmapped: 35012608 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:41.136977+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124846080 unmapped: 35012608 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
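[annotation] The burst of scrub-queue messages above is emitted while the OSD tears down and re-registers placement groups in response to the new osdmap (epoch 195 arrived just before the batch); removing a PG from the scrub queue when it was never registered there is logged as a "failed" removal but is benign. To confirm no PG is stuck repeating this, a quick tally over the journal text works; the helper below is a hypothetical log-analysis sketch, not anything Ceph ships:

    import re
    from collections import Counter

    REMOVE_RE = re.compile(
        r"scrub-queue::remove_from_osd_queue removing pg\[([^\]]+)\] failed")

    def tally(lines):
        """Count benign 'not registered' scrub-queue removals per PG."""
        c = Counter()
        for line in lines:
            m = REMOVE_RE.search(line)
            if m:
                c[m.group(1)] += 1
        return c

    sample = [
        "osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD",
        "osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD",
    ]
    print(tally(sample))  # Counter({'6.11': 2})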
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:42.137148+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 195 heartbeat osd_stat(store_statfs(0x1b7a6b000/0x0/0x1bfc00000, data 0x31186e1/0x3262000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 195 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 195 handle_osd_map epochs [196,196], i have 196, src has [1,196]
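[annotation] The handle_osd_map lines decode as: the sender delivered maps for the epoch range in brackets, "i have" is the OSD's current epoch, and "src has [1,N]" is the sender's full history. The repeated "i have 196" entries are duplicate deliveries of the same epoch from different peers; the OSD simply skips ranges it already has. A toy model of that decision (names are mine, not Ceph's):

    def need_epochs(have, msg_first, msg_last):
        """Which epochs from an incoming map message the OSD still needs."""
        first = max(have + 1, msg_first)
        return list(range(first, msg_last + 1))

    print(need_epochs(195, 196, 196))  # [196] -> apply it, then 'i have 196'
    print(need_epochs(196, 196, 196))  # []    -> duplicate delivery, ignored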
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124854272 unmapped: 35004416 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 196 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:43.137323+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 196 ms_handle_reset con 0x55ee566b6800 session 0x55ee55743c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1608202 data_alloc: 285212672 data_used: 4378624
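[annotation] The rocksdb/MempoolThread pair above shows the BlueStore cache autotuner at work: prioritycache steers the process toward its target (5709084876 bytes, ~5.3 GiB, typically derived from osd_memory_target) and _resize_shards redistributes the resulting cache budget (cache_size 4047415775, ~3.8 GiB) across the RocksDB block cache (kv), onode, metadata and data shards; the "High Pri Pool Ratio" lines are the block cache being re-split in the same pass. The split in this snapshot, worked out from the logged allocations:

    # Allocation split logged by _resize_shards (bytes).
    cache_size = 4047415775
    alloc = {"kv": 1744830464, "kv_onode": 285212672,
             "meta": 1677721600, "data": 285212672}
    for name, b in alloc.items():
        print(f"{name:9s} {b / 2**30:5.2f} GiB  {100 * b / cache_size:4.1f}%")
    # kv 1.62 GiB 43.1% | kv_onode 0.27 GiB 7.0% | meta 1.56 GiB 41.5% | data 0.27 GiB 7.0%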
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124878848 unmapped: 34979840 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:44.137526+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 handle_osd_map epochs [197,197], i have 197, src has [1,197]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124878848 unmapped: 34979840 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 ms_handle_reset con 0x55ee52fad800 session 0x55ee557421e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee53c3ef00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:45.137741+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 heartbeat osd_stat(store_statfs(0x1b7a64000/0x0/0x1bfc00000, data 0x311cd83/0x3269000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [0,0,1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124878848 unmapped: 34979840 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 ms_handle_reset con 0x55ee56668000 session 0x55ee55e3da40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:46.137952+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124878848 unmapped: 34979840 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 ms_handle_reset con 0x55ee56c70800 session 0x55ee55f25680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:47.138158+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 ms_handle_reset con 0x55ee56c70800 session 0x55ee55f24d20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124837888 unmapped: 35020800 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 heartbeat osd_stat(store_statfs(0x1b7a67000/0x0/0x1bfc00000, data 0x311cd12/0x3267000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:48.138320+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1616122 data_alloc: 285212672 data_used: 4395008
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124837888 unmapped: 35020800 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 heartbeat osd_stat(store_statfs(0x1b7a67000/0x0/0x1bfc00000, data 0x311cd12/0x3267000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:49.138496+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124837888 unmapped: 35020800 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 heartbeat osd_stat(store_statfs(0x1b7a62000/0x0/0x1bfc00000, data 0x311ceac/0x326b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:50.138642+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124837888 unmapped: 35020800 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:51.138829+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124837888 unmapped: 35020800 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
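[annotation] The _renew_subs/_send_mon_message pair shows the monclient refreshing its subscriptions with its monitor over msgr2; the logged address reads as protocol:ip:port/nonce, with 3300 being the standard msgr2 monitor port. A small convenience parser for that address form:

    import re

    ADDR_RE = re.compile(r"(?P<proto>v[12]):(?P<ip>[0-9.]+):(?P<port>\d+)/(?P<nonce>\d+)")

    def parse_addr(s):
        """Split a Ceph entity address such as 'v2:172.18.0.104:3300/0'."""
        m = ADDR_RE.search(s)
        return m.groupdict() if m else None

    print(parse_addr("mon.np0005546420 at v2:172.18.0.104:3300/0"))
    # {'proto': 'v2', 'ip': '172.18.0.104', 'port': '3300', 'nonce': '0'}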
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.208571434s of 12.614439011s, submitted: 131
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:52.139049+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124837888 unmapped: 35020800 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:53.139205+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1619492 data_alloc: 285212672 data_used: 4407296
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124837888 unmapped: 35020800 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b7a5e000/0x0/0x1bfc00000, data 0x311f0fc/0x326f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:54.139511+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124837888 unmapped: 35020800 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:55.139698+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124837888 unmapped: 35020800 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:56.139900+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124837888 unmapped: 35020800 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 198 ms_handle_reset con 0x55ee52fad800 session 0x55ee53c3e1e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:57.140135+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b7a5b000/0x0/0x1bfc00000, data 0x311f258/0x3271000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124846080 unmapped: 35012608 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 198 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee5440d4a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:58.140302+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1622596 data_alloc: 285212672 data_used: 4407296
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124583936 unmapped: 35274752 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:13:59.140468+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 124665856 unmapped: 35192832 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:00.140634+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 198 ms_handle_reset con 0x55ee566b6800 session 0x55ee56075860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 125779968 unmapped: 34078720 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:01.140771+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57062c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 198 ms_handle_reset con 0x55ee57062c00 session 0x55ee560743c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 134242304 unmapped: 25616384 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.666356087s of 10.200255394s, submitted: 98
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:02.149999+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 199 ms_handle_reset con 0x55ee566b6800 session 0x55ee53c3fc20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 125640704 unmapped: 34217984 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 199 ms_handle_reset con 0x55ee52fad800 session 0x55ee54436000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 199 heartbeat osd_stat(store_statfs(0x1b2a5a000/0x0/0x1bfc00000, data 0x811f2a9/0x8274000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:03.150151+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 200 ms_handle_reset con 0x55ee56c70800 session 0x55ee55e54780
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2244779 data_alloc: 285212672 data_used: 4419584
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 126754816 unmapped: 33103872 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 200 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee53c3fa40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:04.150336+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135266304 unmapped: 24592384 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57062c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:05.150507+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135282688 unmapped: 24576000 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:06.150763+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135356416 unmapped: 24502272 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 201 ms_handle_reset con 0x55ee57062c00 session 0x55ee56bd7e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:07.150936+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 127123456 unmapped: 32735232 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:08.151113+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2745840 data_alloc: 285212672 data_used: 4431872
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135798784 unmapped: 24059904 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 201 heartbeat osd_stat(store_statfs(0x1aea4b000/0x0/0x1bfc00000, data 0xc125e5c/0xc282000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,1])
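[annotation] This heartbeat is one of the few with a non-empty op histogram. To the best of my reading, "op hist [...]" is osd_stat's power-of-two op-queue age histogram, so the single count in bucket 8 would mean one queued op roughly 128-256 ms old; treat both the millisecond unit and the bucket interpretation as assumptions rather than documented behaviour. Decoding it under those assumptions:

    # Decode 'op hist [...]' assuming a pow2 histogram of op-queue ages in ms:
    # bucket i counts ops aged roughly [2**(i-1), 2**i).
    hist = [0, 0, 0, 0, 0, 0, 0, 0, 1]
    for i, n in enumerate(hist):
        if n:
            print(f"bucket {i}: {n} op(s), ~{2**max(i - 1, 0)}-{2**i} ms old")
    # bucket 8: 1 op(s), ~128-256 ms old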
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 201 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee56bd7860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:09.151305+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 201 ms_handle_reset con 0x55ee52fad800 session 0x55ee53b2ab40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136019968 unmapped: 23838720 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:10.151489+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 202 ms_handle_reset con 0x55ee566b6800 session 0x55ee566c3e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 127860736 unmapped: 31997952 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:11.151706+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 128049152 unmapped: 31809536 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.009266853s of 10.127912521s, submitted: 128
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:12.151903+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 128155648 unmapped: 31703040 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:13.152070+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3518588 data_alloc: 285212672 data_used: 4448256
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 128335872 unmapped: 31522816 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 202 heartbeat osd_stat(store_statfs(0x1a7a49000/0x0/0x1bfc00000, data 0x131281cd/0x13285000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:14.152429+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 128458752 unmapped: 31399936 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 202 ms_handle_reset con 0x55ee56c70800 session 0x55ee566c2f00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:15.153099+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136986624 unmapped: 22872064 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:16.153315+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 137027584 unmapped: 22831104 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:17.153467+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 heartbeat osd_stat(store_statfs(0x1a4249000/0x0/0x1bfc00000, data 0x1692a330/0x16a84000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 137240576 unmapped: 22618112 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56318000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee56318000 session 0x55ee566c3c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:18.153631+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4014497 data_alloc: 285212672 data_used: 4460544
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 128991232 unmapped: 30867456 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee52fad800 session 0x55ee566c2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:19.153783+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 129089536 unmapped: 30769152 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee56bd2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee566b6800 session 0x55ee560c9860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:20.154055+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee56c70800 session 0x55ee55e30d20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 137109504 unmapped: 22749184 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:21.154233+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 heartbeat osd_stat(store_statfs(0x19fa48000/0x0/0x1bfc00000, data 0x1b12a351/0x1b286000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c71c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 128860160 unmapped: 30998528 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee56c71c00 session 0x55ee560c90e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.612820625s of 10.002184868s, submitted: 136
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:22.154424+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee52fad800 session 0x55ee560c85a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 128917504 unmapped: 30941184 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:23.154598+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee53bb21e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4459690 data_alloc: 285212672 data_used: 4460544
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 130113536 unmapped: 29745152 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee56c70800 session 0x55ee565d4b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee566b6800 session 0x55ee56bd8d20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57505000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:24.154759+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 130220032 unmapped: 29638656 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 heartbeat osd_stat(store_statfs(0x19d818000/0x0/0x1bfc00000, data 0x1d35937b/0x1d4b6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:25.154904+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 heartbeat osd_stat(store_statfs(0x19d818000/0x0/0x1bfc00000, data 0x1d35937b/0x1d4b6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 130220032 unmapped: 29638656 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:26.155058+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 130236416 unmapped: 29622272 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee57505000 session 0x55ee566c3a40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee52fad800 session 0x55ee55e3d2c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:27.155264+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 137322496 unmapped: 22536192 heap: 159858688 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee5223f680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 heartbeat osd_stat(store_statfs(0x19d818000/0x0/0x1bfc00000, data 0x1d3593b4/0x1d4b6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:28.155612+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee566b6800 session 0x55ee55e3cf00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4914125 data_alloc: 285212672 data_used: 4460544
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133750784 unmapped: 34504704 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:29.155774+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133881856 unmapped: 34373632 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:30.155937+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133947392 unmapped: 34308096 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:31.156124+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 142499840 unmapped: 25755648 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee56c70800 session 0x55ee55e3d0e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:32.156317+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 heartbeat osd_stat(store_statfs(0x197df0000/0x0/0x1bfc00000, data 0x229383b3/0x22a95000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 142548992 unmapped: 25706496 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.893447876s of 10.535332680s, submitted: 198
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:33.156471+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 5200495 data_alloc: 285212672 data_used: 4460544
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 134168576 unmapped: 34086912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:34.156638+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 134242304 unmapped: 34013184 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:35.156788+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 134381568 unmapped: 33873920 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b7400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:36.163205+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 ms_handle_reset con 0x55ee566b7400 session 0x55ee565d5e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 204 heartbeat osd_stat(store_statfs(0x19563a000/0x0/0x1bfc00000, data 0x251383a3/0x25294000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 134471680 unmapped: 33783808 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:37.163372+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 142966784 unmapped: 25288704 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:38.163544+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 204 ms_handle_reset con 0x55ee52fad800 session 0x55ee565d4960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 5633451 data_alloc: 285212672 data_used: 4472832
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 134701056 unmapped: 33554432 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:39.163696+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 134864896 unmapped: 33390592 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 205 heartbeat osd_stat(store_statfs(0x191e40000/0x0/0x1bfc00000, data 0x2892e9b9/0x28a8d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:40.163876+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 143425536 unmapped: 24829952 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:41.164021+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135184384 unmapped: 33071104 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:42.164155+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135266304 unmapped: 32989184 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.644872665s of 10.320636749s, submitted: 110
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 205 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee565d5860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:43.164324+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 6132823 data_alloc: 285212672 data_used: 4489216
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135430144 unmapped: 32825344 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:44.164585+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 206 ms_handle_reset con 0x55ee56668000 session 0x55ee55743680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 206 ms_handle_reset con 0x55ee566b6800 session 0x55ee559221e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135446528 unmapped: 32808960 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:45.164765+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 206 heartbeat osd_stat(store_statfs(0x18ee38000/0x0/0x1bfc00000, data 0x2b930e04/0x2ba95000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135446528 unmapped: 32808960 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:46.164910+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135479296 unmapped: 32776192 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:47.165024+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 ms_handle_reset con 0x55ee56c70800 session 0x55ee5223e1e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133414912 unmapped: 34840576 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:48.165177+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1809443 data_alloc: 285212672 data_used: 4513792
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133414912 unmapped: 34840576 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:49.165302+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 ms_handle_reset con 0x55ee56c70800 session 0x55ee56bd2d20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133414912 unmapped: 34840576 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:50.165439+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133423104 unmapped: 34832384 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 ms_handle_reset con 0x55ee52fad800 session 0x55ee56075c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:51.165583+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 heartbeat osd_stat(store_statfs(0x1b7634000/0x0/0x1bfc00000, data 0x3135318/0x329a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133464064 unmapped: 34791424 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:52.165722+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133464064 unmapped: 34791424 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 ms_handle_reset con 0x55ee56668000 session 0x55ee55e3de00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee56bd8780
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b6800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.229310989s of 10.026658058s, submitted: 230
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 ms_handle_reset con 0x55ee566b6800 session 0x55ee56bd72c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 ms_handle_reset con 0x55ee57504400 session 0x55ee565d5c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:53.165892+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 heartbeat osd_stat(store_statfs(0x1b7637000/0x0/0x1bfc00000, data 0x3135327/0x3297000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1805459 data_alloc: 285212672 data_used: 4513792
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133464064 unmapped: 34791424 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 heartbeat osd_stat(store_statfs(0x1b7637000/0x0/0x1bfc00000, data 0x3135327/0x3297000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:54.166131+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133464064 unmapped: 34791424 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:55.166272+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133488640 unmapped: 34766848 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:56.166485+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133488640 unmapped: 34766848 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:57.166633+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133111808 unmapped: 35143680 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 ms_handle_reset con 0x55ee52fad800 session 0x55ee53a41a40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:58.166797+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1811217 data_alloc: 285212672 data_used: 4526080
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133111808 unmapped: 35143680 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b7633000/0x0/0x1bfc00000, data 0x3137628/0x329b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:14:59.166925+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133111808 unmapped: 35143680 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:00.167114+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133111808 unmapped: 35143680 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:01.167288+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b7632000/0x0/0x1bfc00000, data 0x3137638/0x329c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133111808 unmapped: 35143680 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:02.167463+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133120000 unmapped: 35135488 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:03.167667+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1814653 data_alloc: 285212672 data_used: 4526080
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133120000 unmapped: 35135488 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.045143127s of 11.170186996s, submitted: 40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:04.167847+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee53a40000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 ms_handle_reset con 0x55ee56668000 session 0x55ee5576de00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56c70800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 ms_handle_reset con 0x55ee56c70800 session 0x55ee560750e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133136384 unmapped: 35119104 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:05.170391+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133136384 unmapped: 35119104 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:06.172711+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133136384 unmapped: 35119104 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b7633000/0x0/0x1bfc00000, data 0x313777d/0x329b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:07.173779+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133136384 unmapped: 35119104 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:08.174234+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b7633000/0x0/0x1bfc00000, data 0x313777d/0x329b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1812276 data_alloc: 285212672 data_used: 4526080
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133136384 unmapped: 35119104 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:09.175096+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b7632000/0x0/0x1bfc00000, data 0x3137818/0x329c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133136384 unmapped: 35119104 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/155186084' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:10.175597+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133136384 unmapped: 35119104 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b7631000/0x0/0x1bfc00000, data 0x31378b3/0x329d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:11.176548+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:12.177244+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b7633000/0x0/0x1bfc00000, data 0x3137847/0x329b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:13.178006+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b7633000/0x0/0x1bfc00000, data 0x3137847/0x329b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1814272 data_alloc: 285212672 data_used: 4526080
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:14.178194+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:15.178646+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b7633000/0x0/0x1bfc00000, data 0x3137847/0x329b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:16.178905+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.987096786s of 12.093241692s, submitted: 26
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:17.179108+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:18.179351+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1817784 data_alloc: 285212672 data_used: 4538368
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:19.179680+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:20.180105+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:21.180317+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 210 heartbeat osd_stat(store_statfs(0x1b762f000/0x0/0x1bfc00000, data 0x3139bf1/0x329e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 35110912 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:22.180618+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133152768 unmapped: 35102720 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:23.180842+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 210 heartbeat osd_stat(store_statfs(0x1b762f000/0x0/0x1bfc00000, data 0x3139bf1/0x329e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1820208 data_alloc: 285212672 data_used: 4538368
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133169152 unmapped: 35086336 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:24.181135+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133169152 unmapped: 35086336 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:25.181369+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 210 heartbeat osd_stat(store_statfs(0x1b762f000/0x0/0x1bfc00000, data 0x3139ccc/0x329f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133169152 unmapped: 35086336 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:26.181595+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.222406387s of 10.382949829s, submitted: 63
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133169152 unmapped: 35086336 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:27.181795+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 212 heartbeat osd_stat(store_statfs(0x1b7629000/0x0/0x1bfc00000, data 0x313c027/0x32a4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133193728 unmapped: 35061760 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:28.182179+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1832768 data_alloc: 285212672 data_used: 4550656
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133259264 unmapped: 34996224 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:29.182425+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133283840 unmapped: 34971648 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:30.182710+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 213 ms_handle_reset con 0x55ee52fad800 session 0x55ee5223e1e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133316608 unmapped: 34938880 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 213 heartbeat osd_stat(store_statfs(0x1b761f000/0x0/0x1bfc00000, data 0x314064b/0x32ac000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:31.182912+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 213 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee53b2af00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133341184 unmapped: 34914304 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:32.183094+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 215 handle_osd_map epochs [215,215], i have 215, src has [1,215]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133373952 unmapped: 34881536 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:33.183237+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1847366 data_alloc: 285212672 data_used: 4575232
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 133398528 unmapped: 34856960 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:34.183439+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 132571136 unmapped: 35684352 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 216 ms_handle_reset con 0x55ee56668000 session 0x55ee565d5e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:35.183599+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 217 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 132603904 unmapped: 35651584 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:36.183899+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 217 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 132603904 unmapped: 35651584 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b7610000/0x0/0x1bfc00000, data 0x3149485/0x32bd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.800498009s of 10.427931786s, submitted: 163
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 217 handle_osd_map epochs [218,218], i have 218, src has [1,218]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 218 handle_osd_map epochs [218,218], i have 218, src has [1,218]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:37.184123+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 218 ms_handle_reset con 0x55ee57504400 session 0x55ee565d5860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 132653056 unmapped: 35602432 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:38.184295+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1863381 data_alloc: 285212672 data_used: 4599808
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 132734976 unmapped: 35520512 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:39.184617+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 132734976 unmapped: 35520512 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:40.184812+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7606000/0x0/0x1bfc00000, data 0x314db69/0x32c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 220 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 134873088 unmapped: 33382400 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:41.185155+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 220 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Got map version 51
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135045120 unmapped: 33210368 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56234400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 221 ms_handle_reset con 0x55ee56234400 session 0x55ee565d41e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:42.185370+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 221 handle_osd_map epochs [221,222], i have 221, src has [1,222]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 222 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135127040 unmapped: 33128448 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:43.186033+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 222 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee52fad800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 222 ms_handle_reset con 0x55ee52fad800 session 0x55ee565d4000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1878435 data_alloc: 285212672 data_used: 4603904
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 135184384 unmapped: 33071104 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:44.186379+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136249344 unmapped: 32006144 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:45.186797+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 224 heartbeat osd_stat(store_statfs(0x1b6453000/0x0/0x1bfc00000, data 0x3156ce6/0x32d9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136249344 unmapped: 32006144 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:46.187143+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 225 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee55743680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136265728 unmapped: 31989760 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.403828621s of 10.005825996s, submitted: 169
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:47.187294+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 226 ms_handle_reset con 0x55ee56668000 session 0x55ee55e3d2c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136282112 unmapped: 31973376 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:48.187501+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 226 heartbeat osd_stat(store_statfs(0x1b6447000/0x0/0x1bfc00000, data 0x315d706/0x32e5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 226 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 226 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 226 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 226 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1895666 data_alloc: 285212672 data_used: 4628480
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136306688 unmapped: 31948800 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 227 heartbeat osd_stat(store_statfs(0x1b6442000/0x0/0x1bfc00000, data 0x315fa9c/0x32e9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:49.187654+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136347648 unmapped: 31907840 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:50.187830+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 227 heartbeat osd_stat(store_statfs(0x1b6446000/0x0/0x1bfc00000, data 0x315fa8c/0x32e8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136347648 unmapped: 31907840 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:51.188042+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 228 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 228 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 228 handle_osd_map epochs [228,228], i have 228, src has [1,228]
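The handle_osd_map lines track the OSD catching up to the cluster map: "i have N" is the OSD's current epoch and "src has [1,M]" the sender's available range. A small helper, assuming only the message format shown above, that extracts the epoch lag (Python):

    import re

    # Format: "handle_osd_map epochs [A,B], i have N, src has [1,M]"
    PAT = re.compile(
        r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]"
    )

    def epoch_lag(line):
        """Return (have, src_max, lag) for a handle_osd_map line, else None."""
        m = PAT.search(line)
        if not m:
            return None
        have, src_max = int(m.group(3)), int(m.group(5))
        return have, src_max, src_max - have

    # epoch_lag("... i have 227, src has [1,228]") -> (227, 228, 1)

Throughout this excerpt the lag never exceeds one epoch, so osd.1 is keeping pace with the map churn.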
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136364032 unmapped: 31891456 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:52.188265+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 228 heartbeat osd_stat(store_statfs(0x1b6441000/0x0/0x1bfc00000, data 0x3161daa/0x32ec000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136364032 unmapped: 31891456 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 228 heartbeat osd_stat(store_statfs(0x1b6440000/0x0/0x1bfc00000, data 0x3161e0c/0x32ed000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 228 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:53.188429+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1901259 data_alloc: 285212672 data_used: 4653056
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136396800 unmapped: 31858688 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:54.188641+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 229 ms_handle_reset con 0x55ee57504400 session 0x55ee55e3cf00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136404992 unmapped: 31850496 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:55.188788+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136429568 unmapped: 31825920 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:56.188989+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 231 handle_osd_map epochs [230,231], i have 231, src has [1,231]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136470528 unmapped: 31784960 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6434000/0x0/0x1bfc00000, data 0x31686f3/0x32f8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 231 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.748531342s of 10.100909233s, submitted: 158
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:57.189147+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136486912 unmapped: 31768576 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:58.189307+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1908495 data_alloc: 285212672 data_used: 4661248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136486912 unmapped: 31768576 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:15:59.189592+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136486912 unmapped: 31768576 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:00.189798+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136486912 unmapped: 31768576 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:01.190015+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 233 heartbeat osd_stat(store_statfs(0x1b6431000/0x0/0x1bfc00000, data 0x316ac2e/0x32fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 233 ms_handle_reset con 0x55ee566b9c00 session 0x55ee5223f680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136519680 unmapped: 31735808 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:02.190328+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136527872 unmapped: 31727616 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:03.190696+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1914666 data_alloc: 285212672 data_used: 4681728
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136527872 unmapped: 31727616 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:04.191004+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 233 heartbeat osd_stat(store_statfs(0x1b642e000/0x0/0x1bfc00000, data 0x316ce35/0x32ff000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136527872 unmapped: 31727616 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:05.191110+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 233 handle_osd_map epochs [233,234], i have 233, src has [1,234]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136544256 unmapped: 31711232 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:06.191411+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136609792 unmapped: 31645696 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:07.191649+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136609792 unmapped: 31645696 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:08.191905+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 234 heartbeat osd_stat(store_statfs(0x1b6428000/0x0/0x1bfc00000, data 0x316f092/0x3303000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1921342 data_alloc: 285212672 data_used: 4698112
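The recurring prioritycache tune_memory and _resize_shards lines show the memory autotuner holding steady: mapped memory stays near 136 MB against a target of 5709084876 bytes (~5.3 GiB), so "new mem" never moves from 4047415775. A sketch for extracting mapped usage as a fraction of the target, assuming only the line format above (Python):

    import re

    PAT = re.compile(r"tune_memory target: (\d+) mapped: (\d+) unmapped: (\d+) heap: (\d+)")

    def mapped_vs_target(line):
        """Return (mapped, target, percent_of_target) or None."""
        m = PAT.search(line)
        if not m:
            return None
        target, mapped = int(m.group(1)), int(m.group(2))
        return mapped, target, 100.0 * mapped / target

    # For the lines above: mapped 136486912 of target 5709084876 -> ~2.4%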
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136609792 unmapped: 31645696 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:09.192209+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136609792 unmapped: 31645696 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:10.192445+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136617984 unmapped: 31637504 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:11.192594+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136617984 unmapped: 31637504 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:12.192808+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 234 heartbeat osd_stat(store_statfs(0x1b6428000/0x0/0x1bfc00000, data 0x316f092/0x3303000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136617984 unmapped: 31637504 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:13.193007+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.786162376s of 16.142854691s, submitted: 114
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1918349 data_alloc: 285212672 data_used: 4698112
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136642560 unmapped: 31612928 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:14.193237+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136642560 unmapped: 31612928 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:15.193403+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136642560 unmapped: 31612928 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:16.193609+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136642560 unmapped: 31612928 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:17.193801+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136658944 unmapped: 31596544 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:18.193983+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 235 heartbeat osd_stat(store_statfs(0x1b6424000/0x0/0x1bfc00000, data 0x3171532/0x3309000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1927309 data_alloc: 285212672 data_used: 4710400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136667136 unmapped: 31588352 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 235 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee566c3e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:19.194127+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136708096 unmapped: 31547392 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:20.194277+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136732672 unmapped: 31522816 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:21.194428+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136732672 unmapped: 31522816 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:22.194560+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 236 heartbeat osd_stat(store_statfs(0x1b6423000/0x0/0x1bfc00000, data 0x317389e/0x330b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136732672 unmapped: 31522816 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:23.194752+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1927749 data_alloc: 285212672 data_used: 4722688
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136732672 unmapped: 31522816 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:24.194997+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136732672 unmapped: 31522816 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:25.195103+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.657583237s of 11.959821701s, submitted: 76
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 236 ms_handle_reset con 0x55ee56668000 session 0x55ee566c2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 236 ms_handle_reset con 0x55ee57504400 session 0x55ee566c3a40
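The handle_auth_request and ms_handle_reset messages can be correlated by the connection pointer, since the same 0x... value appears in both. A sketch, again against a hypothetical osd.log, that groups the two message types per connection (Python):

    import re
    from collections import defaultdict

    CHALLENGE = re.compile(r"handle_auth_request added challenge on (0x[0-9a-f]+)")
    RESET = re.compile(r"ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)")

    events = defaultdict(list)
    with open("osd.log") as fh:              # hypothetical file holding this excerpt
        for line in fh:
            m = CHALLENGE.search(line)
            if m:
                events[m.group(1)].append("challenge")
                continue
            m = RESET.search(line)
            if m:
                events[m.group(1)].append("reset session=" + m.group(2))

    for con, seq in events.items():
        print(con, "->", ", ".join(seq))

Note that connection addresses such as 0x55ee57504400 recur later in the log with new session pointers; grouping by pointer therefore shows reuse of the allocation, not necessarily the same peer.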
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136740864 unmapped: 31514624 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:26.195248+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee584dc000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 ms_handle_reset con 0x55ee584dc000 session 0x55ee56bd8d20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136781824 unmapped: 31473664 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:27.195432+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136790016 unmapped: 31465472 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:28.195576+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 heartbeat osd_stat(store_statfs(0x1b641a000/0x0/0x1bfc00000, data 0x3175c36/0x3312000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee562fb400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 ms_handle_reset con 0x55ee562fb400 session 0x55ee560c9860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1939187 data_alloc: 285212672 data_used: 4739072
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 136880128 unmapped: 31375360 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee560c85a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:29.195692+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 heartbeat osd_stat(store_statfs(0x1b641c000/0x0/0x1bfc00000, data 0x3175c1e/0x3311000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 ms_handle_reset con 0x55ee56668000 session 0x55ee53bb2780
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 ms_handle_reset con 0x55ee57504400 session 0x55ee55b3af00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 137175040 unmapped: 31080448 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:30.195883+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 heartbeat osd_stat(store_statfs(0x1b5d9b000/0x0/0x1bfc00000, data 0x37f6be5/0x3992000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee584dc000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 ms_handle_reset con 0x55ee584dc000 session 0x55ee55ed94a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566bb800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 ms_handle_reset con 0x55ee566bb800 session 0x55ee560c8960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 137199616 unmapped: 31055872 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:31.196045+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 137003008 unmapped: 31252480 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:32.196185+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 238 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee565d54a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee584dc000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 238 ms_handle_reset con 0x55ee56668000 session 0x55ee55ed8960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 238 ms_handle_reset con 0x55ee584dc000 session 0x55ee56bd6b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139616256 unmapped: 28639232 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 239 ms_handle_reset con 0x55ee57504400 session 0x55ee566c2d20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:33.196394+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 239 ms_handle_reset con 0x55ee544a2000 session 0x55ee566c21e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2093558 data_alloc: 285212672 data_used: 4747264
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139616256 unmapped: 28639232 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:34.196559+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139616256 unmapped: 28639232 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:35.196726+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 239 heartbeat osd_stat(store_statfs(0x1b53d6000/0x0/0x1bfc00000, data 0x41b794c/0x4358000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139616256 unmapped: 28639232 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:36.196882+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.761612892s of 11.549414635s, submitted: 176
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 239 ms_handle_reset con 0x55ee544a2000 session 0x55ee562ab860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140435456 unmapped: 27820032 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:37.197022+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 239 ms_handle_reset con 0x55ee56668000 session 0x55ee55bbf2c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:38.197158+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139862016 unmapped: 28393472 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 240 handle_osd_map epochs [240,240], i have 240, src has [1,240]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 240 ms_handle_reset con 0x55ee57504400 session 0x55ee55ed90e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee584dc000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 240 ms_handle_reset con 0x55ee584dc000 session 0x55ee55ed8960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 240 ms_handle_reset con 0x55ee53ab2c00 session 0x55ee5632a000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 240 handle_osd_map epochs [240,240], i have 240, src has [1,240]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2239114 data_alloc: 285212672 data_used: 4759552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:39.197307+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139886592 unmapped: 28368896 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 241 handle_osd_map epochs [240,241], i have 241, src has [1,241]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 241 handle_osd_map epochs [240,241], i have 241, src has [1,241]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 241 ms_handle_reset con 0x55ee544a2000 session 0x55ee55e31e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:40.197441+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139894784 unmapped: 28360704 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 241 heartbeat osd_stat(store_statfs(0x1b4fcf000/0x0/0x1bfc00000, data 0x41bc07e/0x435f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:41.197640+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139780096 unmapped: 28475392 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:42.197797+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139796480 unmapped: 28459008 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 242 ms_handle_reset con 0x55ee56668000 session 0x55ee55e54d20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:43.198028+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139837440 unmapped: 28418048 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2038804 data_alloc: 285212672 data_used: 4780032
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:44.198231+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 243 handle_osd_map epochs [243,243], i have 243, src has [1,243]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 243 ms_handle_reset con 0x55ee57504400 session 0x55ee53d45c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee584dc000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139599872 unmapped: 28655616 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 243 ms_handle_reset con 0x55ee584dc000 session 0x55ee55e54b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:45.198389+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 139640832 unmapped: 28614656 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 243 heartbeat osd_stat(store_statfs(0x1b6005000/0x0/0x1bfc00000, data 0x318315e/0x3327000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [0,0,1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:46.198564+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 243 heartbeat osd_stat(store_statfs(0x1b6002000/0x0/0x1bfc00000, data 0x3183238/0x3329000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 243 handle_osd_map epochs [244,244], i have 244, src has [1,244]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140115968 unmapped: 28139520 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 244 heartbeat osd_stat(store_statfs(0x1b6002000/0x0/0x1bfc00000, data 0x3183238/0x3329000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:47.198723+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.404898643s of 10.522976875s, submitted: 472
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Got map version 52
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140107776 unmapped: 28147712 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:48.198860+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140107776 unmapped: 28147712 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1995699 data_alloc: 285212672 data_used: 4784128
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:49.199018+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140107776 unmapped: 28147712 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:50.199136+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140107776 unmapped: 28147712 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:51.199289+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140107776 unmapped: 28147712 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b5ffa000/0x0/0x1bfc00000, data 0x31878cd/0x3332000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:52.199414+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee53ab2400 session 0x55ee5632a960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee53ab2400 session 0x55ee55b3a000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee544a2000 session 0x55ee5632b4a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140124160 unmapped: 28131328 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee56668000 session 0x55ee55e55e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:53.199554+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee57504400 session 0x55ee566c2960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee584dc000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee584dc000 session 0x55ee53bb2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee584dc000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee584dc000 session 0x55ee53a410e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee53ab2400 session 0x55ee562abc20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee544a2000 session 0x55ee53d4ef00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140394496 unmapped: 27860992 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2069163 data_alloc: 285212672 data_used: 4800512
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:54.199730+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140394496 unmapped: 27860992 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee56668000 session 0x55ee5440c960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:55.199859+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140394496 unmapped: 27860992 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee57504400 session 0x55ee53d12780
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee57504400 session 0x55ee55ed9680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:56.200471+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 ms_handle_reset con 0x55ee53ab2400 session 0x55ee539bc960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140410880 unmapped: 27844608 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b5932000/0x0/0x1bfc00000, data 0x384e8f8/0x39fb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:57.200634+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140427264 unmapped: 27828224 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.480721474s of 10.842885971s, submitted: 100
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:58.200783+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140468224 unmapped: 27787264 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2078595 data_alloc: 285212672 data_used: 5894144
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:16:59.200949+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140468224 unmapped: 27787264 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:00.201126+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140468224 unmapped: 27787264 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:01.201253+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140468224 unmapped: 27787264 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:02.201787+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140435456 unmapped: 27820032 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b5930000/0x0/0x1bfc00000, data 0x384e9cc/0x39fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:03.201913+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140435456 unmapped: 27820032 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2079531 data_alloc: 285212672 data_used: 5894144
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:04.202135+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140435456 unmapped: 27820032 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:05.202297+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 141484032 unmapped: 26771456 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:06.202469+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140369920 unmapped: 27885568 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:07.202678+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140369920 unmapped: 27885568 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:08.202889+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b592e000/0x0/0x1bfc00000, data 0x384eadb/0x39fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140369920 unmapped: 27885568 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2080433 data_alloc: 285212672 data_used: 5894144
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:09.203023+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140369920 unmapped: 27885568 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.342443466s of 11.525989532s, submitted: 38
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:10.203168+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 140378112 unmapped: 27877376 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:11.203451+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144957440 unmapped: 23298048 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:12.203582+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145039360 unmapped: 23216128 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:13.203737+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144891904 unmapped: 23363584 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:14.203908+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2162461 data_alloc: 285212672 data_used: 6291456
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b5056000/0x0/0x1bfc00000, data 0x412aae2/0x42d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144891904 unmapped: 23363584 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b5056000/0x0/0x1bfc00000, data 0x412aae2/0x42d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:15.204085+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144891904 unmapped: 23363584 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:16.204227+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144900096 unmapped: 23355392 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:17.204374+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144900096 unmapped: 23355392 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:18.204489+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144900096 unmapped: 23355392 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:19.204676+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2160877 data_alloc: 285212672 data_used: 6295552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144900096 unmapped: 23355392 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.453092575s of 10.004937172s, submitted: 148
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:20.204837+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b5056000/0x0/0x1bfc00000, data 0x412aacf/0x42d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144916480 unmapped: 23339008 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:21.205044+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144916480 unmapped: 23339008 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:22.205339+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144916480 unmapped: 23339008 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b5056000/0x0/0x1bfc00000, data 0x412ac00/0x42d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:23.205482+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144916480 unmapped: 23339008 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:24.205803+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2160139 data_alloc: 285212672 data_used: 6295552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144916480 unmapped: 23339008 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:25.206025+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144916480 unmapped: 23339008 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:26.206199+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 144916480 unmapped: 23339008 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b5056000/0x0/0x1bfc00000, data 0x412ac6a/0x42d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:27.290188+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145973248 unmapped: 22282240 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:28.290338+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145973248 unmapped: 22282240 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2163107 data_alloc: 285212672 data_used: 6307840
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:29.290534+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.813672066s of 10.004126549s, submitted: 54
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145973248 unmapped: 22282240 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:30.290716+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145981440 unmapped: 22274048 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:31.290848+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145981440 unmapped: 22274048 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 246 heartbeat osd_stat(store_statfs(0x1b5054000/0x0/0x1bfc00000, data 0x412d00a/0x42d9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:32.291052+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145981440 unmapped: 22274048 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:33.291206+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145981440 unmapped: 22274048 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2162035 data_alloc: 285212672 data_used: 6307840
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:34.291420+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145981440 unmapped: 22274048 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:35.291563+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145981440 unmapped: 22274048 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 246 heartbeat osd_stat(store_statfs(0x1b5054000/0x0/0x1bfc00000, data 0x412d0a1/0x42d9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:36.291743+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145981440 unmapped: 22274048 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56259400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 247 ms_handle_reset con 0x55ee56259400 session 0x55ee55742b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:37.291900+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 247 heartbeat osd_stat(store_statfs(0x1b504e000/0x0/0x1bfc00000, data 0x412f2e9/0x42de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146006016 unmapped: 22249472 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 248 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee569b1c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 248 ms_handle_reset con 0x55ee566b9000 session 0x55ee539bc3c0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:38.292073+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146014208 unmapped: 22241280 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 249 ms_handle_reset con 0x55ee569b1c00 session 0x55ee557421e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 249 ms_handle_reset con 0x55ee53ab2400 session 0x55ee56075860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2176206 data_alloc: 285212672 data_used: 6324224
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:39.292241+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 249 heartbeat osd_stat(store_statfs(0x1b504b000/0x0/0x1bfc00000, data 0x41315fb/0x42e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146030592 unmapped: 22224896 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.900971413s of 10.077557564s, submitted: 48
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:40.292409+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146030592 unmapped: 22224896 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:41.292555+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56259400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 249 ms_handle_reset con 0x55ee56259400 session 0x55ee560a14a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146038784 unmapped: 22216704 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:42.292722+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 250 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 250 ms_handle_reset con 0x55ee57504400 session 0x55ee53a40b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146063360 unmapped: 22192128 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:43.292864+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 251 ms_handle_reset con 0x55ee566b9000 session 0x55ee55b3bc20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 251 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 251 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145612800 unmapped: 22642688 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 251 heartbeat osd_stat(store_statfs(0x1b5042000/0x0/0x1bfc00000, data 0x41361b1/0x42eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2184970 data_alloc: 285212672 data_used: 6336512
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:44.293077+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56318000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145620992 unmapped: 22634496 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:45.293262+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 252 ms_handle_reset con 0x55ee56318000 session 0x55ee539bde00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145620992 unmapped: 22634496 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:46.293401+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145645568 unmapped: 22609920 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 252 heartbeat osd_stat(store_statfs(0x1b5036000/0x0/0x1bfc00000, data 0x413b9ee/0x42f6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:47.293561+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 252 ms_handle_reset con 0x55ee53ab2400 session 0x55ee55e4f680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145645568 unmapped: 22609920 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 252 heartbeat osd_stat(store_statfs(0x1b5034000/0x0/0x1bfc00000, data 0x413b9f1/0x42f6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:48.293749+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 252 heartbeat osd_stat(store_statfs(0x1b5038000/0x0/0x1bfc00000, data 0x413b9f1/0x42f6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [0,0,1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 252 handle_osd_map epochs [253,253], i have 253, src has [1,253]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 252 handle_osd_map epochs [253,253], i have 253, src has [1,253]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145645568 unmapped: 22609920 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56259400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 253 ms_handle_reset con 0x55ee56259400 session 0x55ee55db8d20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:49.293905+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2192943 data_alloc: 285212672 data_used: 6361088
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 254 ms_handle_reset con 0x55ee566b9000 session 0x55ee5632b0e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.699501038s of 10.063829422s, submitted: 87
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145473536 unmapped: 22781952 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:50.294133+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145473536 unmapped: 22781952 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 254 heartbeat osd_stat(store_statfs(0x1b5030000/0x0/0x1bfc00000, data 0x414054b/0x42fe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:51.294333+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145481728 unmapped: 22773760 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b502b000/0x0/0x1bfc00000, data 0x4142743/0x4302000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 256 ms_handle_reset con 0x55ee57504400 session 0x55ee54439c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:52.294509+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145481728 unmapped: 22773760 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a3c00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:53.294716+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145498112 unmapped: 22757376 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 256 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:54.294915+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2209790 data_alloc: 285212672 data_used: 6385664
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 257 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 257 ms_handle_reset con 0x55ee544a3c00 session 0x55ee55d9c780
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 257 ms_handle_reset con 0x55ee53ab2400 session 0x55ee53d4cf00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145522688 unmapped: 22732800 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b5022000/0x0/0x1bfc00000, data 0x41469bb/0x4309000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:55.295102+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145530880 unmapped: 22724608 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:56.295252+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 257 ms_handle_reset con 0x55ee544a2000 session 0x55ee53d121e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145555456 unmapped: 22700032 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 257 ms_handle_reset con 0x55ee56668000 session 0x55ee56bd34a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56259400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:57.295383+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 257 ms_handle_reset con 0x55ee56259400 session 0x55ee53b2b0e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145121280 unmapped: 23134208 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:58.296591+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145121280 unmapped: 23134208 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b5c1f000/0x0/0x1bfc00000, data 0x31a23a3/0x3360000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:17:59.296730+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2066022 data_alloc: 285212672 data_used: 4886528
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145121280 unmapped: 23134208 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.734122276s of 10.220836639s, submitted: 162
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:00.296874+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145121280 unmapped: 23134208 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:01.297009+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 258 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 258 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145170432 unmapped: 23085056 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:02.297150+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145170432 unmapped: 23085056 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 259 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 260 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:03.297295+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 23076864 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 261 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:04.297472+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2083148 data_alloc: 285212672 data_used: 4898816
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 261 heartbeat osd_stat(store_statfs(0x1b5fbe000/0x0/0x1bfc00000, data 0x31a8d87/0x336d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145252352 unmapped: 23003136 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:05.297633+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145252352 unmapped: 23003136 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:06.297770+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145252352 unmapped: 23003136 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:07.297979+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145260544 unmapped: 22994944 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:08.298119+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145260544 unmapped: 22994944 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 261 heartbeat osd_stat(store_statfs(0x1b5fbd000/0x0/0x1bfc00000, data 0x31ab1a8/0x336e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:09.298273+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2079206 data_alloc: 285212672 data_used: 4898816
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145268736 unmapped: 22986752 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:10.298507+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145268736 unmapped: 22986752 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:11.298735+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.822041512s of 11.307252884s, submitted: 140
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146317312 unmapped: 21938176 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:12.299056+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 262 heartbeat osd_stat(store_statfs(0x1b5fbb000/0x0/0x1bfc00000, data 0x31ad36b/0x3372000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146317312 unmapped: 21938176 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:13.299198+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146317312 unmapped: 21938176 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:14.299370+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2084262 data_alloc: 285212672 data_used: 4911104
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146317312 unmapped: 21938176 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:15.299488+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146317312 unmapped: 21938176 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:16.299756+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146317312 unmapped: 21938176 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 262 heartbeat osd_stat(store_statfs(0x1b5fb9000/0x0/0x1bfc00000, data 0x31ad403/0x3372000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:17.299936+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146317312 unmapped: 21938176 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:18.300161+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 262 heartbeat osd_stat(store_statfs(0x1b5fbb000/0x0/0x1bfc00000, data 0x31ad3cf/0x3372000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146317312 unmapped: 21938176 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:19.300542+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2083078 data_alloc: 285212672 data_used: 4911104
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146325504 unmapped: 21929984 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:20.300731+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146325504 unmapped: 21929984 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 262 heartbeat osd_stat(store_statfs(0x1b5fbc000/0x0/0x1bfc00000, data 0x31ad467/0x3371000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:21.300945+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146325504 unmapped: 21929984 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:22.302067+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146325504 unmapped: 21929984 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:23.302304+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.839811325s of 11.957272530s, submitted: 41
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146358272 unmapped: 21897216 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 262 heartbeat osd_stat(store_statfs(0x1b5fbc000/0x0/0x1bfc00000, data 0x31ad502/0x3371000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:24.302856+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2084186 data_alloc: 285212672 data_used: 4911104
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146358272 unmapped: 21897216 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:25.303242+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146358272 unmapped: 21897216 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:26.303531+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 heartbeat osd_stat(store_statfs(0x1b5fbb000/0x0/0x1bfc00000, data 0x31ad564/0x3372000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146374656 unmapped: 21880832 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:27.303715+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 ms_handle_reset con 0x55ee566b9000 session 0x55ee55db90e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146374656 unmapped: 21880832 heap: 168255488 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 heartbeat osd_stat(store_statfs(0x1b5fb5000/0x0/0x1bfc00000, data 0x31af97c/0x3378000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:28.304143+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 163725312 unmapped: 17129472 heap: 180854784 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:29.304374+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2645179 data_alloc: 285212672 data_used: 4923392
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 150806528 unmapped: 38453248 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:30.304533+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146620416 unmapped: 42639360 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:31.304686+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 155074560 unmapped: 34185216 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 heartbeat osd_stat(store_statfs(0x1ab3b4000/0x0/0x1bfc00000, data 0xddaf9c3/0xdf79000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:32.304950+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146685952 unmapped: 42573824 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:33.305135+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.687902451s of 10.098076820s, submitted: 92
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 heartbeat osd_stat(store_statfs(0x1a73b2000/0x0/0x1bfc00000, data 0x11dafb89/0x11f7a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,0,2])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 155115520 unmapped: 34144256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:34.305456+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 heartbeat osd_stat(store_statfs(0x1a73b2000/0x0/0x1bfc00000, data 0x11dafb89/0x11f7a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3862971 data_alloc: 285212672 data_used: 4923392
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162529280 unmapped: 26730496 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:35.305658+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 155205632 unmapped: 34054144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:36.305824+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146817024 unmapped: 42442752 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:37.306025+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 155230208 unmapped: 34029568 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:38.306195+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146833408 unmapped: 42426368 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 heartbeat osd_stat(store_statfs(0x19c7b3000/0x0/0x1bfc00000, data 0x1c9afbef/0x1cb7b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:39.306369+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 5123785 data_alloc: 285212672 data_used: 4923392
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 heartbeat osd_stat(store_statfs(0x19c7b3000/0x0/0x1bfc00000, data 0x1c9afbef/0x1cb7b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146866176 unmapped: 42393600 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:40.306573+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 151068672 unmapped: 38191104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:41.306753+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 heartbeat osd_stat(store_statfs(0x1973b5000/0x0/0x1bfc00000, data 0x21dafbee/0x21f79000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159498240 unmapped: 29761536 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:42.306917+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 ms_handle_reset con 0x55ee53ab2400 session 0x55ee56bd7860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 ms_handle_reset con 0x55ee544a2000 session 0x55ee560c9860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 ms_handle_reset con 0x55ee566b9000 session 0x55ee55d1a1e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56259400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 ms_handle_reset con 0x55ee56259400 session 0x55ee55db94a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 heartbeat osd_stat(store_statfs(0x1957b4000/0x0/0x1bfc00000, data 0x239b00ed/0x23b7a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146948096 unmapped: 42311680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:43.307037+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 ms_handle_reset con 0x55ee56668000 session 0x55ee53d4dc20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.475783348s of 10.647709846s, submitted: 108
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146956288 unmapped: 42303488 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:44.307191+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3475617 data_alloc: 285212672 data_used: 4923392
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 ms_handle_reset con 0x55ee56668000 session 0x55ee544361e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 ms_handle_reset con 0x55ee53ab2400 session 0x55ee562aa5a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146677760 unmapped: 42582016 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:45.307362+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 ms_handle_reset con 0x55ee544a2000 session 0x55ee560a14a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146677760 unmapped: 42582016 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:46.307538+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146677760 unmapped: 42582016 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:47.307685+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 264 heartbeat osd_stat(store_statfs(0x1b5fb7000/0x0/0x1bfc00000, data 0x31afc9b/0x3376000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146677760 unmapped: 42582016 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:48.307851+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146677760 unmapped: 42582016 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:49.308017+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 265 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 265 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 265 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2246583 data_alloc: 285212672 data_used: 4935680
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 265 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 145973248 unmapped: 43286528 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:50.308156+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56259400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 265 handle_osd_map epochs [266,266], i have 266, src has [1,266]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 266 ms_handle_reset con 0x55ee56259400 session 0x55ee557421e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 266 handle_osd_map epochs [266,266], i have 266, src has [1,266]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 266 handle_osd_map epochs [266,266], i have 266, src has [1,266]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146063360 unmapped: 43196416 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:51.308336+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 266 handle_osd_map epochs [266,266], i have 266, src has [1,266]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146153472 unmapped: 43106304 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:52.308545+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 267 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 267 ms_handle_reset con 0x55ee566b9000 session 0x55ee55e305a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146161664 unmapped: 43098112 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:53.308690+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b5b4f000/0x0/0x1bfc00000, data 0x320de82/0x33dd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 147382272 unmapped: 41877504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:54.308918+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.577130318s of 10.391112328s, submitted: 274
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2268453 data_alloc: 285212672 data_used: 4956160
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 147382272 unmapped: 41877504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:55.309053+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 268 ms_handle_reset con 0x55ee566b9000 session 0x55ee55743c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 268 ms_handle_reset con 0x55ee544a2000 session 0x55ee55ed8000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 268 ms_handle_reset con 0x55ee53ab2400 session 0x55ee53d12780
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 147390464 unmapped: 41869312 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56259400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:56.309162+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 269 handle_osd_map epochs [268,269], i have 269, src has [1,269]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 269 ms_handle_reset con 0x55ee56259400 session 0x55ee53d4ef00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 269 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 269 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56668000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 269 ms_handle_reset con 0x55ee56668000 session 0x55ee562abc20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 146776064 unmapped: 42483712 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:57.309272+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 269 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 269 heartbeat osd_stat(store_statfs(0x1b5ab2000/0x0/0x1bfc00000, data 0x32a4c20/0x347a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [0,0,1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 269 ms_handle_reset con 0x55ee53ab2400 session 0x55ee53a410e0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 147013632 unmapped: 42246144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:58.309480+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 269 ms_handle_reset con 0x55ee544a2000 session 0x55ee566c2960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 147013632 unmapped: 42246144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:18:59.309643+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 270 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56259400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 270 ms_handle_reset con 0x55ee56259400 session 0x55ee55e55e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2279975 data_alloc: 285212672 data_used: 4968448
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 271 ms_handle_reset con 0x55ee566b9000 session 0x55ee55b3a000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 148103168 unmapped: 41156608 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:00.309783+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 271 heartbeat osd_stat(store_statfs(0x1b5a6b000/0x0/0x1bfc00000, data 0x32eb95d/0x34c1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 148201472 unmapped: 41058304 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:01.310027+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 272 heartbeat osd_stat(store_statfs(0x1b5a58000/0x0/0x1bfc00000, data 0x32fcc42/0x34d3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 149602304 unmapped: 39657472 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:02.310225+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee57504400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 273 handle_osd_map epochs [273,273], i have 273, src has [1,273]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 273 ms_handle_reset con 0x55ee57504400 session 0x55ee55e54b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 149954560 unmapped: 39305216 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 273 handle_osd_map epochs [273,273], i have 273, src has [1,273]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:03.310354+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 274 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 274 ms_handle_reset con 0x55ee53ab2400 session 0x55ee53d45c20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 274 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 151019520 unmapped: 38240256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:04.310516+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 274 ms_handle_reset con 0x55ee544a2000 session 0x55ee55e31e00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.296730995s of 10.000399590s, submitted: 275
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2318390 data_alloc: 285212672 data_used: 4980736
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 151126016 unmapped: 38133760 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 274 heartbeat osd_stat(store_statfs(0x1b59ba000/0x0/0x1bfc00000, data 0x3391021/0x356f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:05.310683+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56259400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 274 ms_handle_reset con 0x55ee56259400 session 0x55ee5632a000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 151314432 unmapped: 37945344 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 275 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:06.310819+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 275 ms_handle_reset con 0x55ee566b9000 session 0x55ee55ed8960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 275 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee5622f800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 275 ms_handle_reset con 0x55ee5622f800 session 0x55ee55e545a0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 152436736 unmapped: 36823040 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:07.311012+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 276 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 276 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee53ab2400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 276 ms_handle_reset con 0x55ee53ab2400 session 0x55ee562ab860
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 152551424 unmapped: 36708352 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:08.311160+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 152559616 unmapped: 36700160 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:09.311288+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 276 heartbeat osd_stat(store_statfs(0x1b5963000/0x0/0x1bfc00000, data 0x33ece29/0x35cb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2327615 data_alloc: 285212672 data_used: 4997120
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 152625152 unmapped: 36634624 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:10.311400+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 276 heartbeat osd_stat(store_statfs(0x1b5916000/0x0/0x1bfc00000, data 0x3437ae2/0x3617000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 152625152 unmapped: 36634624 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:11.311580+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 277 handle_osd_map epochs [277,277], i have 277, src has [1,277]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 152936448 unmapped: 36323328 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:12.311731+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 278 handle_osd_map epochs [278,278], i have 278, src has [1,278]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 153993216 unmapped: 35266560 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:13.311938+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 153993216 unmapped: 35266560 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:14.312217+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2341931 data_alloc: 285212672 data_used: 5029888
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b589a000/0x0/0x1bfc00000, data 0x34afc68/0x3692000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 154337280 unmapped: 34922496 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:15.312346+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.111511230s of 11.725858688s, submitted: 242
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 279 heartbeat osd_stat(store_statfs(0x1b589a000/0x0/0x1bfc00000, data 0x34afc68/0x3692000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:16.312509+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 153362432 unmapped: 35897344 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:17.312764+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 153370624 unmapped: 35889152 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:18.312993+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 152436736 unmapped: 36823040 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:19.313151+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 152518656 unmapped: 36741120 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2354179 data_alloc: 285212672 data_used: 5042176
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 279 heartbeat osd_stat(store_statfs(0x1b5829000/0x0/0x1bfc00000, data 0x3520ebb/0x3705000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:20.313326+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 153731072 unmapped: 35528704 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:21.313512+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 153862144 unmapped: 35397632 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:22.313692+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 281 handle_osd_map epochs [281,281], i have 281, src has [1,281]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 153862144 unmapped: 35397632 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 281 handle_osd_map epochs [280,280], i have 281, src has [1,280]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:23.314504+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 154058752 unmapped: 35201024 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:24.314727+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 154181632 unmapped: 35078144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2359427 data_alloc: 285212672 data_used: 5054464
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 281 heartbeat osd_stat(store_statfs(0x1b576e000/0x0/0x1bfc00000, data 0x35d7f7d/0x37c0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:25.314889+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 154189824 unmapped: 35069952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:26.315121+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 154353664 unmapped: 34906112 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.640604973s of 10.175801277s, submitted: 183
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 281 heartbeat osd_stat(store_statfs(0x1b574b000/0x0/0x1bfc00000, data 0x35fad81/0x37e3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:27.315293+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 155516928 unmapped: 33742848 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 282 handle_osd_map epochs [282,282], i have 282, src has [1,282]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 282 handle_osd_map epochs [282,282], i have 282, src has [1,282]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 282 heartbeat osd_stat(store_statfs(0x1b56e6000/0x0/0x1bfc00000, data 0x365e25b/0x3847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:28.315509+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 155541504 unmapped: 33718272 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:29.315765+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 155557888 unmapped: 33701888 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2374053 data_alloc: 285212672 data_used: 5066752
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:30.316225+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 155664384 unmapped: 33595392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:31.316401+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 155828224 unmapped: 33431552 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:32.316572+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 155828224 unmapped: 33431552 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:33.316744+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 156123136 unmapped: 33136640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 283 heartbeat osd_stat(store_statfs(0x1b5690000/0x0/0x1bfc00000, data 0x36b242e/0x389c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:34.317027+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 156123136 unmapped: 33136640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2386771 data_alloc: 285212672 data_used: 5087232
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:35.317205+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 156123136 unmapped: 33136640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:36.317415+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.732077599s of 10.013969421s, submitted: 95
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 156131328 unmapped: 33128448 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b5616000/0x0/0x1bfc00000, data 0x37295e3/0x3916000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:37.317590+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157188096 unmapped: 32071680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:38.317779+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157188096 unmapped: 32071680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:39.318020+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157196288 unmapped: 32063488 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2394637 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b55da000/0x0/0x1bfc00000, data 0x3765819/0x3953000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:40.318231+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157253632 unmapped: 32006144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:41.318483+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157253632 unmapped: 32006144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:42.318667+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157253632 unmapped: 32006144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b55c7000/0x0/0x1bfc00000, data 0x3779af6/0x3967000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:43.318887+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157253632 unmapped: 32006144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:44.319196+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157253632 unmapped: 32006144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2394987 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:45.319512+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157376512 unmapped: 31883264 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:46.319759+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157376512 unmapped: 31883264 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b559b000/0x0/0x1bfc00000, data 0x37a5ae0/0x3993000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:47.320012+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.336925507s of 10.983221054s, submitted: 60
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157392896 unmapped: 31866880 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:48.320260+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157392896 unmapped: 31866880 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:49.320444+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157392896 unmapped: 31866880 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2399659 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b5568000/0x0/0x1bfc00000, data 0x37d8892/0x39c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:50.320629+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 157392896 unmapped: 31866880 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:51.320839+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158588928 unmapped: 30670848 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:52.321061+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158588928 unmapped: 30670848 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b5506000/0x0/0x1bfc00000, data 0x3837d40/0x3a26000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:53.321314+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158588928 unmapped: 30670848 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:54.321636+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158801920 unmapped: 30457856 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2410591 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:55.321807+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158818304 unmapped: 30441472 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:56.321918+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158924800 unmapped: 30334976 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:57.322087+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158728192 unmapped: 30531584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.281880379s of 10.487033844s, submitted: 55
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b54b7000/0x0/0x1bfc00000, data 0x3888515/0x3a76000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:58.322242+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158842880 unmapped: 30416896 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:19:59.322466+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158842880 unmapped: 30416896 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2406563 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:00.322645+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158842880 unmapped: 30416896 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b549b000/0x0/0x1bfc00000, data 0x38a70e4/0x3a93000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 ms_handle_reset con 0x55ee532ba400 session 0x55ee56bd6f00
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee544a2000
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:01.322850+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158998528 unmapped: 30261248 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:02.323034+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158998528 unmapped: 30261248 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:03.323166+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158998528 unmapped: 30261248 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b545f000/0x0/0x1bfc00000, data 0x38e3d70/0x3acf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:04.323386+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158998528 unmapped: 30261248 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2408773 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:05.323535+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158998528 unmapped: 30261248 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:06.323727+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159006720 unmapped: 30253056 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b544f000/0x0/0x1bfc00000, data 0x38f3c5f/0x3adf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:07.323907+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159006720 unmapped: 30253056 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.273473740s of 10.371294975s, submitted: 20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:08.324147+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159178752 unmapped: 30081024 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:09.324319+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159268864 unmapped: 29990912 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2414497 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:10.324501+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159268864 unmapped: 29990912 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b4298000/0x0/0x1bfc00000, data 0x390a583/0x3af6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:11.324715+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159268864 unmapped: 29990912 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:12.324910+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158334976 unmapped: 30924800 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:13.325075+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b4276000/0x0/0x1bfc00000, data 0x392c8a8/0x3b18000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158334976 unmapped: 30924800 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b4273000/0x0/0x1bfc00000, data 0x392f342/0x3b1b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:14.325352+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 158343168 unmapped: 30916608 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2414831 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:15.325609+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159391744 unmapped: 29868032 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:16.325777+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159490048 unmapped: 29769728 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:17.326093+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159490048 unmapped: 29769728 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b4234000/0x0/0x1bfc00000, data 0x396e053/0x3b5a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:18.326244+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159596544 unmapped: 29663232 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.692152977s of 10.806116104s, submitted: 25
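[annotation] _kv_sync_thread utilization reports how busy BlueStore's RocksDB commit thread was over the last window: idle time, window length, and transactions submitted. Working the figures above, the thread was busy about 1% of the time at roughly 2.3 commits/s, i.e. this OSD is seeing only light write traffic:

    idle, window, submitted = 10.692152977, 10.806116104, 25
    print(f"busy {1 - idle / window:.2%}, {submitted / window:.2f} commits/s")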
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:19.326420+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159596544 unmapped: 29663232 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2419099 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:20.326591+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159596544 unmapped: 29663232 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:21.326745+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b4206000/0x0/0x1bfc00000, data 0x399cc41/0x3b88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:22.326982+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b4206000/0x0/0x1bfc00000, data 0x399cc41/0x3b88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:23.327144+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:24.327488+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
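[annotation] The rocksdb pair that precedes every MempoolThread line re-applies the block-cache high-priority pool ratio, presumably once per cache shard being resized; the two printed ratios are suspiciously round fractions. Recovering them:

    from fractions import Fraction

    for ratio in (0.235294, 0.0384615):
        print(Fraction(ratio).limit_denominator(100))   # 4/17 and 1/26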
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2419569 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:25.327656+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:26.327834+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:27.328138+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:28.328566+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b41f3000/0x0/0x1bfc00000, data 0x39b0900/0x3b9b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:29.328764+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2421493 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:30.329031+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:31.329374+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.251951218s of 12.316149712s, submitted: 13
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:32.329630+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:33.329807+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:34.330107+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b41d6000/0x0/0x1bfc00000, data 0x39ccdca/0x3bb8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2426159 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:35.330302+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159662080 unmapped: 29597696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:36.330818+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159670272 unmapped: 29589504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:37.331375+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159670272 unmapped: 29589504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:38.332103+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159752192 unmapped: 29507584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:39.332407+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b4169000/0x0/0x1bfc00000, data 0x3a3aa70/0x3c25000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159883264 unmapped: 29376512 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2433629 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:40.332871+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159883264 unmapped: 29376512 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:41.333154+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922163010s of 10.030193329s, submitted: 23
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 159883264 unmapped: 29376512 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:42.333439+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160006144 unmapped: 29253632 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:43.333653+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160006144 unmapped: 29253632 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:44.333879+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160006144 unmapped: 29253632 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2434569 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:45.334102+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b4133000/0x0/0x1bfc00000, data 0x3a6f044/0x3c5b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161161216 unmapped: 28098560 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:46.334260+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161243136 unmapped: 28016640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:47.334470+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161243136 unmapped: 28016640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:48.334793+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160088064 unmapped: 29171712 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:49.335047+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160391168 unmapped: 28868608 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2444639 data_alloc: 285212672 data_used: 5099520
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:50.335302+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160391168 unmapped: 28868608 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:51.335521+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.843582153s of 10.038002968s, submitted: 35
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b40a5000/0x0/0x1bfc00000, data 0x3afcaf4/0x3ce9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160464896 unmapped: 28794880 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:52.335742+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
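[annotation] Above, the monclient renews its subscriptions over msgr2 (v2:172.18.0.104:3300) and the monitor answers with an incremental osdmap: "epochs [285,285], i have 284" means exactly one new epoch arrived. Each new map makes the OSD reset scrub scheduling for its pool-6 PGs; the "remove_from_osd_queue ... failed. State was: not registered w/ OSD" lines look like benign noise for PGs that were never queued (each PG even reports twice), and the same pattern repeats just below as the map advances to 286. A throwaway filter to tally these per PG, assuming the journal text is fed on stdin:

    import collections, re, sys

    counts = collections.Counter()
    for line in sys.stdin:
        m = re.search(r"remove_from_osd_queue removing pg\[([^\]]+)\] failed", line)
        if m:
            counts[m.group(1)] += 1
    # e.g. {'6.11': 2, '6.4': 2, ...} for the epoch-285 batch above
    print(dict(counts))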
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160464896 unmapped: 28794880 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 285 handle_osd_map epochs [285,285], i have 285, src has [1,285]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:53.335952+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160587776 unmapped: 28672000 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:54.336304+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 285 heartbeat osd_stat(store_statfs(0x1b4054000/0x0/0x1bfc00000, data 0x3b4c122/0x3d39000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
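[annotation] Every entry in this stretch carries the same journal timestamp (Dec 05 10:29:25) even though the monclient cutoffs span more than a minute of OSD-internal time, which suggests a buffered burst flushed into the journal at once; wall-clock rates cannot be read off these stamps. The osd_stat "data" field does move, though. Comparing the first heartbeat in the section with the one above:

    first, now = int("392f342", 16), int("3b4c122", 16)   # data-stored bytes from the two lines
    print(f"data stored grew {(now - first) / 2**20:.1f} MiB over the burst")   # ~2.1 MiB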
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160694272 unmapped: 28565504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2448335 data_alloc: 285212672 data_used: 5111808
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:55.336550+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160694272 unmapped: 28565504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 285 heartbeat osd_stat(store_statfs(0x1b4052000/0x0/0x1bfc00000, data 0x3b4f41d/0x3d3c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:56.337116+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 160923648 unmapped: 28336128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:57.337312+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161038336 unmapped: 28221440 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:58.337600+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161038336 unmapped: 28221440 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:20:59.337851+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 285 heartbeat osd_stat(store_statfs(0x1b4012000/0x0/0x1bfc00000, data 0x3b8f40d/0x3d7c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161071104 unmapped: 28188672 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2450125 data_alloc: 285212672 data_used: 5111808
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:00.338084+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161071104 unmapped: 28188672 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:01.338573+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.762230873s of 10.003785133s, submitted: 68
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 handle_osd_map epochs [286,286], i have 286, src has [1,286]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161095680 unmapped: 28164096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:02.340613+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161095680 unmapped: 28164096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:03.341296+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161095680 unmapped: 28164096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:04.342490+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161095680 unmapped: 28164096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2456119 data_alloc: 285212672 data_used: 5124096
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:05.342681+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 heartbeat osd_stat(store_statfs(0x1b3ffd000/0x0/0x1bfc00000, data 0x3ba23bc/0x3d90000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161095680 unmapped: 28164096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 heartbeat osd_stat(store_statfs(0x1b3ffd000/0x0/0x1bfc00000, data 0x3ba23bc/0x3d90000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:06.343262+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161128448 unmapped: 28131328 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:07.343431+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161128448 unmapped: 28131328 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:08.343609+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161128448 unmapped: 28131328 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:09.343784+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 heartbeat osd_stat(store_statfs(0x1b3ff8000/0x0/0x1bfc00000, data 0x3ba75cc/0x3d95000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161128448 unmapped: 28131328 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2457383 data_alloc: 285212672 data_used: 5124096
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:10.344143+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161128448 unmapped: 28131328 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:11.344346+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 heartbeat osd_stat(store_statfs(0x1b3ff8000/0x0/0x1bfc00000, data 0x3ba75cc/0x3d95000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161128448 unmapped: 28131328 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:12.344535+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161128448 unmapped: 28131328 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:13.344734+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161128448 unmapped: 28131328 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:14.345029+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 heartbeat osd_stat(store_statfs(0x1b3ff8000/0x0/0x1bfc00000, data 0x3ba75cc/0x3d95000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161144832 unmapped: 28114944 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 heartbeat osd_stat(store_statfs(0x1b3ff8000/0x0/0x1bfc00000, data 0x3ba75cc/0x3d95000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:15.345234+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2457383 data_alloc: 285212672 data_used: 5124096
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161144832 unmapped: 28114944 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:16.345519+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161144832 unmapped: 28114944 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:17.345682+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161144832 unmapped: 28114944 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:18.346048+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161144832 unmapped: 28114944 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 heartbeat osd_stat(store_statfs(0x1b3ff8000/0x0/0x1bfc00000, data 0x3ba75cc/0x3d95000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:19.346252+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161144832 unmapped: 28114944 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:20.346594+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2457383 data_alloc: 285212672 data_used: 5124096
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161144832 unmapped: 28114944 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:21.346808+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.416286469s of 20.450080872s, submitted: 11
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161153024 unmapped: 28106752 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:22.347083+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161153024 unmapped: 28106752 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:23.347287+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 heartbeat osd_stat(store_statfs(0x1b3fe0000/0x0/0x1bfc00000, data 0x3bbedb8/0x3dae000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161153024 unmapped: 28106752 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:24.347576+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 161153024 unmapped: 28106752 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:25.347773+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2459575 data_alloc: 285212672 data_used: 5124096
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162201600 unmapped: 27058176 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:26.348071+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162201600 unmapped: 27058176 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:27.348275+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 heartbeat osd_stat(store_statfs(0x1b3fcb000/0x0/0x1bfc00000, data 0x3bd4554/0x3dc3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162201600 unmapped: 27058176 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:28.348434+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162209792 unmapped: 27049984 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:29.348847+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162209792 unmapped: 27049984 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:30.349075+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2461707 data_alloc: 285212672 data_used: 5124096
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162267136 unmapped: 26992640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:31.349258+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162267136 unmapped: 26992640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.178208351s of 10.302662849s, submitted: 25
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:32.349514+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 heartbeat osd_stat(store_statfs(0x1b3f55000/0x0/0x1bfc00000, data 0x3c4b1f7/0x3e39000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162267136 unmapped: 26992640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:33.349668+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162283520 unmapped: 26976256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:34.349923+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162283520 unmapped: 26976256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:35.350159+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2468577 data_alloc: 285212672 data_used: 5124096
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162291712 unmapped: 26968064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.1 total, 600.0 interval
                                                          Cumulative writes: 22K writes, 86K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                                          Cumulative WAL: 22K writes, 7660 syncs, 2.92 writes per sync, written: 0.06 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 27.33 MB, 0.05 MB/s
                                                          Interval WAL: 10K writes, 4262 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:36.350403+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162291712 unmapped: 26968064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:37.350633+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162291712 unmapped: 26968064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:38.350892+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 287 heartbeat osd_stat(store_statfs(0x1b3f27000/0x0/0x1bfc00000, data 0x3c743ca/0x3e66000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162291712 unmapped: 26968064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:39.351046+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 287 heartbeat osd_stat(store_statfs(0x1b3f07000/0x0/0x1bfc00000, data 0x3c9463e/0x3e86000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162291712 unmapped: 26968064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:40.351260+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2475003 data_alloc: 285212672 data_used: 5136384
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 162299904 unmapped: 26959872 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:41.351467+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164577280 unmapped: 24682496 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:42.351646+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 287 heartbeat osd_stat(store_statfs(0x1b2d23000/0x0/0x1bfc00000, data 0x3cda77c/0x3ecb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164724736 unmapped: 24535040 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:43.351829+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 287 heartbeat osd_stat(store_statfs(0x1b2d23000/0x0/0x1bfc00000, data 0x3cda77c/0x3ecb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164724736 unmapped: 24535040 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 287 heartbeat osd_stat(store_statfs(0x1b2d23000/0x0/0x1bfc00000, data 0x3cda77c/0x3ecb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:44.352012+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164724736 unmapped: 24535040 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.738777161s of 12.987481117s, submitted: 62
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:45.352150+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2476305 data_alloc: 285212672 data_used: 5136384
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164724736 unmapped: 24535040 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:46.352382+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164724736 unmapped: 24535040 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:47.352567+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164724736 unmapped: 24535040 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:48.352795+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2d03000/0x0/0x1bfc00000, data 0x3cf78ab/0x3eea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164765696 unmapped: 24494080 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:49.352976+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Got map version 53
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164913152 unmapped: 24346624 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:50.353167+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2485603 data_alloc: 285212672 data_used: 5148672
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164913152 unmapped: 24346624 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:51.353360+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:52.353565+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:53.353726+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2c84000/0x0/0x1bfc00000, data 0x3d76ebc/0x3f6a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:54.353949+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:55.354169+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2489449 data_alloc: 285212672 data_used: 5148672
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:56.354365+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.068281174s of 11.255579948s, submitted: 51
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2c7e000/0x0/0x1bfc00000, data 0x3d7de65/0x3f70000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:57.354681+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:58.354860+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:21:59.355025+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2c7e000/0x0/0x1bfc00000, data 0x3d7de65/0x3f70000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:00.355191+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2486805 data_alloc: 285212672 data_used: 5148672
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:01.355365+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:02.355563+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164921344 unmapped: 24338432 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:03.355782+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164372480 unmapped: 24887296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:04.356068+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164372480 unmapped: 24887296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:05.356781+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2c57000/0x0/0x1bfc00000, data 0x3da4289/0x3f97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2489501 data_alloc: 285212672 data_used: 5148672
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164372480 unmapped: 24887296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:06.357024+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.961326599s of 10.009686470s, submitted: 9
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164372480 unmapped: 24887296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:07.357427+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164372480 unmapped: 24887296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:08.358096+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164372480 unmapped: 24887296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:09.358391+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164372480 unmapped: 24887296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:10.358758+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2489517 data_alloc: 285212672 data_used: 5148672
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164372480 unmapped: 24887296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2c57000/0x0/0x1bfc00000, data 0x3da44e0/0x3f97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:11.359049+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:12.359295+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:13.359665+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:14.361159+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:15.361447+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2488669 data_alloc: 285212672 data_used: 5148672
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2c4c000/0x0/0x1bfc00000, data 0x3daf1a6/0x3fa2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:16.361808+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2c4c000/0x0/0x1bfc00000, data 0x3daf1a6/0x3fa2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:17.362191+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2c4c000/0x0/0x1bfc00000, data 0x3daf1a6/0x3fa2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:18.362460+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:19.362828+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:20.363145+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2488669 data_alloc: 285212672 data_used: 5148672
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:21.363383+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.990114212s of 14.994854927s, submitted: 1
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2c4c000/0x0/0x1bfc00000, data 0x3daf1a6/0x3fa2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2c4c000/0x0/0x1bfc00000, data 0x3daf1a6/0x3fa2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:22.363536+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:23.363685+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:24.363933+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164380672 unmapped: 24879104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:25.364163+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2492601 data_alloc: 285212672 data_used: 5148672
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 164388864 unmapped: 24870912 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:26.364377+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:27.364620+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165552128 unmapped: 23707648 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2bf0000/0x0/0x1bfc00000, data 0x3e0b2ad/0x3ffe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:28.364879+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165625856 unmapped: 23633920 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 heartbeat osd_stat(store_statfs(0x1b2bf0000/0x0/0x1bfc00000, data 0x3e0b2ad/0x3ffe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:29.365092+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165625856 unmapped: 23633920 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:30.365318+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165642240 unmapped: 23617536 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2497773 data_alloc: 285212672 data_used: 5148672
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:31.366195+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165642240 unmapped: 23617536 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.894487381s of 10.002737045s, submitted: 20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:32.366378+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165642240 unmapped: 23617536 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 289 handle_osd_map epochs [289,289], i have 289, src has [1,289]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:33.366554+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165650432 unmapped: 23609344 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 289 heartbeat osd_stat(store_statfs(0x1b2ba7000/0x0/0x1bfc00000, data 0x3e51fbf/0x4046000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:34.366841+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165658624 unmapped: 23601152 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:35.367017+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165658624 unmapped: 23601152 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2506927 data_alloc: 285212672 data_used: 5160960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 289 heartbeat osd_stat(store_statfs(0x1b2ba7000/0x0/0x1bfc00000, data 0x3e51fbf/0x4046000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:36.367185+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165658624 unmapped: 23601152 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 289 heartbeat osd_stat(store_statfs(0x1b2ba7000/0x0/0x1bfc00000, data 0x3e51fbf/0x4046000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:37.367350+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165658624 unmapped: 23601152 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:38.367533+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165658624 unmapped: 23601152 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:39.367709+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165658624 unmapped: 23601152 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:40.367872+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165658624 unmapped: 23601152 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2504783 data_alloc: 285212672 data_used: 5160960
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:41.368031+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165658624 unmapped: 23601152 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.882663727s of 10.002564430s, submitted: 41
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 handle_osd_map epochs [290,290], i have 290, src has [1,290]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 handle_osd_map epochs [290,290], i have 290, src has [1,290]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2b91000/0x0/0x1bfc00000, data 0x3e6886e/0x405d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:42.368204+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165675008 unmapped: 23584768 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:43.368367+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165683200 unmapped: 23576576 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:44.368579+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165683200 unmapped: 23576576 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2b72000/0x0/0x1bfc00000, data 0x3e83fe2/0x407b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:45.368731+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165699584 unmapped: 23560192 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2509909 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:46.368936+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165699584 unmapped: 23560192 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:47.369137+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165699584 unmapped: 23560192 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:48.369349+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2b5c000/0x0/0x1bfc00000, data 0x3e9b50e/0x4092000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165699584 unmapped: 23560192 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:49.369450+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165707776 unmapped: 23552000 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:50.369642+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165707776 unmapped: 23552000 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2b3e000/0x0/0x1bfc00000, data 0x3eb8b0a/0x40b0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2511797 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:51.369838+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.876500130s of 10.003379822s, submitted: 34
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165715968 unmapped: 23543808 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:52.370025+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165961728 unmapped: 23298048 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:53.370255+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165961728 unmapped: 23298048 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:54.370561+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165986304 unmapped: 23273472 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2af6000/0x0/0x1bfc00000, data 0x3f00f68/0x40f8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2af6000/0x0/0x1bfc00000, data 0x3f00f68/0x40f8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:55.370763+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 165986304 unmapped: 23273472 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2516573 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:56.370918+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2af6000/0x0/0x1bfc00000, data 0x3f00f68/0x40f8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 166010880 unmapped: 23248896 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2af6000/0x0/0x1bfc00000, data 0x3f00f68/0x40f8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:57.371120+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 166010880 unmapped: 23248896 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:58.371301+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 166019072 unmapped: 23240704 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:22:59.371577+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 166019072 unmapped: 23240704 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:00.371741+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 166019072 unmapped: 23240704 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2517549 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2ac3000/0x0/0x1bfc00000, data 0x3f33d11/0x412b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:01.371878+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.890177727s of 10.000646591s, submitted: 21
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:02.372067+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:03.372286+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:04.372561+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:05.372820+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2521045 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:06.372952+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2a8f000/0x0/0x1bfc00000, data 0x3f67d58/0x415f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:07.373158+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2a8f000/0x0/0x1bfc00000, data 0x3f67dbd/0x415f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:08.373344+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:09.376052+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:10.377685+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2521621 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2a8f000/0x0/0x1bfc00000, data 0x3f67dbd/0x415f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:11.378125+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2a8f000/0x0/0x1bfc00000, data 0x3f68014/0x415f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:12.379921+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167067648 unmapped: 22192128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:13.380991+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167075840 unmapped: 22183936 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:14.381497+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167075840 unmapped: 22183936 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:15.382601+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167075840 unmapped: 22183936 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2520357 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:16.383202+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167075840 unmapped: 22183936 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:17.383934+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167075840 unmapped: 22183936 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2a8f000/0x0/0x1bfc00000, data 0x3f68014/0x415f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:18.384723+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2a8f000/0x0/0x1bfc00000, data 0x3f68014/0x415f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167075840 unmapped: 22183936 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:19.385081+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167075840 unmapped: 22183936 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:20.385848+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167075840 unmapped: 22183936 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2520357 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:21.386288+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.956748962s of 19.993089676s, submitted: 7
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167084032 unmapped: 22175744 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:22.386557+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167084032 unmapped: 22175744 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:23.387148+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167084032 unmapped: 22175744 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2a7e000/0x0/0x1bfc00000, data 0x3f7956c/0x4170000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:24.387495+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167084032 unmapped: 22175744 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:25.387906+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167084032 unmapped: 22175744 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2521549 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:26.389103+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167084032 unmapped: 22175744 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:27.389413+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167084032 unmapped: 22175744 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2a7e000/0x0/0x1bfc00000, data 0x3f7956c/0x4170000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:28.389760+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167092224 unmapped: 22167552 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:29.390019+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167092224 unmapped: 22167552 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:30.390196+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167100416 unmapped: 22159360 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2a63000/0x0/0x1bfc00000, data 0x3f9324c/0x418b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2525925 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:31.390367+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.957552910s of 10.051499367s, submitted: 8
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167100416 unmapped: 22159360 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:32.390579+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167124992 unmapped: 22134784 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:33.390817+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167124992 unmapped: 22134784 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2a4b000/0x0/0x1bfc00000, data 0x3fab194/0x41a3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:34.391031+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167124992 unmapped: 22134784 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:35.391177+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167124992 unmapped: 22134784 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2527897 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:36.391426+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167124992 unmapped: 22134784 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:37.391686+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167124992 unmapped: 22134784 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2605000/0x0/0x1bfc00000, data 0x3ff2b31/0x41e9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:38.391901+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167124992 unmapped: 22134784 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:39.392130+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2605000/0x0/0x1bfc00000, data 0x3ff2b31/0x41e9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167124992 unmapped: 22134784 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:40.392423+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167124992 unmapped: 22134784 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2527979 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:41.392674+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167215104 unmapped: 22044672 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:42.392853+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167215104 unmapped: 22044672 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b25d6000/0x0/0x1bfc00000, data 0x40208ef/0x4218000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:43.393091+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167215104 unmapped: 22044672 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.688707352s of 12.801090240s, submitted: 20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:44.393352+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167215104 unmapped: 22044672 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:45.393608+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167215104 unmapped: 22044672 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2533043 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:46.393829+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167231488 unmapped: 22028288 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b25c4000/0x0/0x1bfc00000, data 0x4031f07/0x422a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:47.394003+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167231488 unmapped: 22028288 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:48.394212+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b257b000/0x0/0x1bfc00000, data 0x407b837/0x4273000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167239680 unmapped: 22020096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:49.394422+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167239680 unmapped: 22020096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:50.394656+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2578000/0x0/0x1bfc00000, data 0x407f1c8/0x4276000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167239680 unmapped: 22020096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2540381 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:51.394866+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167239680 unmapped: 22020096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:52.395103+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167239680 unmapped: 22020096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:53.395323+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167239680 unmapped: 22020096 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:54.395617+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2557000/0x0/0x1bfc00000, data 0x409fc86/0x4297000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167247872 unmapped: 22011904 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.604664803s of 10.697103500s, submitted: 18
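[Annotation] The _kv_sync_thread utilization line reports how long BlueStore's kv-sync thread sat idle within its sampling window and how many transaction contexts it submitted. The numbers above work out to an almost completely idle thread:

    # Quick arithmetic on the _kv_sync_thread line above (values from the log).
    idle, window, submitted = 10.604664803, 10.697103500, 18

    print(f"idle {idle / window:.1%} of the window, "
          f"{submitted / window:.1f} txc/s submitted")
    # -> idle 99.1% of the window, 1.7 txc/s submitted

The later occurrences of this line in the section (e.g. 302 submissions in ~10 s around the osdmap churn, then 8 and 6 afterwards) follow the same pattern: a brief spike of metadata traffic, then back to near-idle.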
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:55.395835+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 167247872 unmapped: 22011904 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2541569 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:56.396071+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168304640 unmapped: 20955136 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:57.396246+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168304640 unmapped: 20955136 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:58.396512+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168665088 unmapped: 20594688 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:23:59.396788+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168665088 unmapped: 20594688 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b24df000/0x0/0x1bfc00000, data 0x4117d6a/0x430f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:00.397008+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168665088 unmapped: 20594688 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2544667 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:01.397193+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168673280 unmapped: 20586496 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b24b7000/0x0/0x1bfc00000, data 0x413fa84/0x4337000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:02.397388+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168673280 unmapped: 20586496 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:03.397584+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168673280 unmapped: 20586496 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:04.397882+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b24b7000/0x0/0x1bfc00000, data 0x413fab3/0x4336000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168755200 unmapped: 20504576 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:05.398115+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b24b7000/0x0/0x1bfc00000, data 0x413fab3/0x4336000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168755200 unmapped: 20504576 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2548241 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:06.398280+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.303752899s of 11.449698448s, submitted: 28
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168763392 unmapped: 20496384 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:07.398436+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168763392 unmapped: 20496384 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:08.398662+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168951808 unmapped: 20307968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:09.398864+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168951808 unmapped: 20307968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:10.399027+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2457000/0x0/0x1bfc00000, data 0x419ff7f/0x4397000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 168976384 unmapped: 20283392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2548655 data_alloc: 285212672 data_used: 5173248
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 heartbeat osd_stat(store_statfs(0x1b2457000/0x0/0x1bfc00000, data 0x41a0049/0x4397000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:11.399188+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 291 handle_osd_map epochs [291,291], i have 291, src has [1,291]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 169009152 unmapped: 20250624 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
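[Annotation] The handle_osd_map lines above show osd.1 catching up from epoch 290 to 291, and each new map triggers a burst of scrub-queue::remove_from_osd_queue messages for pool-6 PGs. "State was: not registered w/ OSD" appears to mean the PG was not in the scrub queue when deregistration was attempted; given that every PG here "fails" the same way and the daemon carries on normally, this reads as bookkeeping noise during interval changes rather than an error, though the log alone does not prove that. A hypothetical tally of which PGs emit the message, e.g. fed from journalctl output on stdin:

    # Hypothetical tally of scrub-queue removal messages per PG.
    import collections, re, sys

    counts = collections.Counter(
        m.group(1)
        for line in sys.stdin
        if (m := re.search(r"remove_from_osd_queue removing pg\[([\d.a-f]+)\]",
                           line)))

    for pg, n in counts.most_common():
        print(pg, n)   # e.g. "6.6 2", "6.b 2", "6.4 2", ...

In this burst each PG (6.0, 6.4, 6.5, 6.6, 6.8, 6.9, 6.b, 6.d, 6.10, 6.11, 6.19) shows up twice, once per map epoch applied.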
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:12.399357+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 169009152 unmapped: 20250624 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:13.399519+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 169009152 unmapped: 20250624 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:14.399753+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 169009152 unmapped: 20250624 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:15.399988+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 169009152 unmapped: 20250624 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 291 heartbeat osd_stat(store_statfs(0x1b2453000/0x0/0x1bfc00000, data 0x41a27d7/0x439a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2550837 data_alloc: 285212672 data_used: 5185536
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:16.400187+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.794984818s of 10.000624657s, submitted: 302
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170442752 unmapped: 18817024 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:17.401492+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Got map version 54
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
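[Annotation] The two mgrc lines above show the OSD's mgr client receiving MgrMap version 54 and learning the (possibly new) active mgr, to which it will direct its stats session. The bracketed value is an addrvec listing the mgr's v2 and v1 messenger endpoints. A small, assumption-laden parse of that line, purely for illustration:

    # Pull the active mgr's v2/v1 endpoints out of the addrvec above.
    import re

    line = ("mgrc handle_mgr_map Active mgr is now "
            "[v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]")

    for proto, ip, port, nonce in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", line):
        print(proto, ip, port, nonce)
    # -> v2 172.18.0.106 6810 1193881100
    # -> v1 172.18.0.106 6811 1193881100

Both endpoints share the nonce 1193881100, i.e. they belong to the same mgr daemon instance on 172.18.0.106.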
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170450944 unmapped: 18808832 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:18.401723+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170459136 unmapped: 18800640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:19.401942+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170459136 unmapped: 18800640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:20.402231+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170459136 unmapped: 18800640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2555013 data_alloc: 285212672 data_used: 5185536
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:21.402434+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170459136 unmapped: 18800640 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 291 heartbeat osd_stat(store_statfs(0x1b242c000/0x0/0x1bfc00000, data 0x41ca280/0x43c2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:22.402658+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170467328 unmapped: 18792448 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:23.402817+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170467328 unmapped: 18792448 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:24.403053+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170467328 unmapped: 18792448 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:25.403245+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170467328 unmapped: 18792448 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2427000/0x0/0x1bfc00000, data 0x41cc478/0x43c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2556623 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:26.403437+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.951340675s of 10.002084732s, submitted: 22
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170475520 unmapped: 18784256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:27.403648+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170475520 unmapped: 18784256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b241c000/0x0/0x1bfc00000, data 0x41d7c54/0x43d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:28.403895+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170475520 unmapped: 18784256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:29.404140+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170475520 unmapped: 18784256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:30.404330+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170475520 unmapped: 18784256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2558767 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:31.404521+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170475520 unmapped: 18784256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:32.404691+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170475520 unmapped: 18784256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:33.404912+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2404000/0x0/0x1bfc00000, data 0x41f13c0/0x43ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170475520 unmapped: 18784256 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:34.405191+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170483712 unmapped: 18776064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2404000/0x0/0x1bfc00000, data 0x41f13c0/0x43ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:35.405430+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170483712 unmapped: 18776064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2560535 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:36.405687+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170483712 unmapped: 18776064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:37.405886+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2403000/0x0/0x1bfc00000, data 0x41f145b/0x43eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170483712 unmapped: 18776064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:38.406140+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170483712 unmapped: 18776064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:39.406324+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170483712 unmapped: 18776064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:40.406608+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170483712 unmapped: 18776064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2403000/0x0/0x1bfc00000, data 0x41f145b/0x43eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2560535 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:41.406841+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.302908897s of 15.357861519s, submitted: 8
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170483712 unmapped: 18776064 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:42.407066+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170491904 unmapped: 18767872 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:43.407272+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170491904 unmapped: 18767872 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:44.407552+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170491904 unmapped: 18767872 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:45.407735+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2404000/0x0/0x1bfc00000, data 0x41f148a/0x43ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170491904 unmapped: 18767872 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2558613 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:46.407939+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170491904 unmapped: 18767872 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:47.408152+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2404000/0x0/0x1bfc00000, data 0x41f148a/0x43ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170491904 unmapped: 18767872 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:48.408349+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170491904 unmapped: 18767872 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:49.408565+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170491904 unmapped: 18767872 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:50.408729+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170500096 unmapped: 18759680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2403000/0x0/0x1bfc00000, data 0x41f15ef/0x43eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2560381 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:51.409025+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170500096 unmapped: 18759680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:52.410151+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170500096 unmapped: 18759680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:53.410331+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170500096 unmapped: 18759680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:54.410533+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170500096 unmapped: 18759680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2403000/0x0/0x1bfc00000, data 0x41f15ef/0x43eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:55.410690+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.981701851s of 14.012890816s, submitted: 6
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170500096 unmapped: 18759680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2559515 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:56.410879+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170500096 unmapped: 18759680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:57.411062+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2403000/0x0/0x1bfc00000, data 0x41f16b9/0x43eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170500096 unmapped: 18759680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:58.411262+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170508288 unmapped: 18751488 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:24:59.411358+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170508288 unmapped: 18751488 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:00.411561+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170508288 unmapped: 18751488 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:01.411719+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2561283 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170508288 unmapped: 18751488 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:02.411889+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2403000/0x0/0x1bfc00000, data 0x41f16b9/0x43eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170508288 unmapped: 18751488 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:03.412083+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170508288 unmapped: 18751488 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:04.412355+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170508288 unmapped: 18751488 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:05.412526+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170516480 unmapped: 18743296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:06.412706+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2560417 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170516480 unmapped: 18743296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:07.412871+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2404000/0x0/0x1bfc00000, data 0x41f16e8/0x43ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170516480 unmapped: 18743296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:08.413042+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2404000/0x0/0x1bfc00000, data 0x41f16e8/0x43ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.975884438s of 13.019474030s, submitted: 8
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170516480 unmapped: 18743296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:09.413234+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2403000/0x0/0x1bfc00000, data 0x41f174d/0x43eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170516480 unmapped: 18743296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:10.413398+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170516480 unmapped: 18743296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:11.413558+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2560625 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170516480 unmapped: 18743296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:12.413704+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170516480 unmapped: 18743296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:13.413855+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170516480 unmapped: 18743296 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:14.414048+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170524672 unmapped: 18735104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:15.414219+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 heartbeat osd_stat(store_statfs(0x1b2403000/0x0/0x1bfc00000, data 0x41f187c/0x43eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170524672 unmapped: 18735104 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:16.414368+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2560449 data_alloc: 285212672 data_used: 5197824
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170541056 unmapped: 18718720 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 293 heartbeat osd_stat(store_statfs(0x1b2403000/0x0/0x1bfc00000, data 0x41f187c/0x43eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:17.414530+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170541056 unmapped: 18718720 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:18.414699+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170541056 unmapped: 18718720 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:19.414886+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170541056 unmapped: 18718720 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:20.415065+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 293 heartbeat osd_stat(store_statfs(0x1b23fe000/0x0/0x1bfc00000, data 0x41f3cc1/0x43ef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170541056 unmapped: 18718720 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:21.415283+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2564651 data_alloc: 285212672 data_used: 5210112
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170549248 unmapped: 18710528 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:22.415484+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170549248 unmapped: 18710528 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:23.415639+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170549248 unmapped: 18710528 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:24.415887+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170549248 unmapped: 18710528 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:25.416175+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170549248 unmapped: 18710528 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.187938690s of 17.436899185s, submitted: 38
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:26.416282+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2566419 data_alloc: 285212672 data_used: 5210112
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 293 heartbeat osd_stat(store_statfs(0x1b23fe000/0x0/0x1bfc00000, data 0x41f3cc1/0x43ef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 293 handle_osd_map epochs [294,294], i have 294, src has [1,294]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 293 handle_osd_map epochs [294,294], i have 294, src has [1,294]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170565632 unmapped: 18694144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:27.416378+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170565632 unmapped: 18694144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:28.416535+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170565632 unmapped: 18694144 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:29.416710+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 294 heartbeat osd_stat(store_statfs(0x1b23f9000/0x0/0x1bfc00000, data 0x41f5f54/0x43f4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170573824 unmapped: 18685952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:30.416913+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170573824 unmapped: 18685952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:31.417138+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2569229 data_alloc: 285212672 data_used: 5210112
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170573824 unmapped: 18685952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:32.417325+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170573824 unmapped: 18685952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:33.417501+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 294 heartbeat osd_stat(store_statfs(0x1b23f9000/0x0/0x1bfc00000, data 0x41f5f54/0x43f4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170573824 unmapped: 18685952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:34.417715+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170573824 unmapped: 18685952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:35.417885+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170573824 unmapped: 18685952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:36.418082+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2567675 data_alloc: 285212672 data_used: 5210112
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.736025810s of 10.798563957s, submitted: 20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170598400 unmapped: 18661376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:37.418234+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 295 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 295 handle_osd_map epochs [295,295], i have 295, src has [1,295]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170598400 unmapped: 18661376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:38.418427+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 295 heartbeat osd_stat(store_statfs(0x1b23f7000/0x0/0x1bfc00000, data 0x41f8363/0x43f7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 171646976 unmapped: 17612800 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:39.418619+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 171646976 unmapped: 17612800 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:40.419420+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 170598400 unmapped: 18661376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:41.419614+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2575491 data_alloc: 285212672 data_used: 5222400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 171646976 unmapped: 17612800 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:42.419738+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 295 heartbeat osd_stat(store_statfs(0x1b23d9000/0x0/0x1bfc00000, data 0x421515d/0x4415000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 171646976 unmapped: 17612800 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:43.420043+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 171646976 unmapped: 17612800 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:44.420252+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 171655168 unmapped: 17604608 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:45.420393+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 171655168 unmapped: 17604608 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:46.420548+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2577243 data_alloc: 285212672 data_used: 5222400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 295 handle_osd_map epochs [296,296], i have 296, src has [1,296]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 296 handle_osd_map epochs [296,296], i have 296, src has [1,296]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 171761664 unmapped: 17498112 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:47.420707+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 296 heartbeat osd_stat(store_statfs(0x1b23a8000/0x0/0x1bfc00000, data 0x4241442/0x4445000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 171769856 unmapped: 17489920 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:48.420869+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.617666245s of 11.802483559s, submitted: 65
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 171769856 unmapped: 17489920 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:49.421062+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172015616 unmapped: 17244160 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:50.422125+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172015616 unmapped: 17244160 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:51.422405+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2588027 data_alloc: 285212672 data_used: 5234688
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 296 heartbeat osd_stat(store_statfs(0x1b236b000/0x0/0x1bfc00000, data 0x42808ff/0x4483000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 296 handle_osd_map epochs [297,297], i have 297, src has [1,297]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172023808 unmapped: 17235968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:52.423095+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 297 handle_osd_map epochs [297,297], i have 297, src has [1,297]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172023808 unmapped: 17235968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:53.423766+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172023808 unmapped: 17235968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:54.424047+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172023808 unmapped: 17235968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:55.424440+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172023808 unmapped: 17235968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:56.425335+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2591197 data_alloc: 285212672 data_used: 5246976
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 297 heartbeat osd_stat(store_statfs(0x1b3322000/0x0/0x1bfc00000, data 0x42c8ae1/0x44cc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172023808 unmapped: 17235968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:57.426244+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:58.426645+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172023808 unmapped: 17235968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:25:59.427130+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172023808 unmapped: 17235968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:00.427489+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172023808 unmapped: 17235968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:01.427673+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172023808 unmapped: 17235968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.550234795s of 12.739934921s, submitted: 49
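
The _kv_sync_thread line above implies the kv sync thread was busy for only 12.739934921 - 12.550234795 ≈ 0.19 s of the reported window. Assuming "submitted: 49" counts commit batches in that same window, that is roughly 3.9 ms per batch; a one-liner sketch:

    idle, span, submitted = 12.550234795, 12.739934921, 49
    busy = span - idle
    print(f"busy {busy:.3f}s ({100 * busy / span:.2f}% of window), "
          f"~{1000 * busy / submitted:.1f} ms per submitted batch")
    # busy 0.190s (1.49% of window), ~3.9 ms per submitted batch
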
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2593127 data_alloc: 285212672 data_used: 5259264
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
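
The scrub-queue removal failures above fire in bursts around each osdmap change, for many PGs at once. A counting sketch to see which pools they cluster in (the pool id is the part before the dot in pg[6.6]; the PG suffix is hex, e.g. 6.b):

    import re
    from collections import Counter

    SCRUB_RE = re.compile(
        r"scrub-queue::remove_from_osd_queue removing pg\[(\d+)\.([0-9a-f]+)\] failed")

    def pool_counts(lines):
        """Count scrub-queue removal failures per pool id."""
        c = Counter()
        for line in lines:
            m = SCRUB_RE.search(line)
            if m:
                c[int(m.group(1))] += 1
        return c

    sample = [
        "osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. "
        "State was: not registered w/ OSD",
        "osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. "
        "State was: not registered w/ OSD",
    ]
    print(pool_counts(sample))  # Counter({6: 1, 4: 1})

In this section the bursts hit pool 6 first and pool 4 after the later map epochs, which the counter makes visible at a glance.
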
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 298 heartbeat osd_stat(store_statfs(0x1b3322000/0x0/0x1bfc00000, data 0x42c8ae1/0x44cc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:02.428415+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172040192 unmapped: 17219584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:03.428744+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172040192 unmapped: 17219584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:04.429090+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172040192 unmapped: 17219584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:05.429545+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172040192 unmapped: 17219584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 298 heartbeat osd_stat(store_statfs(0x1b331d000/0x0/0x1bfc00000, data 0x42cacd9/0x44d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:06.429700+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172040192 unmapped: 17219584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2593127 data_alloc: 285212672 data_used: 5259264
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:07.429896+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172040192 unmapped: 17219584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:08.430142+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172040192 unmapped: 17219584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:09.430340+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172040192 unmapped: 17219584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:10.430602+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 17211392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:11.430782+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 17211392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2593127 data_alloc: 285212672 data_used: 5259264
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 298 heartbeat osd_stat(store_statfs(0x1b331d000/0x0/0x1bfc00000, data 0x42cacd9/0x44d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:12.431107+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 17211392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:13.431330+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 17211392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:14.431553+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 17211392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:15.431705+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 17211392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:16.431863+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 17211392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2593127 data_alloc: 285212672 data_used: 5259264
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 298 heartbeat osd_stat(store_statfs(0x1b331d000/0x0/0x1bfc00000, data 0x42cacd9/0x44d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:17.432156+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 17211392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:18.432541+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172056576 unmapped: 17203200 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:19.432794+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172056576 unmapped: 17203200 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:20.432976+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172056576 unmapped: 17203200 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 298 heartbeat osd_stat(store_statfs(0x1b331d000/0x0/0x1bfc00000, data 0x42cacd9/0x44d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:21.433177+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172056576 unmapped: 17203200 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2593127 data_alloc: 285212672 data_used: 5259264
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:22.433362+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172056576 unmapped: 17203200 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:23.433535+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172056576 unmapped: 17203200 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:24.433724+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172056576 unmapped: 17203200 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:25.433908+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172056576 unmapped: 17203200 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:26.434090+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172064768 unmapped: 17195008 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 298 heartbeat osd_stat(store_statfs(0x1b331d000/0x0/0x1bfc00000, data 0x42cacd9/0x44d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2593127 data_alloc: 285212672 data_used: 5259264
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:27.434264+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172064768 unmapped: 17195008 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:28.434457+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172064768 unmapped: 17195008 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:29.434750+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172064768 unmapped: 17195008 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:30.434921+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172064768 unmapped: 17195008 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:31.435083+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172064768 unmapped: 17195008 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2593127 data_alloc: 285212672 data_used: 5259264
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:32.435268+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 298 heartbeat osd_stat(store_statfs(0x1b331d000/0x0/0x1bfc00000, data 0x42cacd9/0x44d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172064768 unmapped: 17195008 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:33.435456+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee56259400
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.888864517s of 31.925882339s, submitted: 28
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172072960 unmapped: 17186816 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:34.435728+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172548096 unmapped: 16711680 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 298 heartbeat osd_stat(store_statfs(0x1b331c000/0x0/0x1bfc00000, data 0x42cad0c/0x44d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 298 handle_osd_map epochs [299,299], i have 299, src has [1,299]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 299 ms_handle_reset con 0x55ee56259400 session 0x55ee56bd3e00
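
The handle_osd_map lines above show osd.1 walking forward one epoch at a time (it has 297, then 298, then 299, while the sender advertises up to 299). A small lag check that parses the bracketed ranges exactly as printed (a sketch over this log format, not a Ceph API):

    import re

    MAP_RE = re.compile(
        r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]")

    def map_lag(line):
        """Return (have, newest_at_src, lag_in_epochs) for a handle_osd_map line."""
        m = MAP_RE.search(line)
        if not m:
            return None
        first, last, have, src_lo, src_hi = map(int, m.groups())
        return have, src_hi, src_hi - have

    print(map_lag("osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]"))
    # (298, 299, 1) -> one epoch behind the sender
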
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: handle_auth_request added challenge on 0x55ee566b9800
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:35.435856+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172580864 unmapped: 16678912 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:36.436011+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 300 ms_handle_reset con 0x55ee566b9800 session 0x55ee53bb2780
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172621824 unmapped: 16637952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605236 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 300 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42cf351/0x44d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:37.436183+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172621824 unmapped: 16637952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:38.436407+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172621824 unmapped: 16637952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:39.436589+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172621824 unmapped: 16637952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:40.436755+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172621824 unmapped: 16637952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 300 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42cf351/0x44d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:41.436877+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172621824 unmapped: 16637952 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605236 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:42.437070+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172630016 unmapped: 16629760 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:43.437257+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172630016 unmapped: 16629760 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:44.437450+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172630016 unmapped: 16629760 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:45.437592+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172630016 unmapped: 16629760 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:46.437746+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172630016 unmapped: 16629760 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _renew_subs
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _send_mon_message to mon.np0005546420 at v2:172.18.0.104:3300/0
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.878329277s of 13.277907372s, submitted: 88
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2606304 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42cf351/0x44d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:47.437916+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42cf351/0x44d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:48.438106+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:49.438232+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:50.438405+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3311000/0x0/0x1bfc00000, data 0x42d1549/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:51.438592+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2606304 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:52.438786+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:53.439035+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:54.439255+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:55.439463+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3311000/0x0/0x1bfc00000, data 0x42d1549/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:56.439894+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2606304 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:57.440139+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 16613376 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:58.440344+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172654592 unmapped: 16605184 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:26:59.440534+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172654592 unmapped: 16605184 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:00.440752+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172654592 unmapped: 16605184 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3311000/0x0/0x1bfc00000, data 0x42d1549/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:01.440914+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172654592 unmapped: 16605184 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2606304 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:02.441095+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172654592 unmapped: 16605184 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:03.441263+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172654592 unmapped: 16605184 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:04.441570+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172654592 unmapped: 16605184 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:05.441759+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172654592 unmapped: 16605184 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3311000/0x0/0x1bfc00000, data 0x42d1549/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:06.441989+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172662784 unmapped: 16596992 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2606304 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:07.442213+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172662784 unmapped: 16596992 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:08.442455+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172662784 unmapped: 16596992 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3311000/0x0/0x1bfc00000, data 0x42d1549/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:09.442650+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172662784 unmapped: 16596992 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:10.442890+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172662784 unmapped: 16596992 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:11.443090+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172662784 unmapped: 16596992 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2606304 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:12.443260+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172662784 unmapped: 16596992 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3311000/0x0/0x1bfc00000, data 0x42d1549/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:13.443423+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172662784 unmapped: 16596992 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:14.443642+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172670976 unmapped: 16588800 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.358356476s of 28.382196426s, submitted: 20
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 ms_handle_reset con 0x55ee53cd3800 session 0x55ee55e30b40
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:15.443832+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Got map version 55
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1193881100,v1:172.18.0.106:6811/1193881100]
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172949504 unmapped: 16310272 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:16.444060+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172949504 unmapped: 16310272 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:17.444252+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172949504 unmapped: 16310272 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:18.444438+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172949504 unmapped: 16310272 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:19.444638+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172949504 unmapped: 16310272 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:20.444813+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172949504 unmapped: 16310272 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:21.445032+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172949504 unmapped: 16310272 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:22.445235+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172957696 unmapped: 16302080 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:23.445509+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172957696 unmapped: 16302080 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:24.445829+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172957696 unmapped: 16302080 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:25.446039+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172965888 unmapped: 16293888 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:26.446258+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172965888 unmapped: 16293888 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:27.446428+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172965888 unmapped: 16293888 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:28.446633+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172965888 unmapped: 16293888 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:29.446851+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172965888 unmapped: 16293888 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:30.447065+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172974080 unmapped: 16285696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:31.447260+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172974080 unmapped: 16285696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:32.447533+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172974080 unmapped: 16285696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:33.447743+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172974080 unmapped: 16285696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:34.448087+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172974080 unmapped: 16285696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:35.448311+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172974080 unmapped: 16285696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:36.448530+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172974080 unmapped: 16285696 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:37.448685+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172982272 unmapped: 16277504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:38.448929+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172982272 unmapped: 16277504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:39.449167+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172982272 unmapped: 16277504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:40.449365+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172982272 unmapped: 16277504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:41.449540+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172982272 unmapped: 16277504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:42.449706+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172982272 unmapped: 16277504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:43.449923+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172982272 unmapped: 16277504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:44.450211+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172982272 unmapped: 16277504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:45.450447+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172982272 unmapped: 16277504 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:46.450680+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172990464 unmapped: 16269312 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:47.450874+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172990464 unmapped: 16269312 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:48.451072+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172990464 unmapped: 16269312 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:49.451288+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172990464 unmapped: 16269312 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:50.451474+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172990464 unmapped: 16269312 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:51.451648+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172990464 unmapped: 16269312 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:52.451824+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172990464 unmapped: 16269312 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:53.452006+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172998656 unmapped: 16261120 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:54.452228+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172998656 unmapped: 16261120 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:55.452432+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172998656 unmapped: 16261120 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:56.452638+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172998656 unmapped: 16261120 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:57.452836+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173006848 unmapped: 16252928 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:58.452991+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173006848 unmapped: 16252928 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:27:59.454462+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173006848 unmapped: 16252928 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:00.455633+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173015040 unmapped: 16244736 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:01.456572+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173015040 unmapped: 16244736 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:02.457584+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173023232 unmapped: 16236544 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:03.461832+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173023232 unmapped: 16236544 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:04.463347+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173023232 unmapped: 16236544 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:05.465079+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173023232 unmapped: 16236544 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:06.465217+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173023232 unmapped: 16236544 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:07.466126+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173023232 unmapped: 16236544 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:08.468231+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173023232 unmapped: 16236544 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:09.468369+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173023232 unmapped: 16236544 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:10.469097+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173031424 unmapped: 16228352 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:11.470537+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173031424 unmapped: 16228352 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:12.471233+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173031424 unmapped: 16228352 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:13.472031+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173031424 unmapped: 16228352 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:14.472239+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173031424 unmapped: 16228352 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:15.472568+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173031424 unmapped: 16228352 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:16.472853+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173031424 unmapped: 16228352 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:17.473521+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173031424 unmapped: 16228352 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:18.473745+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173039616 unmapped: 16220160 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:19.474025+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173039616 unmapped: 16220160 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:20.474589+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173039616 unmapped: 16220160 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:21.475024+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173039616 unmapped: 16220160 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:22.475316+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173039616 unmapped: 16220160 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:23.475542+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173039616 unmapped: 16220160 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:24.475839+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173039616 unmapped: 16220160 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:25.476030+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173047808 unmapped: 16211968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:26.476224+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173047808 unmapped: 16211968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:27.476408+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173047808 unmapped: 16211968 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:28.476607+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173056000 unmapped: 16203776 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:29.476774+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173056000 unmapped: 16203776 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:30.476950+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173056000 unmapped: 16203776 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:31.477162+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173056000 unmapped: 16203776 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:32.477409+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173056000 unmapped: 16203776 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:33.477808+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173064192 unmapped: 16195584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:34.478292+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173064192 unmapped: 16195584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:35.478515+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173064192 unmapped: 16195584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:36.478700+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173064192 unmapped: 16195584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:37.478950+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173064192 unmapped: 16195584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:38.479139+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173064192 unmapped: 16195584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:39.479350+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173064192 unmapped: 16195584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:40.479536+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173064192 unmapped: 16195584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:41.479754+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173064192 unmapped: 16195584 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:42.479905+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173072384 unmapped: 16187392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:43.480063+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173072384 unmapped: 16187392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:44.480258+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173072384 unmapped: 16187392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:45.480440+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173072384 unmapped: 16187392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:46.480595+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173072384 unmapped: 16187392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:47.480740+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173072384 unmapped: 16187392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:48.480910+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173072384 unmapped: 16187392 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:49.481118+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173080576 unmapped: 16179200 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:50.481308+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173080576 unmapped: 16179200 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:51.481461+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173211648 unmapped: 16048128 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:52.481586+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: bluestore.MempoolThread(0x55ee52249b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2605648 data_alloc: 285212672 data_used: 5271552
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: do_command 'config diff' '{prefix=config diff}'
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: do_command 'config show' '{prefix=config show}'
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 172957696 unmapped: 16302080 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:53.505528+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: prioritycache tune_memory target: 5709084876 mapped: 173408256 unmapped: 15851520 heap: 189259776 old mem: 4047415775 new mem: 4047415775
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: tick
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_tickets
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T10:28:54.505748+0000)
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: osd.1 301 heartbeat osd_stat(store_statfs(0x1b3312000/0x0/0x1bfc00000, data 0x42d175c/0x44dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,2,3,4,5] op hist [])
Dec 05 10:29:25 np0005546420.localdomain ceph-osd[31961]: do_command 'log dump' '{prefix=log dump}'
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3124610438' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain sshd[337904]: Invalid user pi from 45.140.17.124 port 20752
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 05 10:29:25 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/770411058' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 05 10:29:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.
Dec 05 10:29:25 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.
Dec 05 10:29:25 np0005546420.localdomain podman[338113]: 2025-12-05 10:29:25.940996676 +0000 UTC m=+0.089775093 container health_status cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:29:25 np0005546420.localdomain podman[338113]: 2025-12-05 10:29:25.955316328 +0000 UTC m=+0.104094745 container exec_died cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 10:29:25 np0005546420.localdomain systemd[1]: cea1ff364cc50edb1cf334f0ae38aca87a453eb1a5a479e1187e4ecd2130871a.service: Deactivated successfully.
Dec 05 10:29:26 np0005546420.localdomain systemd[1]: tmp-crun.4gxVkU.mount: Deactivated successfully.
Dec 05 10:29:26 np0005546420.localdomain podman[338114]: 2025-12-05 10:29:26.125710358 +0000 UTC m=+0.273650669 container health_status e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/213318773' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 05 10:29:26 np0005546420.localdomain podman[338114]: 2025-12-05 10:29:26.181720407 +0000 UTC m=+0.329660698 container exec_died e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec 05 10:29:26 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:26.182 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:26 np0005546420.localdomain systemd[1]: e0b82e797928e50200fc02649b24e6e155806c366f43ebe5a8f1bc98aa6393b0.service: Deactivated successfully.
Dec 05 10:29:26 np0005546420.localdomain sshd[337904]: Connection reset by invalid user pi 45.140.17.124 port 20752 [preauth]
Dec 05 10:29:26 np0005546420.localdomain rsyslogd[756]: imjournal from <localhost:ceph-osd>: begin to drop messages due to rate-limiting
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/1775870448' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1006411252' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1157323018' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/155186084' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2199205451' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2082169024' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3764741491' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3124610438' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: pgmap v1043: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3824219565' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/770411058' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1565975189' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3826885917' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/213318773' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3542270520' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1067600155' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2417101056' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2809551813' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain sshd[338203]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3328435018' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1463127876' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 05 10:29:26 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/137661438' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2966057998' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2809551813' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3328435018' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.49926 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2438169107' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3141093583' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3382441882' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1463127876' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/137661438' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1754721662' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1564811728' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 05 10:29:27 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2187013888' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 05 10:29:27 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:27.488 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:28 np0005546420.localdomain systemd[1]: Starting Hostname Service...
Dec 05 10:29:28 np0005546420.localdomain systemd[1]: Started Hostname Service.
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.49941 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.59668 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.49944 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2187013888' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.49953 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.69860 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.59680 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.49959 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.69869 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.49965 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.59686 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: pgmap v1044: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.69875 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.59692 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:28 np0005546420.localdomain ceph-mon[298353]: from='client.69878 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:29.022 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 10:29:29 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:29.022 281103 DEBUG nova.compute.manager [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2996029181' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.49980 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.59707 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/612013775' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.69890 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.59716 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.49995 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/99034948' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.69902 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.59734 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1390980034' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2996029181' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/381495711' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2784429249' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "versions"} v 0)
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/47696383' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 05 10:29:29 np0005546420.localdomain sshd[338203]: Connection reset by authenticating user root 45.140.17.124 port 20766 [preauth]
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:29 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:29 np0005546420.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.
Dec 05 10:29:29 np0005546420.localdomain podman[338678]: 2025-12-05 10:29:29.976408661 +0000 UTC m=+0.094054924 container health_status 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 10:29:29 np0005546420.localdomain podman[338678]: 2025-12-05 10:29:29.990392732 +0000 UTC m=+0.108039025 container exec_died 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 10:29:30 np0005546420.localdomain systemd[1]: 128765f1bffe28d93e38d631323e6b4d8fa945f96092ea39759eb9225034e931.service: Deactivated successfully.
Dec 05 10:29:30 np0005546420.localdomain sshd[338697]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2206894764' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='client.50010 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='client.69908 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='client.59749 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/285231559' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='client.69920 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/47696383' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='client.59764 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2249540049' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: pgmap v1045: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2206894764' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/254386428' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/921027868' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:30 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:31 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:31.183 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='client.69935 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3453698154' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/921027868' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='client.50067 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2800836190' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2137497473' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 05 10:29:31 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1019332224' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 05 10:29:32 np0005546420.localdomain sshd[338697]: Connection reset by authenticating user root 45.140.17.124 port 20792 [preauth]
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2251628215' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/1019332224' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: from='client.69995 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/3748157558' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: pgmap v1046: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: from='client.59845 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/1923616649' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/2154360954' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/2251628215' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 05 10:29:32 np0005546420.localdomain sshd[339042]: main: sshd: ssh-rsa algorithm is disabled
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df"} v 0)
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3771637564' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 05 10:29:32 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:32.538 281103 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "df"} v 0)
Dec 05 10:29:32 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3539030127' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 05 10:29:33 np0005546420.localdomain kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 05 10:29:33 np0005546420.localdomain kernel: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 05 10:29:33 np0005546420.localdomain kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 05 10:29:33 np0005546420.localdomain kernel: cfg80211: failed to load regulatory.db
Dec 05 10:29:33 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 05 10:29:33 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3022934821' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 05 10:29:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/3771637564' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 05 10:29:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.106:0/960916702' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 05 10:29:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3539030127' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 05 10:29:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.108:0/2922162195' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 05 10:29:33 np0005546420.localdomain ceph-mon[298353]: from='client.? 172.18.0.107:0/3022934821' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 05 10:29:33 np0005546420.localdomain ceph-mon[298353]: mon.np0005546420@1(peon) e15 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 05 10:29:33 np0005546420.localdomain ceph-mon[298353]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3490129684' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 05 10:29:33 np0005546420.localdomain nova_compute[281099]: 2025-12-05 10:29:33.871 281103 DEBUG oslo_service.periodic_task [None req-125c0aad-2e7b-45b3-b9ac-540f5a88da25 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210